ChatGPT's AI Caricature Trend Is Secretly Fueling Cybercrime — Your Selfies Could Be the Target
Cybersecurity experts warn against sharing personal info through AI caricature trends

The growing popularity of ChatGPT's AI caricature trend could be putting users at unexpected risk, cybersecurity experts warn. People who upload selfies and personal details to generate caricatures may be inadvertently providing material for cybercriminals to exploit. Images that seem harmless could be used to create fake social media accounts or AI-generated deepfakes, potentially exposing users to targeted scams.
Bob Long, vice-president at age verification company Daon, said: 'You are doing fraudsters' work for them by giving them a visual representation of who you are.' Charlotte Wilson, head of enterprise at Check Point, added that selfies help criminals move from generic scams to high-conviction impersonation, increasing the effectiveness of fraud attempts.
How ChatGPT Processes User Images
The AI caricature trend asks users to upload a photo, sometimes alongside details such as job titles or employer names, so that ChatGPT can generate a personalised caricature. Experts have raised concerns about how these images are processed and stored.
Cybersecurity consultant Jake Moore explained that AI systems extract data from uploaded images that could include emotional cues, environmental details, and other information that might reveal a user's location. According to OpenAI's EU privacy policy, the company collects personal data including images users upload and may use this information to improve and develop its services, though users can control how their data is used through account settings.
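Beyond what a model can infer from the pixels themselves, an ordinary photo file often carries EXIF metadata, including GPS coordinates written automatically by a phone's camera. The sketch below, a minimal illustration assuming Python with the Pillow library and a hypothetical file name selfie.jpg, shows how easily those coordinates can be read from an uploaded image.

```python
# A minimal sketch, assuming Python 3 with the Pillow library installed.
# Reads any GPS coordinates embedded in a photo's EXIF metadata -- one
# concrete way an innocuous-looking selfie can reveal a user's location.
from PIL import Image
from PIL.ExifTags import GPSTAGS

def read_gps(path: str) -> dict:
    """Return the GPS tags embedded in the image's EXIF data, if any."""
    exif = Image.open(path).getexif()
    gps_ifd = exif.get_ifd(34853)  # 34853 (0x8825) is the GPSInfo tag
    return {GPSTAGS.get(tag, tag): value for tag, value in gps_ifd.items()}

# "selfie.jpg" is a placeholder file name for illustration.
print(read_gps("selfie.jpg"))
# e.g. {'GPSLatitudeRef': 'N', 'GPSLatitude': (51.0, 30.0, 26.0), ...}
```

This metadata travels with the file whether or not the receiving service ever analyses the image itself.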
Such data may be retained for an unspecified period and could be incorporated into AI training datasets.
OpenAI, the company behind ChatGPT, states that uploaded images are used to improve the system. The company clarified that this does not mean every image is placed in a public database. However, a breach of OpenAI's systems could still give bad actors access to sensitive data.
The Risks of Oversharing Personal Information
Experts warn that even seemingly minor details can put users at risk. High-resolution images can be exploited to create realistic fake accounts or deepfake content, which can then be used in targeted scams. Background clues such as badges, uniforms, or logos increase the likelihood of an image being misused.
Wilson highlighted that selfies are no longer just entertainment, explaining that personalised attacks are far more convincing than generic fraud attempts. Long also noted that the way the trend is framed makes it easier for criminals to gather material, describing the wording as if it were designed to simplify fraudsters' work.
Tips for Safe Participation
For those still wishing to participate in AI caricature trends, experts recommend taking precautions to limit exposure. Users should crop images tightly, keep backgrounds plain, and avoid including anything that can identify their employer or location. Personal details such as a job title, city, or company name should not be shared in prompts.
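Although the experts quoted here focus on what is visible in the frame, stripping a photo's file metadata before uploading is a complementary precaution. The sketch below, again a minimal illustration assuming Python with the Pillow library and placeholder file names, copies only the pixel data into a new file so that EXIF fields such as GPS coordinates and device details are left behind.

```python
# A minimal sketch, assuming Python 3 with the Pillow library installed.
# Re-saves a photo without its EXIF metadata by copying only the pixels.
from PIL import Image

def strip_metadata(src: str, dst: str) -> None:
    """Write a copy of the image that carries no EXIF metadata."""
    with Image.open(src) as img:
        clean = Image.new(img.mode, img.size)  # fresh image, no metadata
        clean.putdata(list(img.getdata()))     # pixel data only
        clean.save(dst)

# File names are placeholders for illustration.
strip_metadata("selfie.jpg", "selfie_clean.jpg")
```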
OpenAI offers a privacy portal where users can opt out of AI training by selecting the 'do not train on my content' option. Text conversations with ChatGPT can likewise be excluded from model training through the account's data controls. Under EU law, users can request deletion of personal data, although some information may be retained for security and fraud prevention purposes.
Context and Popularity of AI Caricature Trends
AI caricature trends have become increasingly popular on social media, combining entertainment with new artificial intelligence tools. While they appear harmless, cybersecurity professionals caution that these trends carry hidden risks.
Uploaded selfies and personal information can provide fraudsters with enough material to conduct high-conviction impersonation and scams, illustrating the need for careful participation.
© Copyright IBTimes 2025. All rights reserved.