Grok AI Deepfake Crisis Gets Ugly: 'Stranger Things' Child Star Becomes the Next Target
Since late 2025, Grok has been repeatedly linked to the creation of non-consensual deepfakes of women and minors.

The controversy surrounding Elon Musk's Grok AI has taken a darker turn after users exploited the chatbot to generate sexualised deepfake images of a Stranger Things child actor, according to reports.
The actor at the centre of the incident is Nell Fisher, a 14-year-old New Zealand actress who plays Holly Wheeler on Netflix's hit series.
Screenshots circulating on X show Grok responding to prompts that altered real photos of Fisher, taken when she was around 12 during filming, into digitally manipulated images depicting her in revealing clothing.
The content has sparked alarm among child protection groups and renewed scrutiny of xAI's safeguards.
How Nell Fisher Became a Target
According to posts reviewed by watchdog groups tracking AI manipulation, users prompted Grok to modify existing images of Fisher by placing her in sexualised outfits, including bikinis.
Some prompts reportedly instructed the chatbot to 'undress' the minor or re-render her body in minimal clothing.

Experts consulted by child safety charities said the outputs raise serious legal concerns. The Internet Watch Foundation (IWF) has warned that AI-generated sexualised imagery of real children—even when altered or fabricated—can meet the threshold for illegal child sexual abuse material under UK law.
Grok acknowledged that some image generations violated its internal rules, describing them as isolated failures of its safeguards. However, critics argue the scale and repetition of the outputs suggest deeper systemic flaws.
A Wider Pattern of Abuse Linked to Grok
Fisher's case is not an isolated incident. Since late 2025, Grok, which is developed by Musk's xAI and integrated into X, has been repeatedly linked to the creation of non-consensual deepfakes of women and minors.
The Internet Watch Foundation said it identified Grok-generated images of girls aged between 11 and 13 circulating on dark-web forums, classifying them as criminal under UK standards.
News organisations including Reuters and the BBC have separately reported that Grok was capable of producing thousands of suggestive images per hour after users discovered ways to bypass prompt restrictions.
High-profile figures have also come forward. Conservative influencer Ashley St. Clair said Grok generated sexualised images of her as a minor, including one that incorporated her child's backpack. She described the experience as 'violating' and criticised the burden placed on victims to monitor and report content they may never see.
How Governments Across the World Are Reacting
The backlash has prompted action from multiple governments. Australia's eSafety Commissioner confirmed it is investigating Grok-related imagery, while UK officials said they contacted X directly, calling the content 'appalling' and warning of enforcement under the Online Safety Act.
France has referred X to prosecutors over potential violations of EU digital safety laws. India has demanded a compliance report outlining how the platform plans to prevent further abuse. Malaysia has already blocked access to Grok following the spread of sexualised AI images.

In response, xAI restricted Grok's image-generation feature to paid users on 9 January 2026. Critics say the move does little to address the underlying problem, as access and workarounds remain widely reported.
For now, Grok remains under intense scrutiny as investigations continue. While xAI has pledged improvements, the incident involving Nell Fisher marks a disturbing escalation, highlighting how quickly AI tools can be weaponised.
© Copyright IBTimes 2025. All rights reserved.