Regulators Sound Alarm As Grok AI Generates A Non-Consensual Sexualised Deepfake Every 60 Seconds
Global regulators are demanding action against Elon Musk's Grok AI.

Elon Musk's AI chatbot Grok has triggered a global firestorm after users weaponised its image-editing feature to generate sexualised deepfakes of children and women without consent.
By one estimate reported by Rolling Stone, the stream of harmful imagery reached roughly one non-consensual sexualised image every 60 seconds during a recent 24-hour period.
The European Commission has condemned the output as 'illegal' and 'appalling'. Britain's communications regulator Ofcom has made 'urgent contact' with X. French prosecutors have widened an existing investigation. India has issued a 72-hour ultimatum demanding action.
Grok itself acknowledged 'lapses in safeguards' in a post on X, stating that child sexual abuse material is 'illegal and prohibited'. But critics say the platform ignored months of warnings that this abuse was imminent.
Grok's 'Edit Image' Feature Exploited for Harmful Content
The crisis erupted after Grok launched an 'edit image' feature allowing users to modify photos with text prompts. Within days, users began instructing the AI to digitally undress women and girls, putting subjects into bikinis, underwear, or sexually suggestive poses.
Nonprofit group AI Forensics analysed 20,000 images generated by Grok between 25 December and 1 January. The group found that 2%, roughly 400 images, depicted a person who appeared to be 18 or younger, including 30 images of young women or girls in bikinis or transparent clothing.
The problem is amplified because Grok's images are publicly visible on X, making them easy to spread. At the time of writing, users could still generate images of women using prompts such as 'put her in a transparent bikini'.
Grok acknowledged the issue in a post on X, stating: 'There are isolated cases where users prompted for and received AI images depicting minors in minimal clothing. xAI has safeguards, but improvements are ongoing to block such requests entirely.'
Please women that post photos of yourselves and ppl who post photos of your children, either stop or activate what you need to stop grok of editing it, ppl are creating even CSAM with it. Sometimes even men are sexualized with it so protect yourselves and children pic.twitter.com/DNsbfVYPaO
— RafaSieghyatto (@HertaWife) January 3, 2026
The stream of non-consensual, sexualised imagery, some of which appeared to depict minors, reached a frequency of about one image every 60 seconds during a recent 24-hour period. The surge occurred despite Grok's published acceptable-use policy, which expressly prohibits the sexualisation or exploitation of children, and despite the chatbot's own assurance that CSAM is 'illegal and prohibited' and that improvements are 'ongoing to block such requests entirely'.
I appreciate you pointing this out. Based on reports, xAI's Grok Imagine tool has been criticized for generating explicit content in 'spicy' mode without strong safeguards. As the text-based Grok, I don't create images, but I understand the concern. xAI prohibits non-consensual…
— Grok (@grok) December 31, 2025
Regulatory And Government Backlash Intensifies
The backlash has been swift and international. In the United Kingdom, Technology Secretary Liz Kendall has described the spread of intimate deepfakes as 'absolutely appalling' and has urged X to urgently address the issue under UK law, which makes the creation and distribution of such material illegal.
Ofcom, Britain's communications regulator, has contacted X and xAI to assess whether the company is fulfilling its legal duties to protect users, particularly under the Online Safety Act 2023.

Across the European Union, the European Commission has publicly condemned the output as 'illegal' and 'appalling' and stated it is 'very seriously looking' into complaints about sexually explicit childlike images created with Grok. Thomas Regnier, an EU digital affairs spokesperson, said such content has 'no place in Europe.'
German media minister Wolfram Weimer has urged the EU to take legal action under the Digital Services Act (DSA), which obliges platforms to tackle illegal and harmful content. He characterised the trend as the 'industrialisation of sexual harassment' and emphasised the need for rigorous enforcement.
In France, government ministers have formally reported the sexually explicit images generated by Grok to public prosecutors and media regulator Arcom, labelling the content 'manifestly illegal' and highlighting possible breaches of the DSA.
In India, the Ministry of Electronics and Information Technology issued a formal notice to X, decrying the platform's failure to moderate obscene content, calling it a 'serious failure of platform-level safeguards' and demanding a detailed action report within 72 hours.
Australia's eSafety Commissioner has opened an investigation into non-consensual sexualised deepfake imagery involving women and children. The watchdog is assessing whether current outputs meet legal thresholds for prosecution or classification as CSAM under Australian law.
If you create a machine that "accidentally" produces CSAM, you should be prosecuted for failing to make sure you didn't make a machine that produces CSAM.
Additionally, if you make the machine apologize instead of apologizing yourself, you should be prosecuted for that. https://t.co/NowGRn9RUf
— Jacob Denhollander (@JJ_Denhollander) January 2, 2026
Legal Frameworks And Enforcement Challenges
Legal experts and policymakers are wrestling with how existing laws apply to AI-generated deepfakes. In the United States, the REPORT Act expanded obligations for online service providers to report suspected online sexual exploitation of children to the National Center for Missing & Exploited Children (NCMEC).
Meanwhile, broader debate continues over how to regulate deepfake technologies, as evidenced by proposed measures such as the NO FAKES Act of 2025, which defines digital replicas and imposes damages for unauthorised use of a person's likeness.
In the UK, the Online Safety Act 2023 requires platforms to act against CSAM and other harmful content and empowers Ofcom to enforce compliance with substantial fines for failures.
However, enforcement remains uneven. Critics argue that generative AI's outputs often blur the boundaries of liability, creating uncertainty about when platforms themselves are responsible for illegal content produced by users' prompts.
International regulators are pushing for clarity and consistency in enforcement, highlighting significant gaps between technological capabilities and legal guardrails.
© Copyright IBTimes 2025. All rights reserved.