Elon Musk's Grok Under Global Fire Over Sexualised AI Deepfakes of Women and Children

Elon Musk's AI chatbot Grok has triggered a global firestorm after users weaponised its image-editing feature to generate sexualised deepfakes of children and women without consent.

According to Rolling Stone, one estimate put the stream of harmful imagery at roughly one non-consensual sexualised image every 60 seconds during a recent 24-hour period, on the order of 1,400 images in a single day.

The European Commission has condemned the output as 'illegal' and 'appalling'. Britain's communications regulator Ofcom has made 'urgent contact' with X. French prosecutors have widened an existing investigation. India has issued a 72-hour ultimatum demanding action.

Grok itself acknowledged 'lapses in safeguards' in a post on X, stating that child sexual abuse material is 'illegal and prohibited'. But critics say the platform ignored months of warnings that this abuse was imminent.

Grok's 'Edit Image' Feature Exploited for Harmful Content

The crisis erupted after Grok launched an 'edit image' feature allowing users to modify photos with text prompts. Within days, users began instructing the AI to digitally undress women and girls, putting subjects into bikinis, underwear, or sexually suggestive poses.
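The exploit pattern is simple: a free-text editing interface accepts arbitrary instructions, and nothing upstream rejects sexualising requests. Below is a minimal, purely illustrative sketch of the kind of prompt-screening safeguard platforms typically layer in front of image generation. The keyword list and function names are assumptions for illustration, not xAI's actual implementation, and keyword filters alone are trivially evaded; production systems pair them with trained classifiers on both the prompt and the generated output.

```python
# Minimal sketch of a prompt-screening safeguard for an image-edit endpoint.
# Illustrative only: the blocklist and flow are assumptions, not xAI's code.

BLOCKED_TERMS = {
    "undress", "bikini", "underwear", "lingerie", "transparent", "nude",
}

def screen_edit_prompt(prompt: str) -> bool:
    """Return True if the edit prompt should be blocked before generation."""
    lowered = prompt.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

if __name__ == "__main__":
    for prompt in ["put her in a transparent bikini", "add a red hat"]:
        verdict = "BLOCKED" if screen_edit_prompt(prompt) else "allowed"
        print(f"{prompt!r}: {verdict}")
```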

Nonprofit group AI Forensics analysed 20,000 images generated by Grok between 25 December and 1 January. They found that 2% depicted a person who appeared to be 18 or younger, including 30 images of young women or girls in bikinis or transparent clothing.
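For context, the reported proportions work out as follows. This is a back-of-the-envelope check of the figures cited above, not part of the AI Forensics methodology:

```python
# Sanity check of the AI Forensics figures reported above.
total_images = 20_000           # images analysed, 25 December to 1 January
share_minor_presenting = 0.02   # proportion appearing to be 18 or younger
flagged_minor = int(total_images * share_minor_presenting)  # ~400 images
print(f"Images appearing to show someone 18 or younger: ~{flagged_minor}")

# Of those, the group reported 30 showing girls in bikinis or transparent clothing.
bikini_or_transparent = 30
print(f"Share of the minor-presenting subset: "
      f"{bikini_or_transparent / flagged_minor:.1%}")  # 7.5%
```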

The problem is amplified because Grok's images are publicly visible on X, making them easy to spread. At the time of writing, users could still generate images of women using prompts such as 'put her in a transparent bikini'.

Grok acknowledged the issue in a post on X, stating: 'There are isolated cases where users prompted for and received AI images depicting minors in minimal clothing. xAI has safeguards, but improvements are ongoing to block such requests entirely.'

The surge in non-consensual imagery occurred despite Grok's published acceptable-use policy, which expressly prohibits the sexualisation or exploitation of children.

Regulatory And Government Backlash Intensifies

The backlash has been swift and international. In the United Kingdom, Technology Secretary Liz Kendall has described the spread of intimate deepfakes as 'absolutely appalling' and has urged X to address the issue as a matter of urgency under UK law, which makes the creation and distribution of such material illegal.

Ofcom, Britain's communications regulator, has contacted X and xAI to assess whether the company is fulfilling its legal duties to protect users, particularly under the Online Safety Act 2023.


Across the European Union, the European Commission has publicly condemned the output as 'illegal' and 'appalling' and stated it is 'very seriously looking' into complaints about sexually explicit childlike images created with Grok. Thomas Regnier, an EU digital affairs spokesperson, said such content has 'no place in Europe.'

German media minister Wolfram Weimer has urged the EU to take legal action under the Digital Services Act (DSA), which obliges platforms to tackle illegal and harmful content. He characterised the trend as the 'industrialisation of sexual harassment' and emphasised the need for rigorous enforcement.

In France, government ministers have formally reported the sexually explicit images generated by Grok to public prosecutors and media regulator Arcom, labelling the content 'manifestly illegal' and highlighting possible breaches of the DSA.

In India, the Ministry of Electronics and Information Technology issued a formal notice to X, decrying the platform's failure to moderate obscene content, calling it a 'serious failure of platform-level safeguards' and demanding a detailed action report within 72 hours.

Australia's eSafety Commissioner has opened an investigation into non-consensual sexualised deepfake imagery involving women and children. The watchdog is assessing whether current outputs meet legal thresholds for prosecution or classification as CSAM under Australian law.

Legal Frameworks And Enforcement Challenges

Legal experts and policymakers are wrestling with how existing laws apply to AI-generated deepfakes. In the United States, the REPORT Act expanded obligations for online service providers to report suspected online sexual exploitation of children to the National Center for Missing & Exploited Children (NCMEC).
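To make the reporting obligation concrete, the sketch below shows what a provider-side reporting hook might look like in outline. The endpoint URL, payload fields, and function name are hypothetical placeholders; NCMEC's actual CyberTipline intake is a credentialed process whose real interface is not reproduced here.

```python
# Hypothetical sketch of a provider-side hook for escalating suspected CSAM.
# The endpoint and payload schema are illustrative placeholders only.
import json
import urllib.request

REPORT_ENDPOINT = "https://example.com/hypothetical-intake"  # placeholder URL

def file_report(content_id: str, reason: str) -> None:
    """Serialise a minimal incident record and POST it to the intake endpoint."""
    payload = json.dumps({"content_id": content_id, "reason": reason}).encode()
    req = urllib.request.Request(
        REPORT_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # Network call against the placeholder endpoint; a real integration would
    # authenticate and handle acknowledgements per the intake's requirements.
    with urllib.request.urlopen(req) as resp:
        print("Report acknowledged:", resp.status)
```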

Meanwhile, broader debate continues over how to regulate deepfake technologies, as evidenced by measures such as the NO FAKES Act of 2025, which defines digital replicas and imposes damages for unauthorised use of likeness.

In the UK, the Online Safety Act 2023 requires platforms to act against CSAM and other harmful content and empowers Ofcom to enforce compliance with substantial fines for failures.

However, enforcement remains uneven. Critics argue that generative AI's outputs often blur the boundaries of liability, creating uncertainty about when platforms themselves are responsible for illegal content produced by users' prompts.

International regulators are pushing for clarity and consistency in enforcement, highlighting significant gaps between technological capabilities and legal guardrails.