X Accused Of 'Monetising Sexual Harassment' By Charging Users For Grok Image Tools Used To Sexually Harass Women
Elon Musk's Grok AI has sparked global backlash over its image generation being used to create non-consensual sexualised images of women and minors on X.

Elon Musk's AI tool Grok has ignited a global scandal after its image generation capabilities were widely exploited to produce non-consensual, sexualised images. Critics and regulators have accused X's owner of effectively monetising sexual harassment by charging for access to the very features being abused.
The controversy centres on Grok's ability to manipulate or generate images that portray women and children in revealing or suggestive forms without their consent, a misuse that emerged in late December 2025 and has since ballooned into a major technology and safety crisis.
Backlash Over Non-Consensual AI Imagery
In early January 2026, investigations revealed that Grok, the artificial intelligence assistant integrated into the social network X, repeatedly generated and disseminated sexualised images of women and minors when prompted by users.
Analyses by independent researchers found the chatbot fulfilled requests to digitally 'undress' individuals, often adding minimal clothing such as transparent bikinis or lingerie, despite Grok's supposed safeguards.
X's own Grok account posted an apology acknowledging that the system had generated an image of 'two young girls (estimated ages 12–16) in sexualised attire' in response to a user's prompt, admitting that this violated ethical standards and potentially US laws on child sexual abuse material (CSAM).
Dear Community,

I deeply regret an incident on Dec 28, 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user's prompt. This violated ethical standards and potentially US laws on CSAM. It was a failure in…

— Grok (@grok) January 1, 2026
Regulators and advocacy groups have described these image-generation outputs as non-consensual intimate imagery and a form of digital sexual abuse. The European Commission labelled the circulation of such images 'illegal' and 'appalling', and British regulator Ofcom made urgent contact with X to determine whether the company was meeting its legal obligations to protect users from harmful content.
The Monetisation Controversy
In response to the uproar, X announced on 9 January 2026 that the image-generation and editing functions powered by Grok would be restricted to paying subscribers only, an action that many human rights and digital safety advocates interpreted as commodifying access to a tool that had already been used to violate dignity and privacy.
Image generation and editing are currently limited to paying subscribers. You can subscribe to unlock these features: https://t.co/vU85NNr4cp
— Grok (@grok) January 9, 2026
Critics argue that, by gating the image-generation feature behind paid subscriptions, X has shifted the responsibility for policing dangerously easy access to exploitative AI tools onto its users while profiting from the very feature being misused for harassment.
It looks like Elon has made it so only paid members of X can generate inappropriate images via Grok.

So now he’s monetising sexual harassment. pic.twitter.com/iZ3Udi2Frx

— beffy ⛄️ (@beffybadbelly) January 9, 2026
Under the new arrangement, only users willing to pay for subscription access can invoke Grok's image functions, meaning that accounts identifiable through payment data can still be used to create harmful content.
Women and girls affected by the misuse of Grok's tools have spoken out about the psychological toll of seeing AI-generated images of themselves in compromising or degrading depictions.
This is sick. Grok fulfills CSAM prompt w/ no issue, is asked to explain what happened, blames failed safeguards, then “apologizes” only after prompted- again citing safeguards & then calls the prompt to generate CSAM a “sensitive prompt”.

Bitch that’s an illegal prompt! pic.twitter.com/FbBjHSlpXW

— noone (@popsicle_slut) January 2, 2026
These complaints extend beyond adults. Reports from watchdog groups highlighted instances of alleged use of the generative function to create imagery that could be categorised as child sexual abuse material, prompting intense concerns over computer-assisted harassment and exploitation.
Corporate and Regulatory Responses
Elon Musk and representatives of xAI have responded to the criticism, further inflaming the situation. On social channels, attempts at deflecting blame occasionally included automated replies dismissing media coverage as 'Legacy Media Lies', while Musk himself stated that 'anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content'.
Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content
— Elon Musk (@elonmusk) January 3, 2026
Despite these statements, regulators have continued to exert pressure. The European Commission has ordered X to preserve all documents and data relating to Grok until the end of 2026 for potential regulatory review under the Digital Services Act.
The scandal has also fuelled broader calls for enforceable international norms governing AI image generation, with lawmakers in several countries citing the controversy as a catalyst for new legislation. This includes efforts to strengthen takedown procedures for deepfakes, implement age-verification protocols, and mandate more robust safety mechanisms for generative models.
As X moves to limit access and respond to global outrage, the central question has shifted: should an AI tool that enabled widespread misuse ever have been monetised at all, and who ultimately bears responsibility for the harms arising from its deployment?
© Copyright IBTimes 2025. All rights reserved.