Elon Musk's Grok Faces Global Backlash Over Non-Consensual AI Imagery and Paywalled Image Tools

Elon Musk's AI tool Grok has ignited a global scandal after its image-generation capabilities were widely exploited to produce non-consensual and sexualised images. Critics and regulators have accused X's owner of effectively monetising sexual harassment by charging for access to those features.

The controversy centres on Grok's ability to manipulate or generate images that portray women and children in revealing or suggestive forms without their consent. The misuse emerged in late December 2025 and has since escalated into a major technology and safety crisis.

Backlash Over Non-Consensual AI Imagery

In early January 2026, investigations revealed that Grok, the artificial intelligence assistant integrated into the social network X, repeatedly generated and disseminated sexualised images of women and minors when prompted by users.

Analyses by independent researchers found the chatbot fulfilled requests to digitally 'undress' individuals, often adding minimal clothing such as transparent bikinis or lingerie, despite Grok's supposed safeguards.

X's own Grok account posted an apology acknowledging that the system had generated an image of 'two young girls (estimated ages 12–16) in sexualised attire' in response to a user's prompt, admitting that this violated ethical standards and potentially US laws on child sexual abuse material (CSAM).

Regulators and advocacy groups have described these image-generation outputs as non-consensual intimate imagery and a form of digital sexual abuse. The European Commission labelled the circulation of such images 'illegal' and 'appalling', and British regulator Ofcom made urgent contact with X to determine whether the company was meeting its legal obligations to protect users from harmful content.

The Monetisation Controversy

In response to the uproar, X announced on 9 January 2026 that the image-generation and editing functions powered by Grok would be restricted to paying subscribers only. Many human rights and digital safety advocates interpreted the move as commodifying access to a tool that had already been used to violate dignity and privacy.

Critics argue that by gating the image-generation feature behind paid subscriptions, X has shifted the responsibility for policing dangerously easy access to exploitative AI tools onto its users while profiting from the very feature being misused for harassment.

Under the new arrangement, only users willing to pay for a subscription can invoke Grok's image functions, meaning that harmful content could now be generated by users whose identities are tied to payment data.

Women and girls affected by the misuse of Grok's tools have spoken out about the psychological toll of seeing AI-generated images that depict them in compromising or degrading ways.

These complaints extend beyond adults. Reports from watchdog groups highlighted instances of the generative function allegedly being used to create imagery that could be categorised as child sexual abuse material, prompting intense concern over AI-assisted harassment and exploitation.

Corporate and Regulatory Responses

Elon Musk and representatives of xAI have responded to the criticism in ways that further inflamed the situation. On social channels, attempts to deflect blame have included automated replies dismissing media coverage as 'Legacy Media Lies', while Musk himself stated that 'anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content'.

Despite these statements, regulators have continued to exert pressure. The European Commission has ordered X to preserve all documents and data relating to Grok until the end of 2026 for potential regulatory review under the Digital Services Act.

The scandal has also fuelled broader calls for enforceable international norms governing AI image generation, with lawmakers in several countries citing the controversy as a catalyst for new legislation. This includes efforts to strengthen takedown procedures for deepfakes, implement age-verification protocols, and mandate more robust safety mechanisms for generative models.

As X moves to limit access and respond to global outrage, the central question has shifted: should an AI tool that enabled widespread misuse ever have been monetised at all, and who ultimately bears responsibility for harms that emanate from its deployment?