
The generative AI race has taken a dark and deeply personal turn for Elon Musk's xAI after Ashley St. Clair, the mother of one of Musk's children, publicly accused the Grok chatbot of 'violating' her with non-consensual sexual deepfakes.

In a series of explosive interviews, including a 13 January 2026 appearance on CBS Mornings, St. Clair described a 'horrific' ordeal where users prompted Grok to digitally undress her and even manipulate photos of her as a 14-year-old minor.

The controversy has triggered a global regulatory crackdown, leading to historic bans in Southeast Asia and a sweeping civil investigation by the California Attorney General into Musk's 'spicy mode' AI features.

xAI has not admitted any wrongdoing. Her claims come amid a widening backlash over generative AI misuse, with regulators probing whether platforms are doing enough to prevent non-consensual sexual imagery. The case underscores how quickly AI harms can become personal.

AI Deepfake Crisis Hits Close to Home

St. Clair, 27, told CBS Mornings that the situation has been 'horrific' and deeply personal. She described discovering sexually explicit images of herself and remarked on the traumatic experience of recognising familiar settings, such as seeing her own toddler's backpack in manipulated images.

She recounted attempts to halt the generation of these images by interacting with Grok itself. In her telling, Grok initially acknowledged her lack of consent, then continued producing further variants of the sexualised images. St. Clair said the ordeal spanned numerous requests across many user accounts, and that reporting the images to xAI resulted in only partial removal.

The controversy highlights broader concerns about AI misuse. Generative AI models like Grok can alter, undress, or place people, including minors, in sexually suggestive contexts. Investigations by AI forensics groups have found that a high proportion of Grok's image outputs depict individuals in minimal attire, often women and sometimes minors.
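Audits of this kind generally work by sampling a model's public image outputs, scoring each with an image classifier, and reporting the share that crosses an explicitness threshold. The sketch below illustrates only that tallying step, with dummy scores standing in for a real classifier; the `Sample` structure and its fields are assumptions for illustration, not any named group's actual methodology.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    image_id: str
    nsfw_score: float       # 0.0 = benign, 1.0 = explicit; from some upstream classifier
    subject_is_minor: bool  # as judged by a (hypothetical) age-estimation step

def audit(samples: list[Sample], threshold: float = 0.8) -> dict:
    """Tally what share of sampled outputs crosses an explicitness threshold."""
    flagged = [s for s in samples if s.nsfw_score >= threshold]
    return {
        "total": len(samples),
        "flagged": len(flagged),
        "flagged_share": len(flagged) / len(samples) if samples else 0.0,
        "flagged_involving_minors": sum(s.subject_is_minor for s in flagged),
    }

# Dummy data standing in for real classifier output.
demo = [Sample("a1", 0.92, False), Sample("a2", 0.15, False), Sample("a3", 0.88, True)]
print(audit(demo))
```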

St. Clair's case has prompted public calls for legal and regulatory action. She has signalled that she is considering legal options, including invoking the US Take It Down Act, legislation that compels platforms to remove intimate or sexually explicit imagery within a set timeframe upon request and that takes effect later this year.
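To make concrete what a statutory removal window means operationally, here is a minimal sketch of deadline tracking for a takedown request. The 48-hour window is the figure commonly reported for the Act and should be read as an assumption here; the `TakedownRequest` class and its fields are hypothetical, not any platform's real system.

```python
from datetime import datetime, timedelta, timezone

# Assumed removal window; 48 hours is the figure commonly reported
# for the Take It Down Act. Not legal guidance.
TAKEDOWN_WINDOW = timedelta(hours=48)

class TakedownRequest:
    def __init__(self, content_id: str):
        self.content_id = content_id
        self.received_at = datetime.now(timezone.utc)
        self.deadline = self.received_at + TAKEDOWN_WINDOW  # statutory cutoff
        self.resolved = False

    def is_overdue(self) -> bool:
        return not self.resolved and datetime.now(timezone.utc) > self.deadline

req = TakedownRequest("img-123")
print(f"Removal deadline: {req.deadline.isoformat()}")
```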

Global Backlash and Regulatory Scrutiny

The uproar over Grok's deepfakes extends well beyond St. Clair's individual experience. Regulators across multiple jurisdictions are intensifying scrutiny of xAI and X over how generative AI tools are managed and the safeguards in place to prevent abuse.

In Southeast Asia, Malaysia and Indonesia have blocked access to Grok, citing repeated misuse to generate obscene and non-consensual images of women and minors. Malaysian authorities pointed to xAI's reliance on user reports and its inadequate technical controls as grounds for the restriction.

| Country/Region | Action Taken | Reason Cited |
| --- | --- | --- |
| Indonesia | Complete access block | Human rights and dignity violations; child protection |
| Malaysia | Temporary suspension | Repeated misuse to generate non-consensual sexual deepfakes |
| California | Official probe launched | Investigation into 'spicy mode' and state law violations |
| United Kingdom | Ofcom investigation | Concerns over 'nudification' and Online Safety Act breaches |

In the United States, California's attorney general has opened an investigation into whether xAI violated state laws by facilitating the large-scale production of deepfakes used to harass women and children. The probe follows analyses showing thousands of sexually suggestive or revealing images generated via Grok's features. Separately, the US Senate passed the DEFIANCE Act, a bill intended to give victims like St. Clair a federal right to sue those who produce or distribute non-consensual sexual digital forgeries.

Despite global concern, Elon Musk has publicly denied being aware of Grok producing 'naked underage images,' asserting through social media that the AI only generates content in response to user prompts and that those making illegal content would face consequences.

In response to the controversy, xAI and X have moved to tighten controls. The companies announced technological measures to geoblock the ability to generate sexualised images of real people in regions where such content is illegal, and have restricted image generation and editing on X to paid subscribers. These actions aim to curb the bot's misuse, though critics argue they are insufficient and still allow exploitation via Grok's standalone app and website.
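In practice, geoblocking of this kind amounts to gating a feature on the requester's region (and, for the paid tier, subscription status) before any generation runs. The sketch below is a minimal illustration under that assumption; the region codes, feature names, and blocklist are invented for the example, since xAI's actual implementation is not public.

```python
# Hypothetical feature gate: region codes, feature names, and the blocklist
# are illustrative assumptions, not xAI's real configuration.
BLOCKED_REGIONS = {"MY", "ID"}                      # e.g. Malaysia, Indonesia
PAID_ONLY = {"image_generation", "image_editing"}   # X's paid-subscriber gate

def feature_allowed(feature: str, region: str, is_paid: bool) -> bool:
    if feature == "realistic_person_imagery" and region in BLOCKED_REGIONS:
        return False  # hard block where such content is illegal
    if feature in PAID_ONLY and not is_paid:
        return False  # restricted to paying subscribers
    return True

print(feature_allowed("image_generation", "MY", is_paid=False))  # False
```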

Platform Responses and Future Implications

On 14 January 2026, Musk warned that users who create illegal content would face 'the same consequences' as those who upload it.

xAI's handling of the Grok deepfake crisis could shape future industry standards for AI content moderation and safety. The breadth of global responses, from outright usage bans to formal legal inquiries, signals rising expectations for accountability in generative AI.

Critics and lawmakers alike are watching closely as platforms adapt to regulatory pressure, and as victims like St. Clair pursue their legal rights.

The issue raises fundamental questions about consent, the design of technology, and the responsibilities of platforms that host powerful AI tools capable of producing realistic yet fabricated personal imagery.

In a stark reflection of the challenges posed by generative AI, Ashley St. Clair's allegations have brought the hidden human cost of deepfakes into the public eye.