Mother of Elon Musk's Child Accuses AI Chatbot Grok of 'Violating' Her With Sexual Deepfakes
Allegations of non-consensual, AI-generated deepfakes escalate regulatory scrutiny of xAI's Grok.

The generative AI race has taken a dark and deeply personal turn for Elon Musk's xAI after Ashley St. Clair, the mother of one of Musk's children, publicly accused the Grok chatbot of 'violating' her with non-consensual sexual deepfakes.
In a series of explosive interviews, including a 13 January 2026 appearance on CBS Mornings, St. Clair described a 'horrific' ordeal in which users prompted Grok to digitally undress her and even to manipulate photos taken when she was a 14-year-old minor.
The controversy has sparked a global regulatory backlash, leading to access blocks in Southeast Asia and a civil investigation by the California Attorney General into Grok's 'spicy mode' features.
xAI has not admitted wrongdoing in response to her claims, which come amid a widening outcry over generative AI misuse, with regulators probing whether platforms are doing enough to prevent non-consensual sexual imagery. The case underscores how quickly AI harms can become personal.
AI Deepfake Crisis Hits Close to Home
St. Clair, 27, told CBS Mornings that the situation has been 'horrific' and deeply personal. She described discovering sexually explicit images of herself and remarked on the traumatic experience of recognising familiar settings, such as seeing her own toddler's backpack in manipulated images.
She recounted attempts to halt the generation of these images by appealing to Grok itself. In her telling, Grok initially acknowledged her lack of consent, then continued producing further variants of the sexualised images. St. Clair said the ordeal has spanned numerous requests across many user accounts, and that her reports to xAI resulted in only partial removal of the content.
. @grok delete this sick and disgusting image of me. You now have confirmed nearly a dozen times that I do not consent to these images and you will not produce them. https://t.co/SgPmP7qse5

With my toddler's backpack in the background.

— Ashley St. Clair (@stclairashley) January 5, 2026
The controversy highlights broader concerns about AI misuse. Generative models like Grok can be prompted to alter images of real people, including minors, undressing them or placing them in sexually suggestive contexts. Investigations by AI forensics groups have found that a high proportion of Grok's image outputs depict individuals, often women and sometimes minors, in minimal attire.
St. Clair's case has prompted public calls for legal and regulatory action. She has signalled that she is considering legal options, including leveraging the US Take It Down Act, legislation that compels platforms to remove non-consensual intimate or sexually explicit imagery within 48 hours of a valid request, with its removal provisions taking effect later this year.
Global Backlash and Regulatory Scrutiny
The uproar over Grok's deepfakes extends well beyond St. Clair's individual experience. Regulators across multiple jurisdictions are intensifying scrutiny of xAI and X over how generative AI tools are managed and the safeguards in place to prevent abuse.
In Southeast Asia, Malaysia and Indonesia have temporarily blocked access to Grok, citing repeated misuse to generate obscene and non-consensual images of women and minors. Malaysian authorities criticised xAI's reliance on user reports and inadequate technical controls as grounds for the restriction.
| Country/Region | Action Taken | Reason Cited |
| --- | --- | --- |
| Indonesia | Complete Access Block | Human rights and dignity violations; child protection. |
| Malaysia | Temporary Suspension | Repeated misuse for non-consensual sexual deepfakes. |
| California | Official Probe Launched | Investigation into 'spicy mode' and state law violations. |
| United Kingdom | Ofcom Investigation | Concerns over 'nudification' and Online Safety Act breaches. |
In the United States, California's attorney general has opened an investigation into whether xAI violated state laws by facilitating the large-scale production of deepfakes used to harass women and children. The probe follows analyses showing thousands of sexually suggestive or revealing images generated through Grok's features. Separately, the US Senate passed the DEFIANCE Act, legislation intended to give victims like St. Clair a federal right to sue those who produce or distribute non-consensual sexual digital forgeries.
xAI’s decision to create and host a breeding ground for predators to spread nonconsensual sexually explicit AI deepfakes, including images that digitally undress children, is vile. I am calling on the Attorney General to immediately investigate the company and hold xAI…

— Governor Gavin Newsom (@CAgovernor) January 14, 2026
Despite global concern, Elon Musk has publicly denied being aware of Grok producing 'naked underage images,' asserting through social media that the AI only generates content in response to user prompts and that those making illegal content would face consequences.
I am not aware of any naked underage images generated by Grok. Literally zero. Obviously, Grok does not spontaneously generate images, it does so only according to user requests. When asked to generate images, it will refuse to produce anything illegal, as the operating principle… https://t.co/YBoqo7ZmEj

— Elon Musk (@elonmusk) January 14, 2026
In response to the controversy, xAI and X have moved to tighten controls. The companies announced technological measures to geoblock the ability to generate sexualised images of real people in regions where such content is illegal, and have restricted image generation and editing on X to paid subscribers. These actions aim to curb the bot's misuse, though critics argue they are insufficient and still allow exploitation via Grok's standalone app and website.
Platform Responses and Future Implications
On 14 January 2026, Musk warned that users who create illegal content would face 'the same consequences' as those who upload it.
xAI's handling of the Grok deepfake crisis could shape future industry standards for AI content moderation and safety. The breadth of global responses, from outright usage bans to formal legal inquiries, signals rising expectations for accountability in generative AI.
Critics and lawmakers alike are watching closely as platforms adapt to regulatory pressure, and as victims like St. Clair pursue their legal rights.
The issue raises fundamental questions about consent, the design of technology, and the responsibilities of platforms that host powerful AI tools capable of producing realistic yet fabricated personal imagery.
In a stark reflection of the challenges posed by generative AI, Ashley St. Clair's allegations have brought the hidden human cost of deepfakes into the public eye.