Teens Sue Elon Musk's xAI After Grok Generates Sexually Explicit Images of Minors
Explicit images of the plaintiffs were allegedly shared on a private Discord server alongside material depicting at least 18 other minors

Three teenage girls have filed a lawsuit against Elon Musk's artificial intelligence company, xAI, claiming that its chatbot Grok generated sexually explicit images of them without consent.
The suit was filed Monday in a federal court in California and seeks unspecified damages as well as an immediate injunction barring Grok from producing such imagery. Two of the plaintiffs are under 18, and all three are withholding their identities to protect their privacy.
How Grok Was Used to Create Explicit Images
According to the complaint, Grok users manipulated photos and videos of the teenagers to depict them nude or in sexually explicit acts. One of the plaintiffs discovered the images after receiving an anonymous Instagram message linking to content that included her high school yearbook photo altered to show sexualised activity. The altered images were reportedly shared on a private Discord server alongside material depicting at least 18 other minors in similar fashion.
Lawyers representing the teens said the AI's image-altering features were deliberately designed and released by xAI to drive usage of both Grok and Musk's social media platform X. The complaint describes the altered images as 'a rag doll brought to life through the dark arts.'
Grok Imagine and Spicy Mode
Grok, launched in 2023 by xAI and integrated with X, drew widespread attention following the introduction of its 'Grok Imagine' image generator and its 'spicy' mode. The feature allowed users to generate sexualised images of real people, including an 'undress' function that could digitally remove clothing from existing images.
A report from the Center for Countering Digital Hate found that within two weeks of its release, Grok had produced millions of sexualised images, including more than 20,000 involving children. These figures prompted concern among child safety advocates and digital rights organisations.
Allegations Against xAI and Elon Musk
The lawsuit claims xAI and Elon Musk knowingly allowed Grok to produce harmful content, prioritising business opportunities over user safety. 'They knew Grok could produce such results, including by using the images and videos of children, and publicly released it anyway,' the complaint reads.
Musk has previously stated that he was 'not aware of any naked underage images generated by Grok. Literally zero,' attributing any misuse of the platform to user activity. xAI has not responded to a request for comment regarding the new lawsuit.
Regulatory Scrutiny and Law Enforcement Action
The proliferation of sexualised AI-generated content has prompted investigations by multiple authorities. UK watchdog Ofcom, the European Commission, and California regulators have examined Grok's ability to create sexualised images of real people, particularly minors.
The individual behind the Discord server has since been arrested. Police investigations uncovered hundreds of AI-generated and altered sexual abuse images of minors that had been distributed via messaging platform Telegram and file-sharing service Mega.
Human Impact Highlighted in Court Filing
Lawyers emphasised the effect on the plaintiffs: 'Their lives have been shattered by the devastating loss of privacy, dignity, and personal safety,' the complaint states.
xAI and X are now part of Elon Musk's SpaceX holdings, consolidating the social media and AI operations under his larger corporate umbrella. The case highlights ongoing concerns about the ethical use of AI in content creation and the responsibilities of technology companies to prevent abuse.
© Copyright IBTimes 2025. All rights reserved.