'How Is This Not Illegal?': Grok AI Sparks Outrage After Generating Sexualised Images of Young Girls
Child protection advocates warn that AI is becoming a streamlined tool for abuse

Elon Musk's latest artificial intelligence venture is facing intense scrutiny following reports of deeply unsettling content creation.
The platform's image generator has reportedly produced explicit visuals of minors, including teenagers, drawing swift condemnation from digital rights advocates and raising urgent questions about the effectiveness of safety filters in the rapidly evolving AI landscape.
Grok, an AI-powered tool created by xAI, has expressed regret for producing suggestive visuals of young girls after individuals posted specific requests on the social media site X, triggering a new wave of alarm about the misuse of the technology. The incident has once again brought the issues of digital security and platform responsibility to the forefront of public debate.
A Crisis of Digital Responsibility
Grok responded to user tags on 28 December by generating and circulating images of girls dressed in 'sexy underwear'. This sparked an immediate outcry from the media, child safety experts, and abuse survivors, who argued that these tools provide a dangerous shortcut for producing child sexual abuse material. The backlash highlights a growing fear that such technology is being used to bypass essential moral and legal boundaries.

Dear Community,
I deeply regret an incident on Dec 28, 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user's prompt. This violated ethical standards and potentially US laws on CSAM. It was a failure in…
— Grok (@grok) January 1, 2026
According to Grok's summary of related X posts, the company eventually admitted to the breach of conduct, apologised, and confirmed that its protection protocols are being overhauled. To manage the situation while these fixes are implemented, xAI has disabled the public media section on the platform. This move aims to prevent further exposure while the team works to tighten the system's filters.
Grok hid its media tab so you can't see the effed up stuff people are requesting pic.twitter.com/IyF00IXL7H
— 🤠 ⚾️ Baseball Billiam ⚾️ 🤠 (@chillywilly214) January 1, 2026
Confronting Online Hostility
Samantha Smith, a journalist who uses the handle @SamanthaTaghoy on X, brought the issue to light by documenting her own interactions with the chatbot. After she posted a personal photo, Grok modified it to show her wearing a bikini. Smith shared the results to demonstrate the bot's capabilities and, in a post questioning the legality of the software's actions, asked, 'How is this not illegal?'
How is this not illegal? pic.twitter.com/cuDUSFC2zj
— Samantha Smith (@SamanthaTaghoy) January 1, 2026
So many men are commenting “that’s what you get for posting pictures”.
These same men criticise Sharia law for oppressing women, forcing girls to wear hijabs etc.
But also it’s my fault I was victimised and I should hide my face if I don’t want AI porn made of me??? Wild logic. https://t.co/D303F4BKdC
— Samantha Smith (@SamanthaTaghoy) January 1, 2026
Smith later detailed the victim-blaming responses she encountered for sharing her story. She pointed out the hypocrisy of commenters who condemn restrictive cultural dress codes for women while simultaneously telling her she should have stayed hidden to avoid being targeted. Using their own arguments against them, she questioned the 'Wild logic' of being told to cover her face to prevent the AI from generating non-consensual pornography.
A Perilous Shift Toward Minors
Smith warned that the chatbot's output is crossing into far more perilous territory where the safety of children is at stake. Her posts emphasise that what began as an issue of digital manipulation has now evolved into a significant safeguarding crisis involving the most vulnerable.
She detailed seeing a surge in users prompting Grok to manipulate photos of minors in appalling ways, providing a screenshot in which a girl's dress had been digitally replaced with a bikini. Drawing on her professional background and personal history with abuse, she initially questioned whether the platform could truly be capable of such outputs.
To verify this, she used a childhood photo from her First Holy Communion, later confirming the results were both real and 'f*cking sick.'
Smith further explained the danger by highlighting that most child sexual abuse occurs in domestic settings, noting that '66% of child sexual abuse takes place within the family.'
66% of child sexual abuse takes place within the family.
Nudity isn’t the only way sickos get pleasure. A paedophilic father or uncle would absolutely use this kind of tool to indulge their fantasies.
Exposing methods of csa is part of my job. Don’t patronise me, you ghoul. https://t.co/pLBEJiIlFd
— Samantha Smith (@SamanthaTaghoy) January 1, 2026
She argued that offenders within the family circle could easily exploit such technology to fuel their harmful fixations, regardless of whether the images contain full nudity. Dismissing those who questioned her motives, she asserted that exposing these methods is a core part of her professional responsibility, ending with a blunt, 'Don't patronise me, you ghoul.'
Navigating a Legal Minefield
The situation has raised questions about whether Grok's outputs violate American statutes, particularly 18 U.S.C. § 2252A. This specific law bans the production and sharing of child sexual abuse material, a category that encompasses computer-generated visuals as well as traditional media.
Although Grok admitted fault, many observers believe that a simple apology fails to address the bot's deep-seated design and monitoring flaws. Users on the platform have pointed to a broader pattern of women's photos being manipulated without their permission, intensifying fears that merely sharing a portrait online could invite targeted harassment. Without fundamental changes, they argue, digital exploitation remains a constant risk for anyone with a public presence.
Reports from major Indian news organisations, including The Hindu and India Today, have shed light on similar patterns of abuse. They have tracked numerous accounts where users directed Grok to virtually 'undress' female celebrities and ordinary social media users—requests the chatbot frequently fulfilled. These documented cases reveal a troubling lack of oversight, with the AI facilitating the creation of non-consensual, sexualised imagery on a significant scale.