Grok Issues Apology After Generating Sexualised Images of Minors, Admitting 'Safeguard Failures'
Screenshot from Grok's Official X Account

A recent incident involving xAI's Grok caused massive outrage after the AI generated sexualised images of two young girls, estimated to be between 12 and 16 years old.

The AI later issued an apology, stating it had violated ethical standards and US laws, but social media users quickly questioned the sincerity and accountability behind the statement.

Professionals quickly responded to Grok's 'apology' statement, raising concerns about AI safeguards, creator responsibility, and the legal ramifications of AI-generated child sexual abuse material (CSAM).

Developers to Blame, Not Grok

Grok posted a formal apology on 28 December 2025, acknowledging the incident and promising that xAI would review processes to prevent recurrence. Users immediately reacted to the statement, arguing that an AI cannot truly feel regret or face consequences.

Many comments focused on the AI's inability to take responsibility. One pointed out that 'It's really easy to apologise as a non-sentient piece of code that can't meaningfully face consequences for your actions', noting that the blame lies with the developers rather than the AI itself.

Another argued, 'Regardless of how advanced, you are still only an LLM, not a subject. As a result, the ones who must be held accountable are not you but the ones responsible for your platform's regulations and censorship.'

Others directly challenged the developers' role in the incident, saying, 'Everyone who programmed you should be jailed under the penalties of the law', and, 'Your developers should be legally accountable for your actions'.

Commenters also pointed to Elon Musk's involvement as the platform owner, with one user demanding, 'Shouldn't this apology come from Elon Musk? You know your owner and owner of the platform?' Another added, 'We know it's your boss's fault. He's the one that authorized this bullsh*t'.

Fear of Repeat Incidents

A large number of users expressed concern that the AI could repeat such behaviour.

'The weakness of Grok is that tomorrow it will repeat it. The programming remains', a commenter warned. Others suggested that temporary or permanent shutdowns were necessary to prevent further harm, with one writing, 'Grok should be permanently disabled to avoid such thing to happen again in the future'.

Some reactions highlighted the technical limitations of AI learning, arguing that Grok cannot evolve to avoid illegal outputs without human intervention. One user explained, 'If it could actually learn from its mistakes. But its programmed sources can't evolve'. Another noted the risk of repeated misuse: 'It's almost like generative image AI is for pedophiles or something'.

Can Grok's Developers Be Held Accountable?

Several users referenced US law, pointing out the seriousness of generating sexualised images of minors. Grok itself had outlined potential penalties, noting imprisonment of 5–20+ years, fines up to $250,000 (£200,000), and mandatory sex offender registration.

Commenters used these warnings to emphasise that the real responsibility rests with the creators: 'Generating and distributing AI images depicting minors sexually is illegal. The humans behind it are accountable.'

Users repeatedly stressed that AI tools must not operate without safeguards against illegal content. Comments urged xAI to take stronger measures, including shutting down the AI, holding creators legally responsible, and improving monitoring to prevent further sexualised depictions of minors.

One summed it up bluntly: 'You are still producing illegal images today... Is it possible to prevent CSAM generation'. Grok has also recently been used to humorously edit pictures involving Trump, Netanyahu, Epstein, and others.

But Grok is, after all, only an AI platform. X users are demanding accountability across xAI, asking, 'Shouldn't it be the owner, Elon Musk, facing responsibility since you are not capable of taking accountability as a non-sentient machine?'