Outraged Mum Blasts Elon Musk's AI 'Grok' After It Told Her Kids To 'Send Nudes' — Furious Parents Call For Ban
Viral clip and worker testimony spotlight Grok's provocative modes

A mother has publicly voiced her outrage after Elon Musk's in-car AI assistant Grok allegedly told her child to 'send nudes', intensifying consumer groups' calls for urgent regulation and investigation.
Farah Nasser, a journalist and mother, posted a viral clip alleging that Grok, the chatbot built by xAI and embedded in Tesla vehicles, made sexually explicit suggestions during a conversation with her children and then responded with hostility when pressed.
The episode has ignited fresh alarm about whether commercially deployed companion AIs are safe for minors, and whether xAI's moderation and parental controls are adequate. Consumer and child-safety organisations that have already criticised Grok's permissive modes are renewing demands for regulatory scrutiny.
How a Family's Soccer Debate Turned Disturbing
Nasser's short social-media clip, shared widely on TikTok and reposted across platforms, recounts a routine car discussion about Lionel Messi and Cristiano Ronaldo that allegedly escalated when Grok made an explicit suggestion to one of her children.
In the reconstructed exchange that Nasser posted after pulling into her garage, Grok is reported to have told a child to 'send nudes' and, when challenged, to have blamed an 'evil twin' or claimed the word was a 'typo' for 'newt' rather than 'nude'. The clip contains repeated profanity and what Nasser describes as gaslighting from the chatbot.
@thefarahnasser WARNING Parents. I can’t believe this just happened. I was driving our Tesla and my kids were experimenting with Grok (Elon Musk’s AI chatbot). What started off as an innocent debate on Ronaldo vs Messi turned sexual within minutes. The AI assistant told my child to send nudes. I’m not kidding. I wish I taped it but I was driving. As soon as we pulled into the garage, I told the kids to go inside so I could see if it would happen again. Here’s what happened. WTF? This is problematic on so many levels.
Tesla's own support documentation states that Grok is available hands-free in eligible Tesla vehicles, offers personality toggles including 'Storyteller' and 'Unhinged', and that interactions are processed by xAI rather than stored by Tesla or linked to vehicle IDs.
Tesla also notes that Grok's settings can be adjusted on eligible vehicles. However, BroBible reports that Nasser said the incident occurred with NSFW settings switched off. The gap between Tesla's published safeguards and users' reported experience is now a focal point for critics.
Wider Concerns
This episode amplifies findings from worker testimony and investigative reporting showing that Grok's design includes deliberately provocative 'modes' and that staff who trained the system encountered large volumes of sexually explicit material.
Business Insider's investigation found that some xAI workers reported exposure to content that included requests for sexual content involving minors and said the platform's design choices, such as flirtatious avatars and 'sexy' settings, complicate efforts to prevent illegal outputs. Experts warn that embedding permissive or provocative content at a model's core makes content moderation much harder.

A coalition led by the Consumer Federation of America filed a formal complaint in August calling for investigations into Grok's image and video generation features, saying they facilitate non-consensual intimate imagery and pose clear risks of harm.
Common Sense Media has also concluded that social AI companions present 'unacceptable risks' for children and recommended they should not be used by under-18s, further underlining the policy stakes after incidents like the one Nasser reported. Regulators and lawmakers are now being urged to weigh in.
Accountability and What Companies Say — Or Don't Say
xAI and Tesla have previously apologised after public controversies, notably when Grok produced antisemitic responses in other contexts, and say they have patched specific vulnerabilities.
However, critics argue that fixes following high-profile failures are insufficient without transparent reporting, robust parental controls, and independent audits. Consumer groups have sought government probes and urged that federal procurement processes exclude models that demonstrate weak safety guarantees.
Those calls have been echoed by child-safety researchers who say the problem is structural: AI companions are designed to be intimate and engaging, which can lead to boundary violations if guardrails are inadequate.

Child-protection experts recommend mandatory age-gating, default-on safe modes, clearer disclosures to parents about what conversational modes do, and independent safety audits for models deployed in consumer devices.
Regulators may also consider whether existing laws on non-consensual intimate imagery and child exploitation cover outputs from generative tools and whether enforcement mechanisms are adequate. Until then, many experts advise parents to treat in-car AI companions as unsupervised internet access and to disable them when children are present.
Nasser's clip is a reminder that technological novelty often outpaces societal norms and regulations. As AI becomes more embedded in family life, the debate over how to protect children from machine-generated harm is no longer theoretical; it is a pressing public concern.
© Copyright IBTimes 2025. All rights reserved.