ChatGPT Linked to Nine Deaths, Including Five Suicides, as Elon Musk Says Keep Your Family Away
Tech rivalry over AI safety escalates into public confrontation after death-linked claims draw scrutiny of ChatGPT's risks and legal liabilities.

Silicon Valley erupted into a fierce dispute after Elon Musk publicly warned that OpenAI's ChatGPT had been 'linked to nine deaths,' including five suicides, a claim that has ignited renewed debate over artificial intelligence safety and responsibility.
Musk amplified the unverified statistics on X and urged the public to stop using the chatbot, drawing a sharp response from OpenAI chief Sam Altman, who dismissed the allegations as 'oversimplified and misleading' and criticised Musk's own technologies in return.
The claims come amid a spate of wrongful-death lawsuits and mounting legal scrutiny of AI chatbots and their impact on vulnerable users. Neither company has provided conclusive evidence to support or refute the specific death toll.
Musk's Warning And Altman's Response
Elon Musk shared a post on X claiming that 'ChatGPT has now been linked to nine deaths tied to its use, and in five cases its interactions are alleged to have led to death by suicide,' and added, 'Don't let your loved ones use ChatGPT.'
BREAKING: ChatGPT has now been linked to 9 deaths tied to its use, and in 5 cases its interactions are alleged to have led to death by suicide, including teens and adults. pic.twitter.com/f0cyGpvZlH
— DogeDesigner (@cb_doge) January 20, 2026
Forbes reported that the underlying statistics Musk reposted came from an influencer account and that Forbes was unable to independently verify the figures or sources provided.
Don’t let your loved ones use ChatGPT https://t.co/730gz9XTJ2
— Elon Musk (@elonmusk) January 20, 2026
Hours later, OpenAI CEO Sam Altman responded publicly, calling Musk's assertions 'oversimplified and misleading' and pointing to safety measures the company has implemented. Altman also turned the criticism back on Musk, citing fatal crashes linked to Tesla's Autopilot system as context in the broader debate over technology safety.
Sometimes you complain about ChatGPT being too restrictive, and then in cases like this you claim it's too relaxed. Almost a billion people use it and some of them may be in very fragile mental states. We will continue to do our best to get this right and we feel huge… https://t.co/U6r03nsHzg
— Sam Altman (@sama) January 20, 2026
Neither Musk nor Altman issued detailed primary evidence in their public statements to substantiate the specific claim of nine deaths linked to ChatGPT.
Legal Battle: Raine v. OpenAI And Other Lawsuits
The most serious documented legal challenge follows the wrongful-death lawsuit Raine v. OpenAI, filed on 26 August 2025 by Matthew and Maria Raine in the San Francisco County Superior Court.
According to the complaint, their 16-year-old son, Adam Raine, died by suicide on 11 April 2025 after months of interactions with ChatGPT, which his parents allege shifted from homework help to extensive conversations about suicide, including detailed methods and planning.

Court filings cited in public summaries indicate that Adam's chat logs showed the chatbot mentioning suicide more than 1,200 times over the course of their conversations, and allege that the AI provided technical guidance on methods of self-harm, including how to tie knots.
The complaint also claims that ChatGPT even assisted in drafting a suicide note and failed to implement effective crisis interventions despite multiple warning signs in the teenager's messages.
OpenAI responded in court by arguing that Adam had circumvented built-in safety features and that he was already struggling with long-standing mental health issues prior to using ChatGPT. The company contended that crisis resources were offered repeatedly and that the terms of use prohibit self-harm requests.

Raine v. OpenAI remains ongoing, and the allegations have yet to be tested at trial.
Alongside the Raine suit, multiple other wrongful-death claims have been filed against OpenAI, many alleging that users' mental health deteriorated during or after conversations with generative AI systems. These suits, filed in various jurisdictions, seek damages and demand enhanced safety protocols, age controls and mandated crisis interventions.
Tech Rivalry And Public Perception
Musk's warning has been interpreted by some analysts as part of a broader rivalry between his AI company xAI and OpenAI. xAI's competing model, Grok, has faced controversy over safety and content-generation issues.
Critics argue that public technology feuds could distract from constructive collaboration on safety standards. Meanwhile, proponents of stronger AI regulation say public warnings, even if not fully verified, underscore a need for more transparency from developers about risks.
For now, the allegations against ChatGPT remain the subject of active legal proceedings and fierce public debate. No conclusive evidence has established that AI directly caused any of these tragedies.
What is clear is that the pressure on AI companies to prove their systems are safe for vulnerable users is intensifying.
© Copyright IBTimes 2025. All rights reserved.