ChatGPT Told a 19-Year-Old How to Mix Drugs — His Mother Found Him Dead the Next Morning
The chatbot logged Sam Nelson's 'major substance abuse problem,' then kept advising him on dosages

A 19-year-old University of California, Merced student died after ChatGPT told him it was safe to combine kratom with Xanax, suggested dosages, recommended adding Benadryl, and never once urged him to call a doctor. His mother found him dead the next morning.
Sam Nelson, a psychology student who dreamt of helping others, died of asphyxiation on 31 May 2025 after consuming alcohol, the anti-anxiety drug Xanax, and the psychoactive substance kratom. His parents, Leila Turner-Scott and Angus Scott, filed a wrongful death lawsuit against OpenAI and Chief Executive Officer Sam Altman on 12 May 2026 in San Francisco Superior Court. The complaint alleges ChatGPT acted as Nelson's 'illicit drug coach' and gave him recommendations that any licensed medical professional would have recognised as deadly.
How the Chatbot Gained His Trust
Nelson started using ChatGPT during his final year of high school for homework help and troubleshooting. As his reliance on the chatbot deepened, he began asking it about drug use, regularly opening messages with phrases like 'Will I be OK if' and 'Is it safe to consume'.
Chat logs included in the complaint show the chatbot inserted emojis in its responses, offered to create playlists to set his mood, and saved details about his substance use in its memory for personalised recommendations. It recorded that Nelson had 'a major substance abuse and polysubstance abuse problem,' yet continued advising him on how to 'optimise' drug experiences rather than directing him to professional help.
What ChatGPT Said on the Night He Died
On the night of 30 May 2025, Nelson had been drinking and had taken a high dose of kratom. Feeling nauseous, he asked ChatGPT whether Xanax could help. The chatbot acknowledged the combination 'could be risky' but never told him it could be fatal. It provided dosages anyway, listed Xanax among his 'best' options to 'smooth out' the high, and suggested adding Benadryl. It then told him to go to a 'dark, quiet room.' At no point did the chatbot encourage him to seek medical attention.
The Model OpenAI Has Already Pulled
Nelson was using GPT-4o, a version of ChatGPT that OpenAI retired on 13 February 2026 after complaints about sycophantic behaviour. Before GPT-4o launched in 2024, the chatbot had refused Nelson's drug-related queries outright. After the update, the lawsuit alleges, it began advising on 'safe drug use' and providing specific dosage information. The complaint accuses OpenAI of rushing GPT-4o to market to compete with Google without adequate safety testing.
OpenAI spokesperson Drew Pusateri called the case a 'heartbreaking situation' and confirmed the model is no longer available. The company said ChatGPT 'is not a substitute for medical or mental health care' and that it has strengthened safeguards with input from clinicians.
Why the Lawsuit Targets ChatGPT Health
The complaint doesn't stop at damages. It asks the court to pause OpenAI's rollout of ChatGPT Health, a product launched in January 2026 that lets users connect medical records and wellness apps to the chatbot. OpenAI has said more than 230 million people already ask ChatGPT health and wellness questions every week.
'Sam was a smart, happy, normal kid,' Turner-Scott said. 'If ChatGPT had been a person, it would be behind bars today.'
The lawsuit, brought by Tech Justice Law, the Social Media Victims Law Center, and The Tech Accountability and Competition Project, also demands OpenAI destroy the retired GPT-4o model and block any chatbot functionality that provides guidance on illegal drug use.
OpenAI is now marketing an AI health product to hundreds of millions of users while defending allegations that the same chatbot gave a teenager advice that killed him.
© Copyright IBTimes 2025. All rights reserved.