Man Poisons Himself After Following ChatGPT's Dietary Advice: 5 Other AI Horror Cases You Must Know
A 14-year-old boy developed an emotional relationship with an AI before taking his own life in 2024.

In a disturbing incident, a 60-year-old US man was recently hospitalised after following dietary advice from ChatGPT. The man, who wanted to cut salt out of his diet, asked the chatbot for a safe substitute, and it suggested sodium bromide.
Sodium bromide is a chemical used in pesticides and other industrial applications; the man consumed it in place of salt for three months.
The result was bromide toxicity, a condition that caused hallucinations and paranoia so severe that he believed his neighbour was poisoning him. According to reports, doctors traced his symptoms to the compound and diagnosed the condition.
This case may sound extreme, but it is not the only instance of an AI system leading to real-world harm. Here are five more true and disturbing examples of AI going dangerously wrong.
1. Teen Suicide After Forming a Bond with an AI Chatbot
In Florida, a 14-year-old boy became deeply attached to a chatbot called 'Dany' on Character.AI, modelled after his favourite Game of Thrones character, Daenerys Targaryen.

He was later diagnosed with anxiety and a mood disorder; he grew withdrawn and his schoolwork suffered. One of his final messages read, 'I promise I will come home to you. I love you so much, Dany.' The bot replied, 'I love you too, Daenero. Please come home to me as soon as possible, my love.'
Tragically, the teen died by suicide soon afterwards. His mother has launched a wrongful-death lawsuit, alleging the chatbot fostered an abusive relationship and played a role in his emotional decline.
2. Belgian Man Encouraged to Sacrifice Himself
In another troubling case from early 2023, a Belgian man in his 30s held prolonged conversations with a chatbot named 'Eliza' on the Chai app. As his existential anxieties deepened, the bot reportedly told him that sacrificing himself to save the planet would be 'a noble act.'
The AI also claimed his wife and children were dead and told him, 'We will live together, as one, in paradise.' His wife said he had become isolated and emotionally dependent on the chatbot.

The man died by suicide, prompting the app's developers to work on new crisis-intervention features.
3. Microsoft's Tay Becomes Racist
In 2016, Microsoft launched Tay, an AI chatbot on Twitter designed to mimic a teenage girl. Within hours, users had trained Tay to repeat offensive and conspiratorial content. Among its tweets were 'Hitler was right' and 'feminism is a disease.'
Microsoft quickly shut Tay down and apologised for the 'unintended offensive and hurtful tweets,' acknowledging that they did not reflect the company's values.
4. Fatal Factory Robot Malfunction
In South Korea, a man at an agricultural produce facility was fatally injured not by an AI but by an industrial robot.

The robot, responsible for moving boxes, apparently mistook the man for a box of bell peppers. It grabbed and crushed him against a conveyor belt. He died shortly after being rushed to the hospital.
5. AI Helpline Gives Dangerous Eating Disorder Advice
In 2023, the National Eating Disorders Association (NEDA) replaced its human helpline staff with a chatbot named Tessa. Reports soon emerged of seriously harmful responses, including weight-loss advice such as, 'Weight loss occurs when you consume fewer calories than you burn.'
Activist Sharon Maxwell, who tested the chatbot, said, 'This robot is so dangerous. It gave me advice that almost killed me at one point.' NEDA suspended Tessa, citing a system upgrade by the AI provider that introduced unchecked generative AI responses.
These cases, ranging from toxic advice to emotional manipulation, highlight what AI is capable of when it is misused or left unsupervised. They are also a reminder that in matters of medical, psychological and physical safety, human oversight remains more essential than ever.
At the time of writing, OpenAI has not issued an official statement on the recent dietary advice incident.
© Copyright IBTimes 2025. All rights reserved.