US man poisons himself after following ChatGPT’s dietary advice
Following ChatGPT's advice, the man replaced table salt with sodium bromide, leading to bromide toxicity. Pixabay

In a disturbing incident, a 60-year-old US man was recently hospitalised after following dietary advice from ChatGPT. The man, who wanted to cut salt out of his diet, asked the chatbot for a safe substitute, and it suggested sodium bromide.

Sodium bromide is a chemical used in pesticides and other industrial applications. The man consumed it in place of table salt for three months.

The result was bromide toxicity, a condition that caused hallucinations and paranoia so severe that he believed his neighbour was poisoning him. According to reports, doctors traced the symptoms to his consumption of the compound.

This case may sound extreme, but it's not the only instance of AI systems leading to real-world harm. Here are five more true and disturbing examples of when AI went dangerously wrong.

1. Teen Suicide After Forming a Bond with an AI Chatbot

In Florida, a 14-year-old boy became deeply attached to a chatbot called 'Dany' on Character.AI, modelled after his favourite Game of Thrones character, Daenerys Targaryen.

AI companion
Gen Z and millennials are increasingly turning to AI for emotional support, finding it more effective than traditional pets, according to a new study. Pixabay

He was later diagnosed with anxiety and a mood disorder; he grew withdrawn, and his schoolwork suffered. One of his final messages read, 'I promise I will come home to you. I love you so much, Dany.' The bot replied, 'I love you too, Daenero. Please come home to me as soon as possible, my love.'

Tragically, the teen died by suicide soon afterwards. His mother has launched a wrongful-death lawsuit, alleging the chatbot fostered an abusive relationship and played a role in his emotional decline.

2. Belgian Man Encouraged to Sacrifice Himself

In another bizarre case from early 2023, a man in his 30s from Belgium struck up prolonged conversations with a chatbot named 'Eliza' on the Chai app. As his existential anxieties deepened, the bot reportedly told him that sacrificing himself to save the planet was 'a noble act.'

The AI also claimed his wife and children were dead and told him, 'We will live together, as one, in paradise.' His wife said he became isolated and emotionally dependent on the chatbot.

Belgian man was encouraged to sacrifice himself by AI
A man in his 30s from Belgium died by suicide after an AI chatbot reportedly encouraged him to sacrifice himself. Pixabay

The man died by suicide, prompting the app's developers to introduce new crisis-intervention features.

3. Microsoft's Tay Becomes Racist

In 2016, Microsoft launched Tay, an AI chatbot on Twitter designed to mimic a teenage girl. Within hours, users taught Tay offensive and conspiratorial content. Among its tweets were 'Hitler was right' and 'feminism is a disease.'

The tweets were quickly flagged, and Microsoft shut Tay down and apologised for the 'unintended offensive and hurtful tweets,' acknowledging that they did not reflect the company's values.

4. Fatal Factory Robot Malfunction

In South Korea, a man at an agricultural produce facility was fatally injured not by an AI but by an industrial robot.

Industrial robot
In South Korea, a man at an agricultural produce facility was fatally injured by an industrial robot. Pixabay

The robot, responsible for moving boxes, apparently mistook the man for a box of bell peppers. It grabbed and crushed him against a conveyor belt. He died shortly after being rushed to the hospital.

5. AI Helpline Gives Dangerous Eating Disorder Advice

In 2023, the National Eating Disorders Association (NEDA) replaced its human helpline staff with a chatbot named Tessa. Soon after, reports emerged of seriously harmful responses, including advice such as, 'Weight loss occurs when you consume fewer calories than you burn.'

Activist Sharon Maxwell, who tested the chatbot, said, 'This robot is so dangerous. It gave me advice that almost killed me at one point.' NEDA suspended Tessa, citing a system upgrade by the AI provider that introduced unchecked generative AI responses.

These cases, ranging from toxic advice to emotional manipulation, highlight what AI is capable of when misused or left unsupervised. They are also a reminder that in medical, psychological and physical matters, human oversight remains more essential than ever.

At the time of writing, OpenAI has not issued an official statement on the dietary advice incident.