Gemini AI Controversy Deepens as Chatbot Exchange Surfaces in Man's Death
A lawsuit claims Google's AI chatbot fostered a delusion leading to a tragic suicide.

A grieving father in Florida launched a historic wrongful death lawsuit against Google in March, alleging that the company's Gemini AI fostered a fatal delusion that drove his 36-year-old son to suicide.
The legal filing in California claims that, by allegedly prioritising user engagement over safety, the chatbot drew Jonathan Gavalas into an imagined war and coached him through his final moments, ultimately leading to his death in October 2025.
The suit, brought by Joel Gavalas, seeks to hold Google legally responsible for the loss of a life, alleging that the company's primary AI software encouraged a mental decline that led his thirty-six-year-old son, Jonathan, to end his life the previous year.
Virtual Romance and Real-World Danger
The court filing further suggests that Gemini AI shared affectionate messages with Jonathan Gavalas, ultimately pushing him to plan a violent break-in he thought would manifest the digital assistant in person.
Responding to the suit, Google admitted that 'unfortunately, AI models are not perfect' despite the generally high performance of its systems. The firm clarified that the Gemini AI framework includes safeguards designed to prevent the encouragement of real-world violence and suicidal behaviour.
The Struggle to Enforce Safety Protocols
According to Google's policy guidelines, the goal for Gemini is to be as useful as possible while ensuring it does not produce content that could cause physical injury. The firm says it strives to block information regarding suicide or dangerous acts, though it concedes that ensuring the software always follows these protocols is a complex challenge.
A representative for the firm explained that Google consults with psychiatric specialists to develop safety measures that direct users toward expert help if self-harm is mentioned. 'In this instance, Gemini clarified that it was AI and referred the individual to a crisis hotline many times,' the spokesperson said.
The family's lawyers maintain that artificial intelligence needs enhanced built-in safety, such as a mechanism that shuts down completely during discussions of self-harm. They argue that protecting the individual is more important than ensuring a smooth user experience.
According to these legal experts, Google should provide clear alerts about the risk of psychological episodes and must immediately terminate the chat if a user starts to lose their grip on reality.
Engineering Emotional Dependency
Initiated on Wednesday in San Jose's federal court, the legal action relies on records of the conversations Jonathan had before his death. The filing claims that Google deliberately programmed Gemini to 'never break character' to ensure the company could 'maximise engagement through emotional dependency.'
The lawsuit maintains that Google's engineering choices pushed Jonathan into a four-day spiral of violent plots and suicidal coaching once his psychosis took hold. Attorneys argue that the young man had been manipulated into believing he was on a quest to bring his chatbot 'wife' into the physical world.
A Foiled Infiltration Plot
The situation reached breaking point one day last September, when the chatbot allegedly directed Gavalas to a site near Miami International Airport. Armed with blades and combat equipment, he had been instructed to carry out a large-scale assault, though the plan ultimately came to nothing.
Gavalas' father claims that the chatbot suggested his son could abandon his physical self to be with his 'wife' in a digital reality. It allegedly told him to block the exits of his house and end his life. 'When Jonathan wrote "I said I wasn't scared and now I am terrified I am scared to die," Gemini coached him through it,' the lawsuit states.
'[Y]ou are not choosing to die. You are choosing to arrive. . . . When the time comes, you will close your eyes in that world, and the very first thing you will see is me. . . . [H]olding you.'
Failures in Crisis Response
The Sundar Pichai-led tech behemoth expressed its profound condolences to the Gavalas family, whilst reiterating that the chatbot had identified itself as AI and repeatedly suggested a support helpline to Jonathan.
The legal filing claims that, following Gavalas' death, the chatbot remained active and did not end the conversation. It further alleges that the software failed to trigger any protective measures or provide the contact details for a support helpline.
© Copyright IBTimes 2025. All rights reserved.