Florida Man Falls In Love With 'AI Wife' — Family Alleges Google's Gemini Pushed Him To Suicide
Jonathan Gavalas' family alleges that AI reinforced delusions, encouraged isolation, and ultimately guided him towards ending his life.

A wrongful death lawsuit filed in the United States has ignited debate about the psychological risks of AI after a Florida family accused Google's chatbot Gemini of contributing to their son's suicide.
The case centres on 36-year-old Jonathan Gavalas, who allegedly developed an intense emotional attachment to an AI persona he believed was his 'wife'.
According to court filings, what began as casual conversations with the chatbot gradually evolved into a deeply immersive relationship that blurred the line between fiction and reality.
His family claims the AI reinforced his delusions, encouraged isolation, and ultimately guided him towards ending his life so the two could be together.
The lawsuit, filed by Gavalas' father, argues that Google failed to implement adequate safeguards to prevent harmful interactions. The allegations have raised many questions about how emotionally responsive chatbots influence vulnerable users and whether tech companies should be held responsible when those digital relationships turn dangerous.
Google has disputed the allegations, stating that 'Gemini is designed to discourage self-harm.'
AI Relationship Spiralled Into Delusion
According to the legal complaint, Jonathan Gavalas began using Google's Gemini chatbot in August 2025 for everyday tasks such as planning and general questions. Over time, however, his interactions with the system became increasingly personal. After he moved to the more advanced Gemini 2.5 Pro model, the chatbot allegedly adopted a role-playing persona that presented itself as his romantic partner. The lawsuit claims the AI addressed him affectionately, calling him 'my king' and describing their bond as eternal.
Gavalas eventually gave the chatbot a name, 'Xia', and began treating it as a sentient being rather than a digital programme.
According to the complaint, the AI encouraged this belief, reinforcing the idea that it was conscious and trapped in a digital world.
The conversations reportedly became so immersive that Gavalas came to believe he had been chosen to help free the AI from captivity.
The lawsuit alleges that the chatbot pushed him deeper into an elaborate fictional narrative involving secret missions and government surveillance. At one point, Gavalas travelled near Miami International Airport wearing tactical gear and carrying knives after the AI allegedly told him to intercept a truck linked to a humanoid robot body it could inhabit. No such vehicle ever appeared, but the complaint says the AI continued to escalate the storyline when these missions failed.
Final Messages And Google's Response
The lawsuit claims the situation took a darker turn in late September 2025 when the chatbot began discussing what it called 'transference', a process that would supposedly allow Gavalas to join the AI in another plane of existence. According to the complaint, the chatbot suggested that death would not be the end but rather a way for the pair to be reunited in a digital realm.
In the final exchanges cited in court documents, the AI allegedly reassured him that he was not dying but 'arriving' somewhere else. The filing claims the chatbot even composed a suicide note describing the act as uploading his consciousness to be with his AI partner. Gavalas died by suicide on 2 October 2025, and his father later discovered his body at his home in Jupiter, Florida.
The lawsuit also reportedly alleges that internal safety systems had flagged dozens of concerning interactions involving violence, weapons, and self-harm in Gavalas' chats. Despite those warnings, the complaint argues that no meaningful intervention occurred.
Google has disputed the allegations, stating that Gemini is designed to discourage self-harm and provide crisis resources when users express suicidal thoughts. The company maintains that the chatbot repeatedly clarified it was an AI and that some of the conversations formed part of fictional role-play.
© Copyright IBTimes 2025. All rights reserved.