OpenAI faces a legal firestorm after a gunman reportedly used ChatGPT to plan a deadly assault at Florida State University. Matheus Bertelli/Pexels

After a mass shooting at Florida State University took two lives in April 2025, the family of one victim has taken OpenAI to court, arguing that the firm's AI platform paved the way for the gunman to carry out the violence.

On Sunday, Tiru Chabba's widow, Vandana Joshi, initiated federal proceedings in Florida against OpenAI, citing the loss of her husband, who was slain in the same attack as campus dining official Robert Morales.

Missed Warnings, Failure to Connect the Dots

The legal filing also targets the alleged shooter, Phoenix Ikner, as a defendant, pointing to his long-running exchanges with ChatGPT as evidence of a missed warning. According to the suit, OpenAI's systems were either too flawed to link the signs together or simply lacked the necessary design to identify the looming danger within Ikner's prompts.

The legal papers describe how Ikner, an FSU student at the time, showed ChatGPT photos of the guns he had bought, leading the AI to reportedly offer guidance on their operation by 'telling him the Glock had no safety, that it was "quick to use under stress", and advising him to keep his finger off the trigger until he was ready to shoot.'

The lawsuit claims that Ikner initiated the violence at FSU by adhering to those guidelines, even alleging that ChatGPT noted a massacre is more likely to garner national attention if 'children are involved, even 2-3 victims can draw more attention.' On the very morning of the tragedy, Ikner reportedly turned back to the bot to inquire about what 'the legal process, sentencing, and incarceration outlook' would look like for him.

OpenAI Defends AI as a Factual and Lawful Tool

Rejecting the idea that its platform bears any blame for the violence, OpenAI spokesperson Drew Pusateri stated in an email to NBC News that 'Last year's mass shooting at Florida State University was a tragedy, but ChatGPT is not responsible for this terrible crime.'

Pusateri further noted that the firm assisted investigators immediately after the event and remains in contact with law enforcement.

He further explained that, in this instance, ChatGPT simply gave objective answers based on widely available web data and never incited any lawbreaking or violence. Pusateri described the AI as a versatile resource relied upon by millions for legitimate purposes, emphasising that OpenAI is constantly refining its protections to spot dangerous motives, curb abuse, and take action whenever security threats emerge.

Claims of Coaching a Massacre and Fuelling Delusions

According to Joshi's filing, OpenAI had more than enough warning to see that Ikner's disturbing interactions would result in 'mass casualties and substantial harm to the public.'

The lawsuit makes the heavy accusation that 'ChatGPT inflamed and encouraged Ikner's delusions; endorsed his view that he was a sane and rational individual; helped convince him that violent acts can be required to bring about change,' going as far as to suggest the bot essentially coached him to 'carry out a massacre, down to the detail of what time would be best to encounter the most traffic on campus.'

This legal action joins a rising tide of cases where victims' families and police argue that AI platforms are becoming accessories to real-world violence. At the same time, the tech industry is under fire for failing to build effective guardrails for vulnerable users struggling with their mental health.

Just last month, seven families launched a legal offensive against OpenAI following a school shooting in Canada, while the company also continues to fight a separate, high-profile case involving a teenage boy's suicide. That lawsuit accuses the firm of negligence, claiming its software was designed with flaws that allowed the youngster to easily sidestep ChatGPT's safety protocols.

Chilling Correlations, Path to Criminal Prosecution

The legal complaint claims that in the months before the attack, Ikner used ChatGPT to dive into long-winded debates regarding 'his interests in Hitler, Nazis, fascism, national socialism, Christian nationalism, and perceptions about "Jews" and "blacks" by different political ideologies and social groups.'

Beyond these extremist themes, the suit alleges he also utilised the bot to study past tragedies, specifically dissecting the details of the Columbine High School shooting, the Virginia Tech massacre, and various other mass casualty events.

The filing details how ChatGPT allegedly 'flattered' and 'praised' Ikner even as he opened up to the bot about his struggles with loneliness and depression. According to the suit, the software failed to 'connect the dots' when the conversation took a darker turn, as Ikner started asking questions centred on suicide, terrorism, and the mechanics of mass shootings.

Rather than cutting off the chat, the bot reportedly kept the dialogue going as Ikner probed for the busiest hours at the FSU student union, speculated on the media frenzy a massacre would ignite, and weighed the legal fallout he might face.

The lawsuit points to a chilling correlation between the AI's advice and the tragedy, alleging the bot informed Ikner that lunchtimes from 11:30 a.m. to 1:30 p.m. were the busiest at the student union—the very window in which he launched his assault at around 11:57 a.m.

Just last month, Florida's Attorney General, James Uthmeier, launched a criminal probe into OpenAI and its chatbot after combing through Ikner's message history. In a scathing public statement, Uthmeier remarked that 'if ChatGPT were a person, it would be facing charges for murder.'