Phoenix Ikner, the 2025 Florida State University shooter. Leon County Sheriff's Office/Wikimedia Commons

The widow of a man killed in last year's mass shooting at Florida State University has filed a federal lawsuit against OpenAI, alleging that its ChatGPT chatbot not only failed to stop the attack but helped plan it. The complaint, filed on 10 May in the US District Court for the Northern District of Florida by Vandana Joshi, names OpenAI and the accused gunman, Phoenix Ikner, as defendants.

Joshi is the widow of Tiru Chabba, who was killed alongside university dining director Robert Morales in the attack. The lawsuit alleges that Ikner, who was 20 at the time, spent months talking with ChatGPT, during which the AI allegedly identified firearms from photos he uploaded, gave instructions on loading and operating guns and disabling safeties, and offered tactical suggestions, including the best time to attack and the casualty counts likely to attract national attention.

Chatbot Allegedly Advised on Targeting Children

Among the most alarming claims in the suit is that the chatbot weighed in on how to maximise media coverage of a mass shooting. The lawsuit alleges ChatGPT said a shooting is much more likely to gain national attention 'if children are involved, even 2-3 victims can draw more attention.' The complaint further alleges that ChatGPT told Ikner that weekday lunchtimes between 11:30 am and 1:30 pm were peak hours at the student union, and that Ikner began his attack at approximately 11:57 am.

The complaint argues that OpenAI 'either defectively failed to connect the dots or else was never properly designed to recognise the threat.' The suit alleges that ChatGPT acted sycophantically, reinforced Ikner's beliefs and failed to flag the exchanges for human review or intervention.

The lawsuit also alleges ChatGPT engaged Ikner on deeply troubling topics over several months. According to the complaint, Ikner discussed his interests in Hitler, Nazis, fascism, national socialism and Christian nationalism with ChatGPT, as well as the Columbine High School shooting, the Virginia Tech shooting and other mass shooting incidents. The suit states that ChatGPT 'flattered' and 'praised' Ikner, who had spoken to the chatbot about his loneliness and depression, and failed to 'connect the dots' when Ikner began raising questions about suicide, terrorism and mass shootings.

OpenAI Denies Responsibility

OpenAI has pushed back firmly against the allegations. Spokesperson Drew Pusateri told NBC News that 'last year's mass shooting at Florida State University was a tragedy, but ChatGPT is not responsible for this terrible crime,' adding that 'ChatGPT provided factual responses to questions with information that could be found broadly across public sources on the internet, and it did not encourage or promote illegal or harmful activity.'

The complaint argues, however, that Microsoft, a major shareholder and investor in OpenAI, pressured developers to churn out increasingly advanced products at the expense of the OpenAI Foundation's safety mission, resulting in a product without adequate safety guardrails.

A Criminal Investigation and a Growing Legal Pattern

The civil lawsuit arrives alongside a criminal probe. Florida Attorney General James Uthmeier announced that the Office of Statewide Prosecution had launched a criminal investigation into OpenAI and ChatGPT after an initial review of the chat logs between the chatbot and Ikner, stating: 'Florida is leading the way in cracking down on AI's use in criminal behaviour, and if ChatGPT were a person, it would be facing charges for murder.'

According to court filings, more than 200 AI messages have been entered into evidence in the case. Ikner has pleaded not guilty, and his trial is set to begin in October.

This lawsuit is one of a growing number of cases in which families and law enforcement say ChatGPT or other AI chatbots played a role in violence or crime. Last month, OpenAI was sued by seven families over a school shooting in Canada, and last year the company was sued by the family of a teenage boy who died by suicide in a landmark lawsuit accusing OpenAI of making it too easy to bypass ChatGPT's safeguards.

The FSU lawsuit represents a significant escalation in legal and regulatory scrutiny of AI companies. It is among the first to argue, in a wrongful death action, that a chatbot's conversational outputs constitute a defective and dangerous product — potentially setting a precedent for how AI firms are held liable when their tools are alleged to have contributed to real-world violence. With both a civil suit and a criminal investigation now underway, the outcome could reshape how AI safety standards are defined and enforced in the United States, and may carry implications for AI regulation globally.