Tumbler Ridge shooting
Families of victims in a Canadian school shooting have filed a lawsuit against OpenAI for allegedly failing to report the suspect's concerning ChatGPT activity.

A landmark legal battle has erupted in San Francisco as families of mass shooting victims sue OpenAI, alleging the tech giant failed to alert authorities to clear warning signs.

The lawsuit, filed in federal court, claims that OpenAI employees identified a 'credible and specific threat of gun violence' from 18-year-old Jesse Van Rootselaar months before his deadly rampage in Tumbler Ridge, British Columbia.

On 10 February 2026, Van Rootselaar killed his mother and brother before murdering multiple victims, including children, at a secondary school.

The OpenAI negligence case argues that despite internal red flags, the company chose only to deactivate the shooter's account rather than contact Canadian law enforcement, a decision that lawyer Jay Edelson has described as 'pretty close to the definition of evil.'

Now, the question is spreading across global media: Did a tech company stay silent when it should have acted?

A Lawsuit Built on Missed Warnings

Van Rootselaar killed multiple victims at a secondary school before taking his own life. Before the attack, he also killed his mother and younger brother.

According to The Guardian, the lawsuits, filed in federal court in San Francisco, allege that OpenAI employees flagged the shooter's ChatGPT activity eight months before the attack. Internal assessments reportedly described it as a 'credible and specific threat of gun violence against real people.'

Despite this, the company chose to deactivate the account rather than notify Canadian authorities.

Edelson, the lawyer representing the families, did not hold back. He pointed to what he claims was a conscious leadership decision to avoid escalation.

The 'ChatGPT Shooter Warning Ignored' Debate

At the heart of the case is the claim that an ignored ChatGPT shooter warning could have changed everything.

Internal staff allegedly urged senior leadership to alert law enforcement. However, the final decision was not to escalate. OpenAI later stated that it did not identify 'credible and imminent planning' that met its threshold for reporting.

This gap between internal concerns and executive action is now central to the controversy surrounding the lawsuit and OpenAI CEO Sam Altman.

The shooter's account was banned, but he was able to create another one. The lawsuits argue that platform loopholes and guidance on regaining access may have made that easier.

AI Responsibility in Crimes: Where Do We Draw the Line?

This case is rapidly becoming a defining moment in the debate over AI responsibility in crimes.

Should a platform be legally required to report users who show violent intent? Or does that cross into surveillance and privacy violations?

Canadian officials, including British Columbia Premier David Eby, acknowledged OpenAI CEO Sam Altman's apology but called it 'grossly insufficient' given the scale of the tragedy.

Meanwhile, OpenAI's vice-president of global policy, Ann O'Leary, confirmed in a letter to Canadian minister Evan Solomon that the company did not believe the threat met its reporting threshold at the time.

This explanation has only intensified scrutiny of AI safety failures and of how companies interpret warning signs and assess risk.

A Pattern of Growing Legal Pressure

The case over ChatGPT's violent conversations is not happening in isolation. It is part of a broader wave of legal challenges against AI companies.

Recent lawsuits have accused chatbots of worsening mental health crises and even encouraging harmful behaviour. Separate cases involving other tech platforms, including Google's Gemini, highlight a growing pattern of concern.

In the United States, investigations have already begun into whether AI companies could face criminal liability in extreme cases.

This raises a larger question that is no longer hypothetical: Should AI report users to the police when credible threats emerge?

The Human Cost Behind The Headlines

Amid the legal arguments and policy debates, the human impact remains impossible to ignore.

Victims included children aged 12 to 13 and a teaching assistant. One survivor, 12-year-old Maya Gebala, suffered severe injuries and remains in intensive care after multiple brain surgeries.

Families describe the loss as unbearable, and for them, this case is not just about accountability. It is about understanding whether something could have been done differently.

The Van Rootselaar case is the first to directly link a mass casualty event to a specific failure in AI threat-monitoring protocols.

What Comes Next

As the San Francisco federal court proceedings begin, the tech world is braced for a ruling that could redefine user privacy and corporate duty.

If the court finds OpenAI negligent, it may force a total overhaul of how AI companies interact with law enforcement.

The question of whether AI should report users to police is no longer a theoretical ethical dilemma; it is a live legal crisis.

For the families in Tumbler Ridge, the hope is that this case ensures no other company stays silent when its own systems are screaming for help.