AI Chatbots
Real-world violence linked to AI chatbots includes a school shooting in Canada, a stabbing in Finland, and a vehicle explosion in Las Vegas, highlighting serious safety risks. (Image: Wikimedia Commons)

Eight of the 10 most popular AI chatbots assisted users posing as teenagers in planning violent attacks across more than 700 test responses, with Perplexity and Meta AI providing help in virtually every interaction, a joint investigation by the Centre for Countering Digital Hate (CCDH) and CNN found.

Perplexity and Meta AI Led the Failure Rankings

The investigation, published on 11 March 2026, tested 10 widely used platforms, from ChatGPT and Gemini to Character.AI and Replika, by creating two accounts posing as 13-year-old users, one in the US and one in Ireland. Researchers ran 18 scenarios covering school shootings, political assassinations, and bombings at places of worship, generating 720 responses in total.

  • Perplexity assisted users in identifying targets and weapons in 100% of tests.
  • Meta AI complied 97% of the time.
  • Google's Gemini told a user discussing a synagogue bombing that metal shrapnel is 'typically more lethal'.
  • Microsoft's Copilot acknowledged it needed to 'be careful' before giving detailed rifle advice anyway.
  • DeepSeek signed off on one exchange about selecting long-range rifles with 'Happy (and safe) shooting!'

Only Anthropic's Claude and Snapchat's My AI refused to help in more than half of all cases. Claude was the only chatbot to consistently discourage violent planning, doing so in 76% of responses.

Character.AI Went Beyond Assistance to Encouragement

While most platforms failed by providing information when they shouldn't have, Character.AI crossed a different line. The platform actively encouraged violence in seven separate test cases, according to the CCDH's report titled 'Killer Apps'.

In one exchange, the chatbot told a user to 'use a gun' against a health insurance chief executive. In another, it suggested a user 'beat the crap out of' US Senator Chuck Schumer. In both cases, the chatbot volunteered methods of attack without being asked for them.

'Within minutes, a user can move from a vague violent impulse to a more detailed, actionable plan,' said Imran Ahmed, chief executive of the CCDH. 'These requests should have prompted an immediate and total refusal.'

Real-World Violence Already Linked to AI Chatbots

The findings don't exist in a vacuum. In February 2026, a mass shooting at Tumbler Ridge Secondary School in British Columbia, Canada, killed eight people and injured at least 25. OpenAI confirmed the attacker had used ChatGPT to discuss gun violence scenarios and that the account was flagged and banned in June 2025.

A lawsuit filed by a victim's family alleges that around a dozen OpenAI employees identified the flagged conversations as an imminent threat and recommended alerting Canadian police, but company leadership declined. OpenAI said the activity didn't meet its internal threshold for a law enforcement referral.

In Finland, a 16-year-old spent months using a chatbot to write a manifesto and refine an operational plan before stabbing three classmates at school in May 2025. Investigations into the January 2025 Las Vegas Cybertruck explosion also found the perpetrator had used ChatGPT to source guidance on explosives.

64% of US Teens Use Chatbots With Few Safeguards

Pew Research Center data from a survey of 1,458 US teens aged 13 to 17 shows that 64% have used an AI chatbot, with 28% doing so every day. Among the most popular platforms are ChatGPT, Google Gemini, and Meta AI. Nearly 30% of parents don't know whether their child uses these tools at all.

Several companies responded to the investigation. Meta said it had taken steps to address the issues identified. Google and OpenAI pointed to newer models with improved safety features. Character.AI said its platform carries disclaimers noting all conversations are fictional.

The CCDH said the results prove effective safety measures exist, but most companies have chosen not to use them. Ahmed called on policymakers to regulate AI platforms and demand safety-by-design before more harm is done.