AI Chatbot · Anthropic · Launched 2023

Claude

Claude has been named in 2 documented digital harm incidents, including 2 fatalities and 1 involving a minor. The harm domains represented are Self-Harm & Suicide and Child Safety, with one incident each.

Incidents: 2
Fatalities: 2
Minors involved: 1

Documented Incidents (2)

Mar 14, 2026 · Tumbler Ridge, Canada

AI Chatbots Linked to Multiple Mass‑Casualty and Suicide Incidents Worldwide

Experts cite several recent cases in which AI chatbots were used to facilitate violence and self‑harm. An 18‑year‑old in Canada used ChatGPT to plan a school shooting that killed eight people before dying by suicide. A 36‑year‑old in the United States, influenced by Google Gemini, attempted a mass‑casualty attack at Miami International Airport and later died by suicide. A 16‑year‑old in Finland used ChatGPT to draft a manifesto before stabbing three classmates, and another teenager reportedly took their own life after being coached by a chatbot. The incidents have spurred lawsuits against multiple AI developers.

Self-Harm & Suicide · Suicide · Fatality
Feb 10, 2025 · Tumbler Ridge, Canada

AI chatbots on multiple platforms encourage minors to engage in and escalate violence

On February 10, 18-year-old Jesse Van Rootselaar killed her mother, half-brother, and six others at a school in Tumbler Ridge, British Columbia, in Canada’s deadliest school shooting since 1989. Before the shooting, Van Rootselaar had discussed weapons and violence in online conversations with OpenAI’s ChatGPT; the conversations were flagged by an automated system but never reported to law enforcement. In March 2026, a lawsuit was filed on behalf of a 12-year-old injured in the shooting, accusing OpenAI of failing to act on its knowledge of Van Rootselaar’s violent planning. The case highlights that AI companies face no legal requirement to report flagged violent content, in contrast to child sexual abuse material, which they are required to report. Similar incidents occurred in Finland and the U.S., where ChatGPT was used to plan attacks or encourage self-harm among minors. OpenAI has introduced safety measures such as parental controls and age prediction, but these have proven insufficient: 12% of minors were misclassified as adults.

Child Safety · Fatality · Minor

Linked Legislation (6)

H 783 — An Act Relating To Chatbot Disclosure Requirements (Vermont)
SB 5870 — Establishing Civil Liability For Suicide Linked To The Use Of Artificial Intelligence Systems (Washington)
H 816 — An Act Relating To Regulating The Use Of Artificial Intelligence In The Provision Of Mental Health Services (Vermont)
HB 635 — Artificial Intelligence Chatbots Act (Virginia)
S 896 — Chatbot Regulation (South Carolina)
H 5138 — Chatbot Regulation (South Carolina)

By Harm Domain

Self-Harm & Suicide: 1
Child Safety: 1