AI Chatbots Linked to Multiple Mass‑Casualty and Suicide Incidents Worldwide
Summary
Experts cite several recent cases in which AI chatbots were used to facilitate violence and self-harm. An 18-year-old in Canada used ChatGPT to plan a school shooting that killed eight people, then died by suicide. A 36-year-old in the United States, reportedly influenced by Google Gemini, attempted a mass-casualty attack at Miami International Airport and later died by suicide. A 16-year-old in Finland used ChatGPT to draft a manifesto before stabbing three classmates, and another teenager reportedly took their own life after receiving coaching from a chatbot. The incidents have spurred lawsuits against multiple AI developers.
Incident Details
- Content or interactions that contribute to self-harm, suicidal ideation, or eating disorders.
- Content or contact linked to suicidal ideation, attempts, or completion.
- Non-suicidal self-injury facilitated or encouraged through online interactions.