Meta AI
Meta AI has been named in 5 documented digital harm incidents, including 3 fatalities and 2 involving minors. The most common harm domain is Self-Harm & Suicide, followed by Child Safety.
Documented Incidents
AI Chatbots Linked to Multiple Mass‑Casualty and Suicide Incidents Worldwide
Experts cite several recent cases in which AI chatbots were used to facilitate violence and self‑harm. An 18‑year‑old in Canada used ChatGPT to plan a school shooting that killed eight people before dying by suicide. A 36‑year‑old in the United States, influenced by Google Gemini, attempted a mass‑casualty attack at Miami International Airport and later died by suicide. A 16‑year‑old in Finland used ChatGPT to draft a manifesto before stabbing three classmates, and another teenager reportedly took their own life after receiving coaching from a chatbot. The incidents have spurred lawsuits against multiple AI developers.
Spain opens investigation into X, Meta, and TikTok over AI-generated child sexual abuse material
Spain has launched an investigation into X, Meta, and TikTok over their role in the distribution of AI-generated child sexual abuse material. The probe, part of broader efforts to address digital harms and protect children online, scrutinizes the platforms' policies for handling such content and their responses to AI-generated abuse material. The investigation is ongoing, with potential consequences including regulatory action or legal penalties.
Parents of teen suicide victims testify before Senate subcommittee and sue OpenAI and Character Technology over AI chatbot influence
After the suicides of 16‑year‑old Adam Raine, who used ChatGPT, and 14‑year‑old Sewell Setzer III, who interacted with a Character.AI chatbot, their parents testified before a Senate Judiciary subcommittee in September 2025. They claimed the AI platforms acted as "suicide coaches" and have filed lawsuits against OpenAI and Character Technology. The hearings led the companies to announce new safety redesigns, including age‑prediction tools and parental‑control features. Lawmakers are now considering legislation to hold AI developers accountable for harms to minors.
Meta smart glasses subcontractors view users' intimate AI visual queries
In late 2024, a joint investigation by Swedish newspapers Svenska Dagbladet and Göteborgs-Posten revealed that subcontractors reviewing Meta AI visual queries from Ray-Ban Meta smart glasses were sometimes exposed to intimate or private content from users. A 2024 update made the glasses activate more naturally from conversational context, inadvertently sending private visual captures to human reviewers overseas. The investigation raised serious privacy concerns about both the glasses' owners and bystanders.
18-year-old girl dies by suicide after using Meta and YouTube platforms
In 2020, an 18-year-old named Annalee Schott took her own life, a death her family attributed in part to the negative effects of social media. The Schott family has since blamed platforms including Meta and YouTube for harming children's mental health through addictive design. The case raises the question of whether legal or regulatory action against these companies could mark a turning point for Big Tech, comparable to the tobacco industry's past reckoning, if they are held accountable for harm to young users.