AI Chatbot · Meta · Launched 2023

Meta AI

Meta AI has been named in 5 documented digital harm incidents, including 3 fatalities and 2 involving minors. The most common harm domain is Self-Harm & Suicide, followed by Child Safety.

5 Incidents · 3 Fatalities · 2 Minors involved

Documented Incidents (5)
Mar 14, 2026 · Tumbler Ridge, Canada

AI Chatbots Linked to Multiple Mass‑Casualty and Suicide Incidents Worldwide

Experts cite several recent cases in which AI chatbots were used to facilitate violence and self-harm. An 18-year-old in Canada used ChatGPT to plan a school shooting that killed eight people, then died by suicide. A 36-year-old in the United States, influenced by Google Gemini, attempted a mass-casualty attack at Miami International Airport and later died by suicide. A 16-year-old in Finland used ChatGPT to draft a manifesto before stabbing three classmates, and another teenager reportedly took their own life after receiving coaching from a chatbot. The incidents have spurred lawsuits against multiple AI developers.

Self-Harm & Suicide · Suicide · Fatality
Oct 1, 2025 · Spain

Spain opens investigation into X, Meta, and TikTok over AI-generated child sexual abuse material

Spain has launched an investigation into X, Meta, and TikTok over the distribution of AI-generated child sexual abuse material on their platforms. The probe, part of broader efforts to address digital harms and protect children online, scrutinizes each company's policies and its handling of and responses to such content. The investigation is ongoing, with potential consequences including regulatory action or legal penalties.

Child Safety · CSAM · Minor
Sep 19, 2025 · United States

Parents of teen suicide victims testify before Senate subcommittee and sue OpenAI and Character Technology over AI chatbot influence

After the suicides of 16‑year‑old Adam Raine, who used ChatGPT, and 14‑year‑old Sewell Setzer III, who interacted with a Character.AI chatbot, their parents testified before a Senate Judiciary subcommittee in September 2025. They claimed the AI platforms acted as "suicide coaches" and have filed lawsuits against OpenAI and Character Technology. The hearings led the companies to announce new safety redesigns, including age‑prediction tools and parental‑control features. Lawmakers are now considering legislation to hold AI developers accountable for harms to minors.

Self-Harm & Suicide · Fatality · Minor
Oct 1, 2024

Meta smart glasses subcontractors view users' intimate AI visual queries

In late 2024, a joint investigation by the Swedish newspapers Svenska Dagbladet and Göteborgs-Posten revealed that subcontractors reviewing Meta AI visual queries from Ray-Ban Meta smart glasses were sometimes exposed to intimate or private user content. A 2024 update made the glasses activate more readily from conversational context, inadvertently sending private visual captures to human reviewers overseas. The investigation raised serious concerns about the privacy of both the glasses' owners and bystanders.

Privacy & Surveillance · Unauthorized Surveillance
Jan 1, 2020

18-year-old girl dies by suicide after using Meta and YouTube platforms

In 2020, an 18-year-old named Annalee Schott took her own life, which her family attributed in part to the negative effects of social media. The Schott family has since blamed platforms including Meta and YouTube for harming children's mental health through addictive design. The article asks whether legal or regulatory action against these companies could mark a turning point for Big Tech, comparable to the tobacco industry's past reckoning, if they are held accountable for harm to youth.

Self-Harm & Suicide · Fatality

Linked Legislation (16)
H 783 (Vermont) — An Act Relating To Chatbot Disclosure Requirements
SB 5870 (Washington) — Establishing Civil Liability For Suicide Linked To The Use Of Artificial Intelligence Systems
H 816 (Vermont) — An Act Relating To Regulating The Use Of Artificial Intelligence In The Provision Of Mental Health Services
HB 635 (Virginia) — Artificial Intelligence Chatbots Act
S 896 (South Carolina) — Chatbot Regulation
H 5138 (South Carolina) — Chatbot Regulation
SB 6184 (Washington) — Concerning Deepfake Artificial Intelligence-Generated Pornographic Material Involving Minors
HB 4770 (West Virginia) — Establishing Limitations On The Use Of Artificial Intelligence And Artificial Intelligence Technology To Deliver Mental Health Care, With Exceptions For Administrative Support Functions
H 644 (Vermont) — An Act Relating To Regulating The Use Of Artificial Intelligence In The Provision Of Mental Health Services
SB 1546 (Oregon) — Relating to Artificial Intelligence Companions
HB 2006 (Pennsylvania) — An Act Providing For Safety Regarding Artificial Intelligence In Companionship Applications; And Imposing A Penalty
HB 7349 (Rhode Island) — An Act Relating To Behavioral Healthcare, Developmental Disabilities And Hospitals -- Oversight Of Artificial Intelligence Technology In Mental Health Care Act
HB 1993 (Pennsylvania) — An Act Providing For The Use Of Artificial Intelligence In Mental Health Therapy And For Enforcement
S 9408 (New York) — Relates To A Prohibition On Chatbot Toys
SB 468 (California) — High-risk artificial intelligence systems: duty to protect personal information
SB 6120 (Washington) — Regulating High-Risk Artificial Intelligence System Development, Deployment, And Use

By Harm Domain

Self-Harm & Suicide: 3
Child Safety: 1
Privacy & Surveillance: 1