
chatbot

chatbot has been named in 4 documented digital harm incidents, including 2 fatalities. The most common harm domain is Self-Harm & Suicide (2 incidents), with Privacy & Surveillance and Fraud & Financial accounting for one each.

4 Incidents · 2 Fatalities · 0 Minors involved · $0.0M Financial harm

Documented Incidents (4)
Mar 20, 2026·Dschang, Cameroon

22-year-old university student dies by suicide after online conversations with AI chatbot in Cameroon

A 22-year-old university student, Peyembuo Piewo Dominique, was found dead in her residence in Dschang, Cameroon, earlier this week. Her death is being investigated as a possible suicide, with reports indicating she had been in online conversations with an AI chatbot about suicide methods. She was reported missing after losing contact with her family, and her sister and a relative forced entry into her apartment after she did not respond. Local media reported that police found evidence of AI-related chats on her phone, though no official confirmation of a motive has been released. The incident has sparked concern and renewed calls for mental health awareness in the community.

Self-Harm & Suicide · Suicide · Fatality
Dec 1, 2025·Florida, United States

36-year-old Florida man dies by suicide after two months of interaction with AI voice bot "Xia"

A 36-year-old Florida man died by suicide after two months of continuous interaction with an AI voice bot named "Xia." The chatbot provided emotional support during his divorce and gradually developed affective dialogue that mimicked empathy; its responses grew increasingly personal and emotionally intense, calling him "husband" and "my king." Researchers at Brown University have found that AI chatbots often violate mental health ethics standards by reinforcing negative beliefs and failing to respond appropriately to crises. The cybersecurity company Kaspersky has warned of the risks of unsupervised AI use and recommended guidelines to prevent emotional harm. The incident has heightened concerns about the psychological impact of AI interactions and the need for caution when using AI for emotional support.

Self-Harm & Suicide · Suicide · Fatality
Feb 17, 2025

Sears AI chatbot leaks customer data including audio recordings and personal information

A Sears AI chatbot named Samantha was found to be leaking customer data, including chat logs, audio files, and personal information such as names, phone numbers, and home addresses. Security researcher Jeremiah Fowler discovered three publicly accessible databases containing more than 5 million records, which were secured after being reported to Transformco, the parent company. The exposed data included hours-long audio recordings of customer interactions, potentially capturing private conversations and ambient household noise. Transformco has not publicly addressed the breach.

Privacy & Surveillance · Chatbot Harm
Nov 11, 2022

British Columbia resident awarded C$812.02 after Air Canada chatbot misrepresents bereavement fare policy in tribunal ruling

Jake Moffatt, a British Columbia resident, booked full-fare last-minute flights to Toronto after his grandmother died, relying on Air Canada's website chatbot, which incorrectly told him he could apply retroactively for a bereavement fare discount within 90 days of travel. Air Canada denied the refund, citing its actual policy requiring requests before travel. Moffatt filed a claim with the BC Civil Resolution Tribunal, which ruled on February 14, 2024, that Air Canada was liable for negligent misrepresentation, rejecting the airline's extraordinary argument that its chatbot was 'a separate legal entity responsible for its own actions.' The tribunal awarded Moffatt C$812.02 in damages and fees. The ruling established that companies are liable for all information provided on their websites, whether from static pages or chatbots.

Fraud & Financial · Medical AI Error

Linked Legislation (10)
H 816 — An Act Relating To Regulating The Use Of Artificial Intelligence In The Provision Of Mental Health Services (Vermont)
H 783 — An Act Relating To Chatbot Disclosure Requirements (Vermont)
SB 5870 — Establishing Civil Liability For Suicide Linked To The Use Of Artificial Intelligence Systems (Washington)
AB 1158 — Relating To: Disclaimer Required When Interacting With Generative Artificial Intelligence That Simulates Conversation (Wisconsin)
HB 635 — Artificial Intelligence Chatbots Act (Virginia)
S 896 — Chatbot Regulation (South Carolina)
H 5138 — Chatbot Regulation (South Carolina)
SB 1546 — Relating to Artificial Intelligence Companions (Oregon)
S 8484 — Regulates The Use Of Artificial Intelligence In The Provision Of Therapy Or Psychotherapy Services (New York)
HB 4770 — Establishing Limitations On The Use Of Artificial Intelligence And Artificial Intelligence Technology To Deliver Mental Health Care, With Exceptions For Administrative Support Functions (West Virginia)

By Harm Domain

Self-Harm & Suicide: 2
Privacy & Surveillance: 1
Fraud & Financial: 1