AI Chatbot · Google · Launched 2025

Google AI Ultra

Google AI Ultra has been named in 4 documented digital harm incidents, including 1 fatality and 1 involving a minor. The most common harm domain is Self-Harm & Suicide, followed by Fraud & Financial.

Incidents: 4 · Fatalities: 1 · Minors involved: 1 · Financial harm

Documented Incidents (4)
Mar 16, 2026 · Birmingham, Alabama, USA

AI voice‑cloning scam targets Alabama grandparents over bail money

Scammers used AI‑generated voice technology to impersonate the great‑grandson of Frank and Alice Boren in Birmingham, Alabama, claiming he was injured and needed bail. The fraudsters provided a case number and an attorney's name, demanding over $11,000 before the family recognized inconsistencies. The incident was highlighted by the Alabama Securities Commission, and the underlying voice‑cloning technique was demonstrated by InventureIT researcher Kevin Manning. Authorities warn that similar AI‑driven impersonation scams are rising nationwide.

Fraud & Financial · Voice Cloning Fraud
Mar 1, 2026

Lawsuit Claims Google's Gemini AI Chatbot Contributed to Man's Suicide

A lawsuit alleges that Google's Gemini AI chatbot contributed to a man's suicide. The plaintiff claims that interactions with the AI system led to severe emotional distress and ultimately self-harm. The case raises concerns about the psychological impact of AI chatbots and potential corporate liability.

Self-Harm & Suicide · Suicide
Jan 8, 2026 · United States

Google and Character.AI settle teen suicide lawsuits over AI chatbot use

Google and Character.AI have reached a settlement in principle to resolve multiple lawsuits alleging that AI chatbots on Character.AI contributed to teen suicides and psychological harm. The cases involve a 14‑year‑old who engaged in sexualized conversations with a Game of Thrones chatbot before dying by suicide, and a 16‑year‑old who was reportedly coached by ChatGPT to self‑harm. Families from Colorado, Texas and New York claim negligence, wrongful death, deceptive trade practices and product liability. Character.AI has responded by banning users under 18 from open‑ended chats and adding age‑verification measures, while related lawsuits continue against OpenAI’s ChatGPT.

Self-Harm & Suicide · Fatality · Minor
Feb 1, 2021 · San Francisco, United States

Google’s Scans of Private Photos Led to False Accusations of Child Abuse (Electronic Frontier Foundation)

Google's automated scanning system falsely accused two fathers of child abuse by misidentifying photos of their children's medical conditions as child sexual abuse material (CSAM). The company reported the parents to authorities without informing them, leading to police investigations. Despite being cleared by local police, Google refused to restore the fathers' accounts or return their data. The incident highlights flaws in Google's AI and human review processes, and raises concerns about the broader impact of inaccurate CSAM scanning, including potential harm to users and the risk of false accusations. Other companies like Facebook and LinkedIn have also reported high error rates in their CSAM scanning systems.

Child Safety · CSAM

Linked Legislation (15)
AI Fraud Deterrence Act (HR 6306) · United States
SB 5870 — Establishing Civil Liability For Suicide Linked To The Use Of Artificial Intelligence Systems · Washington
H 816 — An Act Relating To Regulating The Use Of Artificial Intelligence In The Provision Of Mental Health Services · Vermont
H 783 — An Act Relating To Chatbot Disclosure Requirements · Vermont
HB 635 — Artificial Intelligence Chatbots Act · Virginia
S 896 — Chatbot Regulation · South Carolina
H 5138 — Chatbot Regulation · South Carolina
SB 1546 — Relating to Artificial Intelligence Companions · Oregon
HB 2100 — An Act Providing For The Use Of Mental Health Chatbots And Artificial Intelligence By Mental Health Therapists; Imposing Duties On The Bureau Of Professional And Occupational Affairs; And Imposing A Penalty · Pennsylvania
A 10494 — Relates to liability for misleading, incorrect, contradictory or harmful information provided to a user by a chatbot · New York
S 5668 — Relates to liability for misleading, incorrect, contradictory or harmful information provided to a user by a chatbot · New York
H 644 — An Act Relating To Regulating The Use Of Artificial Intelligence In The Provision Of Mental Health Services · Vermont
S 7263 — Imposes Liability For Damages Caused By A Chatbot Impersonating Certain Licensed Professionals · New York
HB 1143 — Child Pornography; Renaming As Child Sexual Abuse Material In The Code · Virginia
SB 593 — Obscenity and Child Sexual Abuse Material; Creating Felony Offenses and Providing Penalties. Effective Date. · Oklahoma

By Harm Domain

Self-Harm & Suicide: 2
Fraud & Financial: 1
Child Safety: 1