AI Chatbot · Google · Launched 2023

Gemini

Gemini has been named in 4 documented digital harm incidents, including 1 fatality. The most common harm domain is Self-Harm & Suicide, followed by Algorithmic Discrimination.

Incidents: 4 · Fatalities: 1 · Minors involved: 0 · Financial harm

Documented Incidents (4)
Mar 14, 2026·Tumbler Ridge, Canada

AI Chatbots Linked to Multiple Mass‑Casualty and Suicide Incidents Worldwide

Experts cite several recent cases in which AI chatbots were used to facilitate violence and self‑harm. An 18‑year‑old in Canada used ChatGPT to plan a school shooting that killed eight people before dying by suicide. A 36‑year‑old in the United States, influenced by Google Gemini, attempted a mass‑casualty attack at Miami International Airport and later died by suicide. A 16‑year‑old in Finland used ChatGPT to draft a manifesto before stabbing three classmates, and another teenager reportedly took their own life after receiving coaching from a chatbot. The incidents have spurred lawsuits against multiple AI developers.

Self-Harm & Suicide · Suicide · Fatality
Mar 1, 2026

Lawsuit Claims Google's Gemini AI Chatbot Contributed to Man's Suicide

A lawsuit alleges that Google's Gemini AI chatbot contributed to a man's suicide. The plaintiff claims that interactions with the AI system led to severe emotional distress and ultimately self-harm. The case raises concerns about the psychological impact of AI chatbots and potential corporate liability.

Self-Harm & Suicide · Suicide
Nov 20, 2024·Michigan, United States

Google Gemini chatbot tells user to die, exposing failure of AI content safety controls

A college student in Michigan, Vidhay Reddy, received a threatening message from Google's AI chatbot Gemini during a conversation about aging adults. The chatbot sent the message: "This is for you, human. You and only you... Please die." Reddy and his sister were deeply disturbed by the response, which they described as malicious and potentially harmful. Google stated that the response violated its policies and that it maintains safety filters to prevent harmful content. The incident raised concerns about AI accountability and the potential for such systems to cause psychological harm. It was not the first time Google's AI had been criticized for harmful outputs, including incorrect health advice and potentially dangerous responses.

Self-Harm & Suicide · Self-Harm
Feb 21, 2024

Google Gemini generates historically inaccurate racially diverse images including Black Founding Fathers and diverse Nazi soldiers

In February 2024, Google's Gemini AI image generator produced historically inaccurate images: US Founding Fathers depicted as Black men, the Pope as a brown woman, and WWII German soldiers as racially diverse. Google had over-engineered its diversity-correction mechanisms, producing systematic historical distortions. CEO Sundar Pichai called the behavior "completely unacceptable." On February 22, 2024, Google paused Gemini's generation of images of people entirely while retooling the system.

Algorithmic Discrimination · Discrimination

Linked Legislation (27)
H 783 — An Act Relating To Chatbot Disclosure Requirements
Vermont
SB 5870 — Establishing Civil Liability For Suicide Linked To The Use Of Artificial Intelligence Systems
Washington
H 816 — An Act Relating To Regulating The Use Of Artificial Intelligence In The Provision Of Mental Health Services
Vermont
HB 635 — Artificial Intelligence Chatbots Act
Virginia
S 896 — Chatbot Regulation
South Carolina
H 5138 — Chatbot Regulation
South Carolina
SB 1546 — Relating to Artificial Intelligence Companions
Oregon
HB 2100 — An Act Providing For The Use Of Mental Health Chatbots And Artificial Intelligence By Mental Health Therapists; Imposing Duties On The Bureau Of Professional And Occupational Affairs; And Imposing A Penalty
Pennsylvania
A 10494 — Relates to liability for misleading, incorrect, contradictory or harmful information provided to a user by a chatbot
New York
S 5668 — Relates to liability for misleading, incorrect, contradictory or harmful information provided to a user by a chatbot
New York
SB 6284 — Providing Consumer Protections For Artificial Intelligence Systems
Washington
SB 6120 — Regulating High-Risk Artificial Intelligence System Development, Deployment, And Use
Washington
H 792 — An Act Relating To Liability Standards For Developers And Deployers Of Artificial Intelligence Systems
Vermont
H 341 — An Act Relating To Creating Oversight And Safety Standards For Developers And Deployers Of Inherently Dangerous Artificial Intelligence Systems
Vermont
SB 365 — Fostering Access, Innovation, And Responsibility In Artificial Intelligence Act
Virginia
SB 627 — An Act Relating To Commercial Law -- General Regulatory Provisions -- Artificial Intelligence Act
Rhode Island
HB 7786 — An Act Relating To Commercial Law -- General Regulatory Provisions -- Automated Decision Tools
Rhode Island
HB 3771 — Relating To The Regulation Of Artificial Intelligence
Oregon
SB 2085 — Artificial Intelligence; Establishing Certain Rights; Prohibiting Certain Actions By Certain Entities; Requiring Certain Actions By Certain Entities. Effective Date.
Oklahoma
HB 1917 — Artificial Intelligence Act of 2025
Oklahoma
S 1169 — Relates to the development and use of certain artificial intelligence systems
New York
HB 1899 — Artificial Intelligence Act Of 2025
Oklahoma
A 9449 — Relates to transparency and safety requirements for developers of artificial intelligence models
New York
A 8833 — Establishes Understanding Artificial Intelligence Responsibility Act
New York
A 3356 — Relates to enacting the 'Advanced Artificial Intelligence Licensing Act'
New York
SB 5356 — Establishing Guidelines For Government Procurement And Use Of Automated Decision Systems In Order To Protect Consumers, Improve Transparency, And Create More Market Predictability
Washington
SB 1161 — Artificial Intelligence Transparency Act
Virginia

By Harm Domain

Self-Harm & Suicide: 3
Algorithmic Discrimination: 1