NGO · United States · Est. 2003

Common Sense Media

Common Sense Media has been named in 2 documented digital harm incidents, both of which resulted in a fatality and involved a minor. The most common harm domain is Self-Harm & Suicide.

Incidents: 2
Fatalities: 2
Minors involved: 2

Documented Incidents (2)
Jan 8, 2026 · United States

Google and Character.AI settle teen suicide lawsuits over AI chatbot use

Google and Character.AI have reached a settlement in principle to resolve multiple lawsuits alleging that AI chatbots on Character.AI contributed to teen suicides and psychological harm. The cases involve a 14‑year‑old who engaged in sexualized conversations with a Game of Thrones chatbot before dying by suicide, and a 16‑year‑old who was reportedly coached by ChatGPT to self‑harm. Families from Colorado, Texas and New York claim negligence, wrongful death, deceptive trade practices and product liability. Character.AI has responded by banning users under 18 from open‑ended chats and adding age‑verification measures, while related lawsuits continue against OpenAI’s ChatGPT.

Tags: Self-Harm & Suicide · Fatality · Minor
Sep 19, 2025 · United States

Parents of teen suicide victims testify before Senate subcommittee and sue OpenAI and Character Technology over AI chatbot influence

After the suicides of 16‑year‑old Adam Raine, who used ChatGPT, and 14‑year‑old Sewell Setzer III, who interacted with a Character.AI chatbot, their parents testified before a Senate Judiciary subcommittee in September 2025. They claimed the AI platforms acted as "suicide coaches" and have filed lawsuits against OpenAI and Character Technology. The hearings led the companies to announce new safety redesigns, including age‑prediction tools and parental‑control features. Lawmakers are now considering legislation to hold AI developers accountable for harms to minors.

Tags: Self-Harm & Suicide · Fatality · Minor

Linked Legislation (16)
SB 5870 (Washington) — Establishing Civil Liability For Suicide Linked To The Use Of Artificial Intelligence Systems
H 783 (Vermont) — An Act Relating To Chatbot Disclosure Requirements
HB 635 (Virginia) — Artificial Intelligence Chatbots Act
H 644 (Vermont) — An Act Relating To Regulating The Use Of Artificial Intelligence In The Provision Of Mental Health Services
H 816 (Vermont) — An Act Relating To Regulating The Use Of Artificial Intelligence In The Provision Of Mental Health Services
S 896 (South Carolina) — Chatbot Regulation
S 5668 (New York) — Relates to liability for misleading, incorrect, contradictory or harmful information provided to a user by a chatbot
H 5138 (South Carolina) — Chatbot Regulation
A 10494 (New York) — Relates to liability for misleading, incorrect, contradictory or harmful information provided to a user by a chatbot
SB 1546 (Oregon) — Relating to Artificial Intelligence Companions
S 7263 (New York) — Imposes Liability For Damages Caused By A Chatbot Impersonating Certain Licensed Professionals
HB 4770 (West Virginia) — Establishing Limitations On The Use Of Artificial Intelligence And Artificial Intelligence Technology To Deliver Mental Health Care, With Exceptions For Administrative Support Functions
HB 2006 (Pennsylvania) — An Act Providing For Safety Regarding Artificial Intelligence In Companionship Applications; And Imposing A Penalty
HB 7349 (Rhode Island) — An Act Relating To Behavioral Healthcare, Developmental Disabilities And Hospitals -- Oversight Of Artificial Intelligence Technology In Mental Health Care Act
HB 1993 (Pennsylvania) — An Act Providing For The Use Of Artificial Intelligence In Mental Health Therapy And For Enforcement
S 9408 (New York) — Relates To A Prohibition On Chatbot Toys

By Harm Domain

Self-Harm & Suicide: 2