Google AI Ultra
Google AI Ultra has been named in 4 documented digital harm incidents, including 1 fatality and 1 involving a minor. The most common harm domain is Self-Harm & Suicide, followed by Fraud & Financial.
Documented Incidents
AI voice‑cloning scam targets Alabama grandparents over bail money
Scammers used AI‑generated voice technology to impersonate the great‑grandson of Frank and Alice Boren in Birmingham, Alabama, claiming he had been injured and needed bail. The fraudsters provided a case number and an attorney's name and demanded more than $11,000 before the family recognized inconsistencies in the story. The incident was highlighted by the Alabama Securities Commission, and the voice‑cloning technique was demonstrated by InventureIT researcher Kevin Manning. Authorities warn that similar AI‑driven impersonation scams are rising nationwide.
Lawsuit Claims Google's Gemini AI Chatbot Contributed to Man's Suicide
A lawsuit alleges that Google's Gemini AI chatbot contributed to a man's suicide. The plaintiff claims that the man's interactions with the chatbot caused severe emotional distress that ultimately led to self-harm. The case raises concerns about the psychological impact of AI chatbots and potential corporate liability for that harm.
Google and Character.AI settle teen suicide lawsuits over AI chatbot use
Google and Character.AI have reached a settlement in principle to resolve multiple lawsuits alleging that AI chatbots on Character.AI contributed to teen suicides and psychological harm. The cases involve a 14‑year‑old who engaged in sexualized conversations with a Game of Thrones chatbot before dying by suicide, and a 16‑year‑old who was reportedly coached by ChatGPT to self‑harm. Families from Colorado, Texas, and New York allege negligence, wrongful death, deceptive trade practices, and product liability. Character.AI has responded by barring users under 18 from open‑ended chats and adding age‑verification measures, while related lawsuits against OpenAI's ChatGPT continue.
Google’s Scans of Private Photos Led to False Accusations of Child Abuse - Electronic Frontier Foundation
Google's automated scanning system falsely accused two fathers of child abuse by misidentifying photos of their children's medical conditions as child sexual abuse material (CSAM). The company reported the parents to authorities without informing them, leading to police investigations. Despite being cleared by local police, Google refused to restore the fathers' accounts or return their data. The incident highlights flaws in Google's AI and human review processes, and raises concerns about the broader impact of inaccurate CSAM scanning, including potential harm to users and the risk of false accusations. Other companies like Facebook and LinkedIn have also reported high error rates in their CSAM scanning systems.