Google’s Scans of Private Photos Led to False Accusations of Child Abuse - Electronic Frontier Foundation

Feb 1, 2021 · San Francisco, United States · 1 source

Summary

Google's automated scanning system falsely accused two fathers of child abuse after misidentifying photos of their children's medical conditions as child sexual abuse material (CSAM). The company reported both parents to authorities without informing them, triggering police investigations. Even after local police cleared the fathers, Google refused to restore their accounts or return their data. The incident highlights flaws in Google's AI and human review processes and raises concerns about the broader impact of inaccurate CSAM scanning, including harm to innocent users and the risk of false accusations. Other companies, including Facebook and LinkedIn, have also reported high error rates in their CSAM scanning systems.

Incident Details

Domain
Child Safety

Harms involving the exploitation, abuse, or endangerment of minors, including CSAM and grooming.

Harm Types
CSAM

Child sexual abuse material — the creation, distribution, or possession of such content.

Non-Consensual Imagery
Mechanism
Content
Recipient
Individual
Mark (San Francisco father) and Cassio (Houston father)
Dimensions
Reputational, Psychological, Autonomy, Economic