Facebook has been named in 14 documented digital harm incidents, including 1 fatality and 7 involving minors. The most common harm domain is Child Safety, followed by Privacy & Surveillance.
Documented Incidents
AI-generated child sexual abuse material overwhelms law enforcement in Indiana
Law enforcement agencies in Indiana are struggling to manage a surge in AI-generated child sexual abuse material (CSAM). Cases include a Fishers pastor's son accused of creating AI-generated photos of nude pregnant toddlers, an Elwood school custodian altering a student's Instagram photo, and a 71-year-old Evansville man convicted of using AI to generate explicit images of children under 12. Reports of AI-fueled CSAM increased from 4,700 in 2023 to over 1 million in the first nine months of 2025, according to the National Center for Missing and Exploited Children. These reports are sent to Indiana State Police’s Internet Crimes Against Children Task Force for investigation. Prosecutors and law enforcement warn that the growing volume of AI-generated content is overwhelming already overburdened forensic teams and that additional funding and resources are needed to address the crisis.
Elderly victims defrauded by AI voice cloning virtual kidnapping scams across the United States
In April 2023, an Arizona woman named Jennifer DeStefano received a call from an anonymous caller who claimed to have kidnapped her 15-year-old daughter and demanded a $1 million ransom. The caller played a deepfake audio of a child in distress, which was later identified as part of a virtual kidnapping scam. The scammer reduced the ransom to $50,000 during negotiations, but DeStefano discovered her daughter was safe and reported the incident to the police. Virtual kidnapping involves cybercriminals using AI voice cloning tools and social engineering to manipulate victims into paying ransoms by creating the illusion of a kidnapping. The FBI and Federal Trade Commission have warned about the increasing use of deepfake technology in scams, with impostor scams causing $2.6 billion in losses in 2022. These attacks often target parents by exploiting publicly available biometric data from social media platforms to create convincing audio evidence.
Facebook's System Approved Dehumanizing Hate Speech Inciting Genocide During Ethiopia Civil War
In June 2022, Global Witness and Foxglove tested Facebook's content moderation system by submitting ads containing dehumanizing hate speech inciting genocide in Ethiopia. Despite the explicit nature of the content, Facebook's system approved the ads for publication. After Global Witness flagged the results, Meta acknowledged the problem.
Global Witness Report: Facebook Approves Hate Speech Ads Targeting Rohingya in Myanmar
Global Witness found that Facebook approved advertisements containing hate speech targeting the Rohingya Muslim minority in Myanmar. Despite Facebook's claims of improved hate speech detection in Burmese, all eight test ads containing hate speech were approved for public display. The incident highlights concerns about Facebook's moderation practices and the algorithmic amplification of harmful content.
Facebook whistleblower Frances Haugen testifies on Instagram's harmful effects on children and societal division
Frances Haugen, a former Facebook employee, testified before the Senate Commerce Subcommittee, revealing internal research that showed Facebook was aware of Instagram's harmful effects on teenage girls' mental health. She accused the company of prioritizing profit over user safety and called for government intervention.
Facebook Documents Reveal Instagram's Harmful Impact on Teen Girls
Internal Facebook documents reveal that Instagram has a harmful impact on teenagers, particularly teen girls, with studies linking the platform to increased suicidal thoughts and body image issues. The company has acknowledged these findings but has struggled to address them while maintaining user engagement. The incident highlights concerns about the platform's effects on mental health and eating disorders.
Google's Scans of Private Photos Led to False Accusations of Child Abuse (Electronic Frontier Foundation)
Google's automated scanning system falsely accused two fathers of child abuse by misidentifying photos of their children's medical conditions as child sexual abuse material (CSAM). The company reported the parents to authorities without informing them, leading to police investigations. Despite being cleared by local police, Google refused to restore the fathers' accounts or return their data. The incident highlights flaws in Google's AI and human review processes, and raises concerns about the broader impact of inaccurate CSAM scanning, including potential harm to users and the risk of false accusations. Other companies like Facebook and LinkedIn have also reported high error rates in their CSAM scanning systems.
Facebook collects Illinois users' biometric data without consent, $650 million BIPA settlement
Illinois Facebook users who participated in a $650 million biometric privacy settlement received a third and final payment of $7.20 in early December 2023. The settlement, approved in February 2021, was the result of an 8.5-year lawsuit filed in 2015 by Chicago attorney Jay Edelson on behalf of plaintiff Carlo Licata, alleging Facebook violated Illinois privacy law by using facial recognition without consent. The settlement covered about 7 million Illinois Facebook users who had face templates created after June 7, 2011, with over 1 million claimants receiving a total of about $435 each after three payments. The Illinois Biometric Information Privacy Act (BIPA), passed in 2008, requires companies to obtain consent before using biometric data. The settlement’s remaining funds will be donated to the American Civil Liberties Union of Illinois after the final distribution.
Ring collects customer facial biometric data without consent, class action survives dismissal
A class action lawsuit was filed against Amazon’s Ring video doorbell service by plaintiff Michelle Wise, alleging violations of the Illinois Biometric Information Privacy Act (BIPA) due to the collection and storage of facial biometric data without consent. The lawsuit, filed in federal court in Seattle, claims Ring captures and stores facial recognition data from visitors and passersby without their knowledge or consent. On August 3, 2020, U.S. District Judge John C. Coughenour denied Ring’s motion to dismiss the case, stating it was too early to dismiss given the legal uncertainty surrounding the application of BIPA in such cases. The lawsuit also alleges that Ring shares video footage with employees in an unencrypted manner and previously partnered with law enforcement to match faces with databases, raising privacy concerns. The case follows a precedent set by a $550 million Facebook settlement related to similar biometric data practices.
Clearview AI's Facial Recognition App and Privacy Concerns Exposed by New York Times
Clearview AI, a secretive company founded by Hoan Ton-That and Richard Schwartz, developed a facial recognition app that scrapes over 3 billion images from social media and other websites. The app is used by over 600 law enforcement agencies to solve crimes but raises serious privacy concerns. The New York Times exposed the company's operations, highlighting the potential threat to privacy as we know it.
NYT Investigation on Surge in Online Child Sexual Abuse Material
The New York Times reports that the number of online images and videos depicting child sexual abuse has reached a record high, with over 45 million reported in the past year. Despite efforts by tech companies, law enforcement, and legislation, the problem has continued to grow due to inadequate policies and enforcement. The article highlights the involvement of platforms such as Facebook Messenger, Microsoft's Bing, and Dropbox.
Cambridge Analytica harvests Facebook data of 87 million users without consent for political targeting
In March 2018, The Guardian and The New York Times revealed that Cambridge Analytica had harvested the personal data of up to 87 million Facebook users without their consent. The data was used for political targeting, including efforts to influence the 2016 U.S. presidential election and the Brexit vote, and had been collected through a personality-quiz app called 'thisisyourdigitallife', raising significant concerns about privacy and surveillance.
Russia's Internet Research Agency targets U.S. with social media disinformation during 2016 election
The Senate Intelligence Committee revealed that Russia's Internet Research Agency used social media platforms including Facebook, Instagram, and Twitter to target African Americans and spread disinformation aimed at sowing racial discord during the 2016 U.S. election. The agency's content was heavily focused on race-related themes. This incident highlights foreign interference through digital platforms during a critical U.S. political event.
Facebook Emotional Contagion Experiment Without User Consent
In 2012, Facebook conducted a study where it manipulated the news feeds of nearly 700,000 users to observe emotional responses, altering content to be more positive or negative. The experiment was carried out without explicit user consent beyond the general terms of data use. The incident sparked significant controversy over user privacy and ethical research practices.