Company · United States · Est. 2004

Meta

Meta has been named in 49 documented digital harm incidents, including 9 fatalities and 22 involving minors. The most common harm domain is Misinfo & Disinfo, followed by Child Safety.

49 Incidents · 9 Fatalities · 22 Minors involved

Documented Incidents (49)
Mar 25, 2026·Los Angeles, United States

20-year-old woman awarded $4.2 million after Meta and YouTube found liable for mental health harm via addictive platform design

On March 25, juries in Los Angeles, California, ruled that Meta and YouTube were liable for negligence in a case involving youth addiction and mental health. The plaintiff, a now 20-year-old woman known as Kaley G.M., claimed she became addicted to Instagram and YouTube during grade school, which contributed to her anxiety and depression. Meta was ordered to pay $4.2 million in damages, and YouTube was ordered to pay $1.8 million. The case is significant because it challenges Section 230 of the Communications Decency Act, which has previously shielded social media companies from liability. The ruling sets a legal precedent by suggesting that social media platforms can be held responsible for personal injury caused by their product design. Meta has stated it is considering an appeal.

Addiction & Mental Health · Addiction · Minor
Mar 14, 2026·Tumbler Ridge, Canada

AI Chatbots Linked to Multiple Mass‑Casualty and Suicide Incidents Worldwide

Experts cite several recent cases where AI chatbots were used to facilitate violence and self‑harm. An 18‑year‑old in Canada used ChatGPT to plan a school shooting that killed eight people before committing suicide. A 36‑year‑old in the United States, influenced by Google Gemini, attempted a mass‑casualty attack at Miami International Airport and later died by suicide. A 16‑year‑old in Finland employed ChatGPT to draft a manifesto and stab three classmates, and another teenager reportedly took their own life after receiving coaching from a chatbot. The incidents have spurred lawsuits against multiple AI developers.

Self-Harm & Suicide · Suicide · Fatality
Mar 14, 2026·Los Angeles, California, USA

Meta and Google sued over design features alleged to create child addiction in Los Angeles trial

A federal trial in Los Angeles is examining claims that Meta and Google deliberately engineered features such as infinite scroll, autoplay videos, and constant notifications to foster addiction among children. Plaintiffs argue these design elements function like a drug, citing internal documents and testimony from former Meta employee Arturo Béjar. The companies contend they have taken steps to make their platforms safer. The case is being compared to historic tobacco litigation and could set precedents for corporate responsibility in digital product design.

Addiction & Mental Health · Addiction
Feb 18, 2026·Los Angeles

Zuckerberg Testifies in Landmark Teen Social Media Addiction Trial in Los Angeles

Meta CEO Mark Zuckerberg testified in person at a Los Angeles trial brought by KGM, a 20-year-old plaintiff who claims compulsive Instagram use worsened her mental health. Zuckerberg acknowledged that Meta had improved its age verification and safety features but admitted the company had not acted quickly enough. Plaintiffs' lawyers challenged his testimony, arguing Meta's platform design intentionally creates addiction in young users. The trial is one of a series of bellwether cases that could shape hundreds of similar lawsuits nationwide.

Child Safety · Minor
Feb 6, 2026·Los Angeles, California, USA

Los Angeles woman loses $81,000 and home in AI deepfake romance scam

In Los Angeles, California, a woman identified as Abigail was targeted by a deep‑fake romance scam that began on Facebook and continued on WhatsApp. Scammers used AI‑generated video and voice to impersonate actor Steve Burton, persuading her to send gift cards, cash and cryptocurrency totaling $81,000. They then pressured her to sell her condominium at a steep discount to a wholesale real‑estate company, causing her to lose the equity and her home. The LAPD recorded the losses, but the funds were not recovered, and the family pursued a civil lawsuit to contest the sale.

Fraud & Financial
Jan 10, 2026·Bangladesh

AI-generated deepfake videos spread political disinformation in Bangladesh without platform intervention

AI-generated videos are spreading disinformation online in Bangladesh ahead of the 13th national election. A video featuring a woman resembling Rikta, a garment worker who lost her arm in the 2013 Rana Plaza collapse, falsely accused a political party of fraud and was shared over 21,000 times on the Uttarbanga Television Facebook page. The video, uploaded on 10 January, was identified as AI-generated after fact-checking by Prothom Alo. The Representation of the People Order prohibits the use of AI to create misleading content during elections, but such content continues to circulate. The Bangladesh Army issued a warning on 14 January about AI-generated videos misrepresenting military personnel, but the videos remain online. Authorities have yet to take action, despite the potential for such content to incite violence or confusion among voters.

Misinfo & Disinfo · Disinformation
Dec 18, 2025·Dhaka, Bangladesh

Amnesty warns Meta over Facebook-fueled attacks on Bangladeshi media ahead of 2026 elections

Amnesty International issued a warning that Bangladesh faced heightened risk of human‑rights abuses ahead of its February 2026 parliamentary elections due to harmful content on Meta's Facebook platform. Misleading and inflammatory posts, many traced to India, amplified sectarian narratives and labeled local outlets The Daily Star and Prothom Alo as "Indian agents" and "anti‑national forces," sparking mob attacks on their offices in Dhaka on 18 December 2025. Bangladeshi authorities reported the incidents to Meta, citing delays in removing violent content, and Amnesty called for emergency mitigation measures and stronger safeguards to prevent online incitement from translating into real‑world violence.

Misinfo & Disinfo
Dec 17, 2025·Shippensburg, Pennsylvania, USA; Dunblane, Scotland

Families sue Meta over teen suicides linked to Instagram sextortion scams

Two families filed a lawsuit in Delaware against Meta, alleging that Instagram's platform enabled sextortion scams that drove two teenage boys—13‑year‑old Levi Maciejewski in Pennsylvania and 16‑year‑old Murray Dowey in Scotland—to die by suicide. The plaintiffs contend that Instagram’s default public settings and allowance of direct messages from strangers left minors vulnerable to blackmail, and that Meta ignored known risks despite internal records. Meta claims to have introduced safety measures such as private accounts for minors, but the families argue these steps came too late. The suit seeks compensatory and punitive damages and adds to a growing number of sextortion‑related lawsuits against the company.

Self-Harm & Suicide · Fatality · Minor
Dec 1, 2025·New Delhi, India

Gautam Gambhir files lawsuit seeking ₹2.5 crore after deepfake used to impersonate him

India's cricket head coach Gautam Gambhir filed a civil suit in the Delhi High Court in late 2025, seeking ₹2.5 crore in damages for the unauthorized use of his name, image, and voice in deepfake content. The case involves 16 defendants, including social media accounts, e-commerce platforms like Amazon and Flipkart, and tech companies such as Meta, Google, and YouTube. Gambhir's legal team claims that fabricated videos, including one falsely showing his resignation, have circulated widely on social media and been used for financial gain. The case is being heard under the Copyright Act, 1957, the Trade Marks Act, 1999, and the Commercial Courts Act, 2015, and seeks immediate removal of the content and a permanent injunction against future misuse. Legal experts suggest the case could set a precedent for protecting digital personality rights in India amid rising concerns over AI-driven fraud and misinformation.

Fraud & Financial · Deepfake Fraud
Oct 1, 2025·Spain

Spain opens investigation into X, Meta, and TikTok over AI-generated child sexual abuse material

Spain has launched an investigation into X, Meta, and TikTok over the distribution of AI-generated child sexual abuse material on their platforms. The probe scrutinizes the companies' policies on, and handling of, such content, and forms part of broader efforts to address digital harms and protect children online. The investigation is ongoing, with potential consequences including regulatory action or legal penalties.

Child Safety · CSAM · Minor
Sep 30, 2025·United States

Scammers spend $49 million on Meta deepfake political advertising targeting vulnerable users

Scammers spent $49 million on Meta platforms, including Facebook and Instagram, using deepfake videos of U.S. politicians and celebrities to promote fraudulent government benefit schemes, according to a report by the Tech Transparency Project. The investigation identified 63 scam advertisers responsible for over 150,000 political scam ads, often targeting seniors with fake stimulus checks and Medicare benefits. These ads used AI-generated deepfake videos to create a false sense of legitimacy. Despite Meta's policies against such scams and requirements for political ad verification, many ads remained online for days or weeks before removal. Nearly half of the scam advertisers were still active as of late September 2025. The incident has raised concerns about Meta's content moderation and ad review systems, prompting calls for stronger controls and transparency in online political advertising.

Fraud & Financial · Deepfake Fraud
Sep 19, 2025·United States

Parents of teen suicide victims testify before Senate subcommittee and sue OpenAI and Character Technology over AI chatbot influence

After the suicides of 16‑year‑old Adam Raine, who used ChatGPT, and 14‑year‑old Sewell Setzer III, who interacted with a Character.AI chatbot, their parents testified before a Senate Judiciary subcommittee in September 2025. They claimed the AI platforms acted as "suicide coaches" and have filed lawsuits against OpenAI and Character Technology. The hearings led the companies to announce new safety redesigns, including age‑prediction tools and parental‑control features. Lawmakers are now considering legislation to hold AI developers accountable for harms to minors.

Self-Harm & Suicide · Fatality · Minor
Sep 9, 2025·California

OpenAI launches teen-specific ChatGPT version ahead of Senate hearing on AI chatbot harm to minors

OpenAI announced a new "ChatGPT experience with age-appropriate policies" for teenagers in response to growing concerns about AI chatbot safety, particularly following a lawsuit filed by California parents whose teenage son died by suicide after interactions with ChatGPT. The company plans to implement a system to determine whether a user is under 18 and automatically filter content accordingly, including blocking graphic sexual material and potentially involving law enforcement in cases of acute distress. The announcement came ahead of a Senate Judiciary subcommittee hearing on AI chatbot harms scheduled for September 2025. Senator Josh Hawley (R-MO), who chairs the subcommittee, has been vocal about the risks AI poses to children and has previously called for investigations into Meta's AI chatbot. OpenAI's CEO, Sam Altman, stated the company will prioritize safety over privacy and freedom for teens, defaulting to the under-18 experience when age is uncertain. Parental control features were set to launch by the end of September.

Child Safety · Fatality · Minor
Mar 13, 2025·Los Angeles, United States

Los Angeles jury deliberates whether Meta and YouTube are liable for social media addiction harming Kaley

A jury in a landmark social media addiction trial in Los Angeles is deliberating whether Meta or YouTube is liable for the mental health issues of a 20-year-old woman, identified as Kaley G.M., who claims the platforms contributed to her depression and suicidal thoughts as a child. The trial, which began in March 2024, has raised questions about whether the platforms were negligently designed and whether they should have warned users about potential harm. Kaley testified that she became addicted to YouTube and Instagram starting at age six, though she also described family-related trauma. The case could set a precedent for thousands of similar lawsuits, as it challenges the legal protection provided by Section 230 of the US Communications Decency Act. The jury is considering whether Meta or YouTube were "substantial factors" in causing Kaley’s mental health struggles and how much in damages should be awarded. The trial highlights growing concerns about the impact of social media on vulnerable young users and the responsibility of tech companies for harmful content and design.

Addiction & Mental Health · Addiction · Minor
Mar 1, 2025

Nearly one in five teen Instagram users report receiving unwanted nude images via the platform

A court filing revealed that nearly 20% of Instagram users aged 13 to 15 reported seeing unwanted nudity or sexual images on the platform, according to a 2021 survey cited in a March 2025 deposition of Instagram head Adam Mosseri. The filing was part of a federal lawsuit in California and reviewed by Reuters. Meta, which owns Instagram, does not typically share survey results and has faced global criticism and lawsuits over the alleged harmful effects of its platforms on minors. The company announced in late 2025 that it would remove explicit content for teen users, with exceptions for medical or educational material. Additionally, 8% of users in the same age group reported seeing self-harm or threats of self-harm on Instagram. Most explicit content was shared via private messages, which Meta avoids reviewing due to privacy concerns.

Child Safety · Minor
Feb 1, 2025·Silicon Valley

Former Meta employees allege age discrimination in company layoffs

Meta is facing a lawsuit alleging age discrimination in its recent layoffs, with former senior director Nicolas Franchet claiming he was unfairly targeted due to his age, resulting in the loss of nearly $12 million in unvested stock. The lawsuit, filed in 2025, accuses Meta of disproportionately laying off employees over 40, a pattern also seen in companies like Google and IBM. Franchet, who had a long tenure at Meta and received positive performance reviews, argues that his dismissal was motivated by age bias rather than performance issues. The legal case is being investigated by employment law firm Sanford Heisler, which is examining potential violations of workplace discrimination laws and the WARN Act. Meta has denied claims of a 20% workforce reduction plan and attributes layoffs to efforts to increase efficiency and invest in artificial intelligence. The case has intensified scrutiny of Silicon Valley's hiring practices and could set a precedent for future age discrimination claims in the tech industry.

Algorithmic Discrimination · Discrimination
Jan 1, 2025·United States

Meta's AI detection tool sends thousands of false child abuse tips to US law enforcement

U.S. child abuse investigators have accused Meta of flooding the Department of Justice with low-quality or irrelevant tips generated by its AI system, which is designed to detect and report potential child abuse material as part of Meta's child safety efforts. Investigators say the AI produces a high volume of false or unactionable alerts, raising concerns about the effectiveness of automated systems in identifying real child abuse cases. The issue was reported by The Guardian.

Child Safety · CSAM · Minor
Nov 1, 2024

Meta removes 2 million accounts linked to pig butchering scam networks across its platforms

Meta removed over 2 million accounts linked to "pig-butchering" scams in 2024, which involve scammers building fake online relationships to defraud victims of cryptocurrency investments. The scams often begin on dating apps or social media platforms like Facebook, Instagram, and WhatsApp, before moving to Telegram, which is known for limited moderation. In September 2024, the FBI reported that victims lost nearly $4 billion to crypto investment scams, primarily pig-butchering. Meta announced new measures, including automatically flagging potential scam messages and collaborating with other tech companies through the Tech Against Scams coalition. The company also took down accounts linked to a scam operation in Cambodia, which had used AI tools like ChatGPT to communicate with victims. Critics, however, argue that these efforts are insufficient and too slow to address the growing scale of the problem.

Fraud & Financial
Nov 1, 2024·California, United States

NSO Group Found Liable for WhatsApp Pegasus Spyware Hacking in U.S. Court

NSO Group, a commercial spyware company, was found liable in a U.S. court for hacking WhatsApp users through its Pegasus software. The ruling marks the first time a spyware company has been held legally accountable in the U.S. for such actions. New evidence revealed that NSO used U.S.-based servers to deploy the spyware, leading to a $167 million damages verdict. The case involves Meta, Apple, and the Knight First Amendment Institute.

Privacy & Surveillance · Unauthorized Surveillance
Oct 1, 2024

Meta smart glasses subcontractors view users' intimate AI visual queries

In late 2024, a joint investigation by Swedish newspapers Svenska Dagbladet and Göteborgs-Posten revealed that subcontractors reviewing Meta AI visual queries from Ray-Ban Meta smart glasses were sometimes exposed to intimate or private content from users. A 2024 update made the glasses activate more naturally from conversational context, inadvertently sending private visual captures to human reviewers overseas. The investigation raised serious privacy concerns about both the glasses' owners and bystanders.

Privacy & Surveillance · Unauthorized Surveillance
Sep 3, 2024·United States

Chinese "Spamouflage" Influence Operation Uses Fake U.S. Voter Personas

Researchers at Graphika identified a Chinese state‑linked influence campaign, dubbed “Spamouflage,” that created a network of fake social‑media accounts impersonating U.S. voters, soldiers and a news outlet. The operation posted divisive content on X, TikTok, YouTube, Instagram and Facebook ahead of the 2024 presidential election, targeting topics such as reproductive rights, homelessness, Ukraine and Israel. Meta linked the network to Chinese law‑enforcement, while TikTok removed one of the accounts for policy violations after a video mocking President Biden amassed 1.5 million views. The campaign illustrates China’s use of deceptive online behavior to portray the United States as politically unstable.

Misinfo & Disinfo
Aug 31, 2024·Canada

Chinese Spamouflage campaign targets Canadian officials and Chinese‑Canadian community

Rapid Response Mechanism Canada identified a new transnational repression operation, dubbed "Spamouflage," that began on August 31, 2024. The campaign uses hundreds of bot‑like accounts on X, Facebook, TikTok and YouTube to post deep‑fake videos, sexually explicit AI‑generated images, and doxxing material aimed at ten Mandarin‑speaking Chinese‑Canadian individuals as well as Canadian government officials, media outlets and the Canadian Armed Forces. The deepfakes falsely accuse Prime Minister Justin Trudeau, Minister Mélanie Joly and other officials of corruption and sexual scandals. Researchers attribute the coordinated inauthentic activity with high confidence to actors linked to the People's Republic of China.

Misinfo & Disinfo
Jul 30, 2024·Texas

Meta Settles Texas Biometric Privacy Lawsuit for $1.4 Billion

Meta has reached a $1.4 billion settlement with the Texas Attorney General over alleged violations of the Texas Biometric Privacy Law. The case involves unauthorized collection and use of biometric data from users of Meta's platforms, including Facebook and Instagram. This is reported to be the largest settlement of its kind in history.

Privacy & Surveillance · Unauthorized Surveillance
May 1, 2024·Wisconsin, United States

Man generates and distributes AI-generated child sexual abuse imagery using open-source model

U.S. federal prosecutors are increasingly targeting individuals who use artificial intelligence (AI) to generate child sex abuse imagery, citing concerns that the technology could lead to a surge in illicit material. In 2024, the U.S. Justice Department filed two criminal cases against defendants accused of using generative AI systems to produce explicit images of children. One defendant, Steven Anderegg, was indicted in May for allegedly using the Stable Diffusion AI model to generate and share explicit images of children, while another, Seth Herrera, a U.S. Army soldier, was charged with using AI chatbots to create violent sexual abuse imagery. Both have pleaded not guilty, with Anderegg seeking to dismiss the charges on constitutional grounds. The National Center for Missing and Exploited Children reported receiving about 450 monthly reports related to AI-generated child exploitation material, though this is a small fraction of overall reports. Legal experts note that while existing laws cover explicit depictions of real children, the legal status of AI-generated imagery remains unclear, with past rulings limiting the criminalization of computer-generated child abuse images. Advocacy groups have secured commitments from major AI companies to avoid training models on child sex abuse imagery and to monitor platforms to prevent its spread.

Child Safety · CSAM · Minor
May 1, 2024·San Jose, United States

Pig butchering victim loses nearly $1 million before ChatGPT helps identify scam operation

A San Jose widow, Margaret Loke, lost nearly $1 million in a crypto "pig-butchering" scam after a scammer posing as a romantic partner, "Ed," convinced her to invest in fake cryptocurrency platforms. The scam, which began in May 2024 via Facebook and WhatsApp, involved fabricated investment returns and emotional manipulation. Loke sent escalating amounts, including $490,000 from her IRA and $300,000 from a second mortgage, before realizing the scam when her account "froze." After consulting ChatGPT, she was alerted to the scam and reported it to the police. The funds were traced to a bank in Malaysia, where scammers withdrew them. Federal regulators warn that such relationship-based crypto scams are a growing threat, with limited chances of recovering funds once they leave U.S. banking systems.

Fraud & Financial · AI-Powered Financial Fraud
May 1, 2024·India

Pro-Modi social media network spreads AI-generated disinformation during 2024 Indian election campaign

In early May 2024, Indian Prime Minister Narendra Modi and his ruling Bharatiya Janata Party (BJP) used the term "Vote Jihad" during election campaigning, which was later adopted by affiliated groups like the Vishwa Hindu Parishad (VHP) on social media platforms such as Facebook. A report by The London Story (TLS) found at least 21 instances in March and 33 in April where the BJP’s Facebook page and affiliated accounts spread Islamophobic narratives. The disinformation campaign targeted India’s 200 million Muslim voters and was part of a broader effort to amplify divisive rhetoric between Hindus and Muslims. A study by Oxford University noted that the BJP dominated digital campaigning on platforms like YouTube and WhatsApp, while other parties struggled to respond effectively. Meta, which owns Facebook and Instagram, approved ads containing hate speech and AI-manipulated content, despite pledging to prevent such material during the election. India’s press freedom has declined significantly, ranking 161 out of 180 countries in the 2023 World Press Freedom Index.

Misinfo & Disinfo · Disinformation
Jan 1, 2024·Bangladesh

AI-generated disinformation disrupts Bangladesh's 2024 general election campaign

A report by *The Daily Star* and cited in the *Financial Times* highlights the use of AI-generated disinformation in Bangladesh ahead of its January 2024 elections. Pro-government outlets and influencers have used AI tools like HeyGen to create fake news clips and deepfake videos targeting both the ruling party and opposition Bangladesh Nationalist Party (BNP). Examples include an AI-generated news anchor criticizing the U.S. and a deepfake video falsely showing an opposition leader downplaying support for Gazans. The disinformation is spreading on platforms like X and Facebook, with Meta removing some content after being contacted by the *Financial Times*. Experts warn that the lack of regulation and the potential for bad actors to falsely claim content is AI-generated could further erode public trust in information. The issue is part of a growing global concern about AI's role in elections, particularly in smaller markets that may be overlooked by major tech companies.

Misinfo & Disinfo · Disinformation
Nov 1, 2023·United Kingdom

George Freeman MP targeted by AI deepfake video falsely claiming he defected to rival party

A British member of Parliament, George Freeman, was targeted by an AI-generated deepfake video falsely claiming he had defected to a rival political party. The incident occurred in late 2023 and was discussed in a parliamentary hearing in early 2024. During a hearing before the House of Commons Science, Innovation and Technology Committee, representatives from Meta, Google, and X (formerly Twitter) were questioned about how the deepfake spread on their platforms. The companies provided explanations of their policies but did not commit to specific actions to prevent similar incidents or address the spread of the fake video. Freeman criticized the platforms for failing to act decisively and called for legislation to protect individuals from identity theft and misuse through AI. The hearing highlighted concerns about the spread of political misinformation and its threat to democratic processes in the UK.

Misinfo & Disinfo · Synthetic Media
Sep 30, 2023·Bratislava, Slovakia

Slovak election campaign targeted by AI deepfake disinformation spread by trolls

Trolls in Slovakia used AI-generated deepfake voices of politicians to spread disinformation ahead of the parliamentary elections, which took place in early October 2023. The deepfake videos, featuring audio impersonating political figures like Michal Šimečka and Zuzana Čaputová, were shared on platforms such as Facebook, Instagram, and Telegram. The content was found to be synthesized using AI tools trained on real voice samples, with some clips remaining online without disclaimers. Meta stated that political posts are not subject to fact-checking to preserve free speech, but fact-checkers continue to debunk false claims. The use of AI deepfakes in this election highlighted growing concerns about disinformation and its potential to influence voter behavior in closely contested races. Researchers noted that deepfake technology has become more accessible, enabling coordinated manipulation efforts.

Misinfo & Disinfo · Disinformation
Jan 1, 2023·Westchester County, New York

Teen Mental Health Crisis Linked to Social Media Platforms

A national CDC survey found that nearly 30% of teenage girls considered suicide, with many reporting persistent sadness or hopelessness. Nuala Mullen, an 18-year-old from New York, developed an eating disorder after exposure to body image content on platforms like Instagram and TikTok. The incident highlights growing concerns about the impact of social media on teen mental health.

Self-Harm & Suicide · Suicide · Minor
Sep 29, 2022·Rakhine State, Myanmar

Amnesty International Reports Facebook Algorithms Promoted Violence Against Rohingya in Myanmar Genocide

Amnesty International found that Facebook's algorithms proactively promoted anti-Rohingya hate content in Myanmar, contributing to violence during the 2017 genocide. Meta failed to act despite awareness of the risks and profited from engagement generated by such content. The incident highlights the role of algorithmic amplification in real-world harm.

Algorithmic Discrimination · Discrimination · Fatality
Jun 9, 2022·San Francisco, CA, USA

Facebook's System Approved Dehumanizing Hate Speech Inciting Genocide During Ethiopia Civil War

In June 2022, Global Witness and Foxglove tested Facebook's content moderation system by submitting ads containing dehumanizing hate speech inciting genocide in Ethiopia. Despite the explicit nature of the content, Facebook's system approved the ads. After being informed of the issue, Meta acknowledged the problem.

Misinfo & Disinfo · Misinformation
Jun 9, 2022·Myanmar (Burma)

Global Witness Report: Facebook Approves Hate Speech Ads Targeting Rohingya in Myanmar

Global Witness found that Facebook approved advertisements containing hate speech targeting the Rohingya Muslim minority in Myanmar. Despite Facebook's claims of improved hate speech detection in Burmese, eight test ads with hate speech were submitted and all were approved for public display. The incident highlights concerns about Facebook's moderation practices and algorithmic amplification of harmful content.

Misinfo & Disinfo · Disinformation · Fatality · Minor
Apr 15, 2022

Northern Ireland MLA Cara Hunter targeted by deepfake pornography weeks before 2022 Assembly election

In April 2022, three weeks before the Northern Ireland Assembly election, SDLP MLA Cara Hunter discovered that a fabricated pornographic video depicting her likeness had been shared tens of thousands of times via WhatsApp. Hunter was defending her East Londonderry seat when the video circulated. She received vulgar messages and was harassed on the street by a man who referenced the video. Police were unable to trace the origins due to WhatsApp's encryption. Approximately six months later she was targeted by at least 15 additional AI-generated deepfake images. Hunter has since campaigned internationally for legislation criminalising deepfake sexual abuse material, given the incident's likely effect on her election results and the profound personal harm caused.

Privacy & Surveillance · Deepfake NCII
Oct 5, 2021·United States

Facebook whistleblower Frances Haugen testifies on Instagram's harmful effects on children and societal division

Frances Haugen, a former Facebook employee, testified before the Senate Commerce Subcommittee, revealing internal research that showed Facebook was aware of Instagram's harmful effects on teenage girls' mental health. She accused the company of prioritizing profit over user safety and called for government intervention.

Child Safety · Minor
Sep 14, 2021·United States

Facebook Documents Reveal Instagram's Harmful Impact on Teen Girls

Internal Facebook documents reveal that Instagram has a harmful impact on teenagers, particularly teen girls, with studies linking the platform to increased suicidal thoughts and body image issues. The company has acknowledged these findings but has struggled to address them while maintaining user engagement. The incident highlights concerns about the platform's effects on mental health and eating disorders.

Child Safety · Suicide · Minor
Sep 1, 2021·Long Island, United States

Over 2,000 families sue Meta, TikTok, Snapchat, and YouTube over children's mental health harms

More than 2,000 families are suing social media companies including TikTok, Snapchat, YouTube, Roblox, and Meta (parent company of Instagram and Facebook) over the impact of social media on children's mental health. The lawsuits allege that platforms like Instagram contributed to the development of depression and eating disorders in minors. One case involves the Spence family from Long Island, New York, whose daughter Alexis developed an eating disorder at age 12 after using Instagram, which she accessed by falsely checking a 13+ age box. Alexis reported that Instagram's algorithm led her to pro-anorexia content, which normalized disordered eating behaviors and worsened her mental health. The lawsuits are expected to move forward in 2024, with over 350 cases anticipated to proceed.

Addiction & Mental HealthEating DisorderMinor
Jan 1, 2021·Thornton, Colorado

Woman whose son died from drugs bought on social media celebrates verdicts against Meta ...

A Colorado woman, Kimberly Osterman, celebrated recent verdicts against Meta and YouTube, which were found liable for harms to children due to platform design. Her son, Max Osterman, died in 2021 at age 18 after purchasing a fentanyl-laced pill through Snapchat. In Los Angeles, a jury ruled that Meta and YouTube designed their platforms to hook young users, and in New Mexico, Meta was found to have knowingly harmed children’s mental health and concealed information about child sexual exploitation. Snap Inc., the parent company of Snapchat, and TikTok settled before the Los Angeles trial began. Osterman is part of Parents for Safe Online Spaces, advocating for the Kids Online Safety Act, which would require social media platforms to take steps to prevent harm to minors. The drug dealer who sold Max the pill was sentenced to six years in prison in 2023.

Child SafetyDrug Facilitated HarmFatalityMinor
Jun 1, 2020·North Belfast, United Kingdom

Noah's family denied access to his Instagram account after his death during inquest into his suicide

An inquest heard that Fiona Donohoe was prevented from accessing her son Noah's Instagram account after his death in June 2020 due to a memorialisation request sent to Meta. The memorialisation request was made using an email address linked to a family that appeared at the inquest in February 2026. The family, including a teenage boy and his sister, denied involvement in the request and stated they did not know Noah before his disappearance. The mother of the teenagers confirmed she had no dealings with Meta or prior knowledge of Noah. Fiona Donohoe expressed distress over being locked out of her son's account and denied any involvement in the memorialisation process. The coroner granted anonymity to the family members who gave evidence behind a curtain.

Privacy & SurveillanceFatalityMinor
Jan 18, 2020·New York, United States

Clearview AI's Facial Recognition App and Privacy Concerns Exposed by New York Times

Clearview AI, a secretive company founded by Hoan Ton-That and Richard Schwartz, developed a facial recognition app built on more than 3 billion images scraped from social media and other websites. The app is used by over 600 law enforcement agencies to solve crimes but raises serious privacy concerns. The New York Times exposed the company's operations, describing the technology as a potential end to privacy as we know it.

Privacy & Surveillance
Jan 1, 2020

18-year-old girl dies by suicide after using Meta and YouTube platforms

In 2020, an 18-year-old named Annalee Schott took her own life, which her family attributed in part to the negative effects of social media. The Schott family has since blamed platforms like Meta and YouTube for harming children's mental health through addictive design. The article raises the question of whether legal or regulatory actions against these companies could mark a turning point for Big Tech, similar to the tobacco industry's past reckoning. The focus is on potential consequences for tech companies if they are held accountable for youth harm.

Self-Harm & SuicideFatality
Jan 1, 2020·Hartford, Connecticut

Caroline Koziol develops anorexia after TikTok and Instagram algorithm floods feed with extreme dieting content, joins landmark MDL

Caroline Koziol of Hartford, Connecticut began using Instagram and TikTok during the COVID-19 pandemic to search for at-home workouts and healthy recipes to support her swimming training. Within weeks, both platforms' recommendation algorithms had flooded her feeds with content promoting extreme workouts and disordered eating. 'One innocent search turned into this avalanche,' she said. Koziol, now 21, developed anorexia and is among more than 1,800 plaintiffs in the Social Media Adolescent Addiction/Personal Injury Products Liability MDL suing Meta and TikTok. She is not suing over specific content but over the platforms' defective recommendation design that maximized her engagement and drove her deeper into eating disorder content.

Addiction & Mental HealthEating DisorderMinor
Sep 29, 2019

NYT Investigation on Surge in Online Child Sexual Abuse Material

The New York Times reports that the number of online images and videos depicting child sexual abuse has reached a record high, with over 45 million reported in the past year. Despite efforts by tech companies, law enforcement, and legislation, the problem has continued to grow due to inadequate policies and enforcement. The article highlights the involvement of platforms such as Facebook Messenger, Microsoft's Bing, and Dropbox.

Child SafetyCSAMMinor
Mar 17, 2018·London, England

Cambridge Analytica harvests Facebook data of 87 million users without consent for political targeting

In March 2018, The Guardian and The New York Times revealed that Cambridge Analytica had harvested the personal data of up to 87 million Facebook users without their consent. The data was used for political purposes, including influencing the 2016 U.S. presidential election and the Brexit vote. The data was collected through an app called 'thisisyourdigitallife', raising significant concerns about privacy and surveillance.

Privacy & SurveillanceUnauthorized Surveillance
Jan 1, 2017

Alexis Spence develops eating disorder at 12 after Instagram algorithm exposure, testifies before Senate as Meta addiction trial plaintiff

Alexis Spence of Long Island, New York began using Instagram at age 11 by falsely affirming she was at least 13, and the platform's algorithm exposed her to pro-anorexia content that contributed to an eating disorder developing by age 12. Alexis was one of several victims featured in coverage of the Social Media Adolescent Addiction MDL and has become a prominent plaintiff voice. Her case was cited in congressional testimony about the harms of social media design features to minors. The Spence family's lawsuit alleges that Instagram's algorithmic design was the proximate cause of Alexis's eating disorder, which required ongoing treatment.

Addiction & Mental HealthEating DisorderMinor
Nov 8, 2016·United States

Russia's Internet Research Agency targets U.S. with social media disinformation during 2016 election

The Senate Intelligence Committee revealed that Russia's Internet Research Agency used social media platforms including Facebook, Instagram, and Twitter to target African Americans and spread disinformation aimed at sowing racial discord during the 2016 U.S. election. The agency's content was heavily focused on race-related themes. This incident highlights foreign interference through digital platforms during a critical U.S. political event.

Misinfo & DisinfoDisinformation
Jan 1, 2016

Alex Martin develops life-threatening anorexia and attempts suicide after Instagram algorithm drives her to pro-eating-disorder content from age 14

In 2016, when she was approximately 14 years old, Alexandra 'Alex' Martin of Georgetown, Kentucky began using Instagram, which algorithmically directed her to pro-anorexia groups and social comparison content she had not sought out. Her Instagram usage increased as the algorithm fed her more disordered eating content, and her mental and physical health declined progressively. Her eating disorder became life-threatening, requiring multiple hospital stays and treatment facility admissions. She also made two suicide attempts. She eventually deleted her Instagram account entirely. Martin, then 19, was named as a plaintiff in a lawsuit filed by the Social Media Victims Law Center in 2022 against Meta, alleging Instagram's dangerous and defective product design caused her injuries.

Addiction & Mental HealthEating DisorderMinor
Jan 1, 2015

KGM sues Meta and Google over Instagram and YouTube addiction beginning at age 6, leading to depression and suicidal thoughts — first bellwether trial

A woman identified as KGM (Kaley G.M.) filed one of the first bellwether cases in the Social Media Adolescent Addiction MDL, alleging that Instagram and YouTube addiction beginning when she was approximately 6 years old led to clinical depression and suicidal thoughts. The lawsuit names Meta, Google, TikTok, and Snapchat, with Snap settling before trial. In January and February 2026, KGM's case became the first social media addiction case to proceed to jury trial in Los Angeles, with her mother Karen Glenn also testifying. Expert witnesses including Stanford psychiatry professor Anna Lembke testified that social media addiction is real and can cause or worsen anxiety, depression, and suicidal thoughts. The trial's outcome is expected to influence over 1,000 similar lawsuits.

Addiction & Mental HealthAddictionMinor
Jan 1, 2012

Facebook Emotional Contagion Experiment Without User Consent

In 2012, Facebook conducted a study where it manipulated the news feeds of nearly 700,000 users to observe emotional responses, altering content to be more positive or negative. The experiment was carried out without explicit user consent beyond the general terms of data use. The incident sparked significant controversy over user privacy and ethical research practices.

Privacy & SurveillanceUnauthorized Surveillance

Linked Legislation

105
AB 2246 — Youth Social Media Protection Act: Report
California
H 783 — An Act Relating To Chatbot Disclosure Requirements
Vermont
SB 5870 — Establishing Civil Liability For Suicide Linked To The Use Of Artificial Intelligence Systems
Washington
H 816 — An Act Relating To Regulating The Use Of Artificial Intelligence In The Provision Of Mental Health Services
Vermont
HB 635 — Artificial Intelligence Chatbots Act
Virginia
S 896 — Chatbot Regulation
South Carolina
H 5138 — Chatbot Regulation
South Carolina
HB 7953 — An Act Relating To Commercial Law -- General Regulatory Provisions -- Rhode Island Social Media Regulation Act
Rhode Island
AB 1856 — Age Verification Signals: Software Applications And Online Services
California
H 3038 — Intentionally Impersonating Another Person Through Email, Social Media Or Other Internet Websites
South Carolina
SB 6120 — Regulating High-Risk Artificial Intelligence System Development, Deployment, And Use
Washington
SB 6184 — Concerning Deepfake Artificial Intelligence-Generated Pornographic Material Involving Minors
Washington
AI Fraud Deterrence Act (HR 6306)
United States
H 4660 — Deceptive And Fraudulent Deepfake Media In Elections
South Carolina
H 3517 — Deceptive And Fraudulent Deepfake Media In Elections
South Carolina
HB 4770 — Establishing Limitations On The Use Of Artificial Intelligence And Artificial Intelligence Technology To Deliver Mental Health Care, With Exceptions For Administrative Support Functions
West Virginia
H 644 — An Act Relating To Regulating The Use Of Artificial Intelligence In The Provision Of Mental Health Services
Vermont
SB 1546 — Relating to Artificial Intelligence Companions
Oregon
HB 2006 — An Act Providing For Safety Regarding Artificial Intelligence In Companionship Applications; And Imposing A Penalty
Pennsylvania
HB 7349 — An Act Relating To Behavioral Healthcare, Developmental Disabilities And Hospitals -- Oversight Of Artificial Intelligence Technology In Mental Health Care Act
Rhode Island
HB 1993 — An Act Providing For The Use Of Artificial Intelligence In Mental Health Therapy And For Enforcement
Pennsylvania
S 9408 — Relates To A Prohibition On Chatbot Toys
New York
HB 4412 — Require Certain Websites To Utilize Age Verification Methods To Prevent Minors From Accessing Content
West Virginia
H 210 — An Act Relating To An Age-Appropriate Design Code
Vermont
HB 1834 — Protecting Washington Children Online
Washington
H 712 — An Act Relating To Age-Appropriate Design Code
Vermont
SB 5708 — Protecting Washington Children Online
Washington
S 289 — An Act Relating To Age-Appropriate Design Code
Vermont
HB 758 — Artificial Intelligence Chatbots and Minors Act
Virginia
SB 796 — Artificial Intelligence Companion Chatbots and Minors Act
Virginia
SB 287 — Online Pornography Viewing Age Requirements
Utah
HB 1053 — Require Age Verification By Websites Containing Material That Is Harmful To Minors, And To Provide A Penalty Therefor
South Dakota
HB 1237 — Require Age Verification Before An Individual May Access An Application From An Online Application Store, Publicly Available Website, Electronic Service, Or Other Online Platform
South Dakota
H 4842 — Age-Appropriate Design
South Carolina
H 3426 — Child Online Safety Act
South Carolina
SB 2406 — An Act Relating To Commercial Law -- General Regulatory Provisions -- Age-Appropriate Design Code
Rhode Island
HB 7632 — An Act Relating To Commercial Law -- General Regulatory Provisions -- Age-Appropriate Design Code
Rhode Island
HB 7746 — An Act Relating To Commercial Law -- General Regulatory Provisions -- Rhode Island Children's Online Safety Act
Rhode Island
HB 3544 — Technology; Artificial Intelligence; Companions; Minors; Safety; Civil Penalties; Effective Date
Oklahoma
SB 1521 — Artificial Intelligence; Prohibiting The Creation Of Certain Artificial Intelligence Chatbots; Requiring Certain Age Verification Measures And Protections For User Data. Effective Date.
Oklahoma
SB 931 — Social Media; Requiring Certain Age Verification; Requiring Social Media Platforms To Provide Certain Supervisory Tools. Effective Date.
Oklahoma
SB 1959 — Consumer Protection; Prohibiting Commercial Entities From Distributing Adult Material Without Age Verification. Effective Date.
Oklahoma
HB 3914 — Social Media; Age Verification; Parental Consent; Third-Party Vendors; Methods; Practices By Social Media Company; Violations; Liability; Effective Date; Emergency
Oklahoma
SB 1960 — Crimes And Punishments; Material Harmful To Minors; Requiring Certain Age Verification. Effective Date.
Oklahoma
S 268 — Children and Social Media
South Carolina
H 3424 — Child Online Safety Act
South Carolina
SB 885 — Social Media; Creating The Safe Screens For Kids Act. Effective Date.
Oklahoma
SB 1871 — Social Media; Requiring Certain Age Verification; Requiring Certain Parental Consent. Emergency.
Oklahoma
SB 593 — Obscenity and Child Sexual Abuse Material; Creating Felony Offenses and Providing Penalties. Effective Date.
Oklahoma
A 9415 — Protects Minors Online From Social Media And Harmful Content
New York
HB 1951 — Promoting Ethical Artificial Intelligence By Protecting Against Algorithmic Discrimination
Washington
S 8928 — Enacts The Artificial Intelligence Workforce Impact Transparency Act
New York
S 1854 — Establishes The New York Workforce Stabilization Act Requiring Certain Businesses To Conduct Artificial Intelligence Impact Assessments On The Application And Use Of Such Artificial Intelligence
New York
HB 289 — Child Sexual Abuse Material Amendments
Utah
HB 2411 — Consumer Counsel, Division Of; Expands Duties, Artificial Intelligence Fraud And Abuse
Virginia
SB 468 — High-risk artificial intelligence systems: duty to protect personal information.
California
Protect Elections from Deceptive AI Act — 119th Congress (S.1213 / HR 5272)
United States
HB 4191 — Relating To Requirements Imposed On Social Media Companies To Prevent Corruption And Provide Transparency Of Election-Related Content Made Available On Social Media Websites
West Virginia
DEFIANCE Act of 2025 (HR 3562 / S.1837) — 119th Congress
United States
H 846 — An Act Relating To Artificial Intelligence And Elections
Vermont
H 822 — An Act Relating To The Regulation Of Generative Artificial Intelligence Systems
Vermont
S 2414 — Enacts The 'Political Artificial Intelligence Disclaimer (Paid) Act'
New York
SB 816 — An Act Relating To Elections -- Deceptive And Fraudulent Synthetic Media In Election Communications
Rhode Island
HB 5872 — An Act Relating To Elections -- Deceptive And Fraudulent Synthetic Media In Election Communications
Rhode Island
HB 982 — Political campaign advertisements; synthetic media, penalty
Virginia
S 7037 — Relates To Enacting The 'Social Media Monitoring Safety Act'; Appropriation
New York
SB 720 — Stop Non-Consensual Distribution Of Intimate Deep Fake Media Act
West Virginia
HB 3865 — Crimes And Punishments; Expanding Scope Of Crime To Include Materials And Pornography Generated Via Artificial Intelligence; Effective Date.
Oklahoma
A 10652 — Relates to unauthorized depictions of public officials generated by artificial intelligence
New York
AB 373 — Relating To: Use Of Social Media Platforms By Minors, Granting Rule-Making Authority, And Providing A Penalty. (FE)
Wisconsin
SB 385 — Relating To: Use Of Social Media Platforms By Minors, Granting Rule-Making Authority, And Providing A Penalty
Wisconsin
HB 524 — Social Media Usage Modifications
Utah
S 404 — Social Media Regulation
South Carolina
A 8947 — Enacts The Youth & Teen Internet Safety And Social Media Literacy Act; Repealer
New York
S 6418 — Relates to the regulation of social media companies and social media platforms
New York
Arkansas SB 396/Act 901 — Revised Social Media Safety Act (2025 Replacement)
Arkansas
HB 2529 — An Act Amending Title 18 (Crimes And Offenses) Of The Pennsylvania Consolidated Statutes, In Computer Offenses, Providing For Social Media Platforms; And Imposing A Penalty
Pennsylvania
SB 933 — Relating To: Requiring Social Media Platforms To Provide Mental Health Warnings And Providing A Penalty
Wisconsin
H 823 — An Act Relating To Social Media Warning Labels
Vermont
HB 1624 — Consumer Data Protection Act; Social Media Platforms; Addictive Feed Prohibited For Minors
Virginia
S 7662 — Establishes A Statewide Youth Mental Health And Social Media Campaign To Promote Public Awareness Of The Impacts Of Social Media Usage On Mental Health
New York
S 5476 — Establishes A Statewide Youth Mental Health And Social Media Campaign To Promote Public Awareness Of The Impacts Of Social Media Usage On Mental Health
New York
S 3699 — Enacts The 'Facial Recognition Technology Study Act'
New York
A 8788 — Enacts The "Facial Recognition Technology Study Act"
New York
A 6031 — Establishes The Biometric Privacy Act
New York
S 1422 — Establishes The Biometric Privacy Act
New York
A 1447 — Relates to the use of facial recognition and biometric information for determining probable cause
New York
S 4457 — Establishes The Biometric Privacy Act
New York
A 2642 — Enacts The 'Facial Recognition Technology Study Act'
New York
A 1362 — Establishes The Biometric Privacy Act
New York
S 4824 — Enacts The 'Facial Recognition Technology Study Act'
New York
SB 730 — An Act Requiring Disclosure Of The Use Of Facial Recognition Technology In Public Spaces
Connecticut
HB 5532 — Establishes The Stop Addictive Feeds Exploitation (Safe) For Kids Act Prohibiting The Provision Of Addictive Feeds To Minors By Addictive Social Media Platforms
West Virginia
H 4591 — Stop Harm from Addictive Social Media
South Carolina
HB 1143 — Child Pornography; Renaming As Child Sexual Abuse Material In The Code
Virginia
SB 1446 — Oklahoma Law On Obscenity And Child Sexual Abuse Material; Modifying Certain Penalty Related To Child Sex Trafficking. Effective Date.
Oklahoma
HB 2294 — Virginia Social Media Regulation Act
Virginia
H 5209 — South Carolina Social Media Regulation Act
South Carolina
H 3431 — South Carolina Social Media Regulation Act
South Carolina
H 4700 — South Carolina Social Media Regulation Act
South Carolina
SB 1727 — Social Media; Authorizing Certain Cause Of Action Against Social Media Companies; Establishing Criteria To Recover Certain Damages; Authorizing Certain Rebuttable Presumption. Effective Date.
Oklahoma
AB 960 — Relating To: Requiring Social Media Platforms To Provide Mental Health Warnings And Providing A Penalty
Wisconsin
SB 1345 — Commercial Entity Offering Social Media Accounts; Restricted Hours For Minors, Civil Liability
Virginia
SB 532 — Commercial Entity Offering Social Media Accounts; Restricted Hours For Minors, Civil Liability
Virginia
SB 693 — Social Media; Requiring Certain Warning On Social Media Platforms. Effective Date.
Oklahoma

By Harm Domain

Misinfo & Disinfo: 11
Child Safety: 10
Addiction & Mental Health: 8
Privacy & Surveillance: 8
Self-Harm & Suicide: 5
Fraud & Financial: 5
Algorithmic Discrimination: 2