Social Media · X · Launched 2006

Twitter

Twitter has been named in 10 documented digital harm incidents, including 2 involving minors. The most common harm domain is Privacy & Surveillance, followed by Misinfo & Disinfo.

Incidents: 10
Fatalities: 0
Minors involved: 2
Financial harm: —

Documented Incidents (10)
Mar 15, 2026 · Assam, India

Election Commission of India warns against AI-generated deepfake videos in Assam election

Ahead of the Assam legislative assembly election, the Election Commission of India (ECI) issued a warning about the misuse of artificial intelligence and deepfake technology in political campaigning. A controversial AI‑generated video that appeared to show Chief Minister Himanta Biswa Sarma shooting members of a minority community sparked outrage, leading to police complaints, first information reports (FIRs) and court petitions. Both the Assam BJP and the Assam Pradesh Congress Committee were reported to have shared AI‑assisted videos and graphics on social media, prompting calls for stricter regulation. Legal experts noted that, in the absence of AI-specific legislation, existing provisions of the Information Technology Act would have to be used.

Misinfo & Disinfo
Jan 23, 2026 · Northern District of California, USA

Multiple women file class action against xAI over non-consensual sexual deepfakes generated by Grok on X

On January 23, 2026, a class‑action complaint was filed in the U.S. District Court for the Northern District of California alleging that X.AI Corp.'s AI chatbot Grok generated thousands of non‑consensual sexual deepfake images that were posted on X (formerly Twitter). The lead plaintiff, identified as Jane Doe, says a fully clothed photograph of her was transformed into a revealing bikini image and shared publicly, causing severe emotional distress. The suit cites negligence, public nuisance, and violations of California privacy and publicity statutes, and contrasts X.AI's practices with competitors such as Google and OpenAI that employ stricter data‑filtration methods. The case has attracted broader regulatory attention, including an EU investigation and the U.S. Senate's DEFIANCE Act, aimed at giving victims a cause of action over AI‑generated sexual imagery.

Privacy & Surveillance · Minor
May 1, 2024 · India

Pro-Modi social media network spreads AI-generated disinformation during 2024 Indian election campaign

In early May 2024, Indian Prime Minister Narendra Modi and his ruling Bharatiya Janata Party (BJP) used the term "Vote Jihad" during election campaigning, which was later adopted by affiliated groups such as the Vishwa Hindu Parishad (VHP) on social media platforms including Facebook. A report by The London Story (TLS) found at least 21 instances in March and 33 in April where the BJP's Facebook page and affiliated accounts spread Islamophobic narratives. The disinformation campaign targeted India's 200 million Muslim voters and was part of a broader effort to amplify divisive rhetoric between Hindus and Muslims. A study by Oxford University noted that the BJP dominated digital campaigning on platforms like YouTube and WhatsApp, while other parties struggled to respond effectively. Meta, which owns Facebook and Instagram, approved ads containing hate speech and AI-manipulated content, despite pledging to prevent such material during the election. India's press freedom has declined significantly: the country ranked 161st of 180 in the 2023 World Press Freedom Index.

Misinfo & Disinfo · Disinformation
Jun 1, 2021 · California, USA

Twitter Data Leak: API ‘Defect’ Exposed Information of Over 200M Users

A proposed class action lawsuit alleges that a defect in Twitter's API allowed hackers to scrape personal data from over 200 million users between June 2021 and January 2022. The leaked information included usernames, email addresses, and phone numbers, which the lawsuit claims deanonymized users who had sought to remain anonymous. The complaint accuses Twitter of violating its terms of service and a 2011 FTC settlement on user data protection, and criticizes Twitter's response to the breach for downplaying its severity and scope. The data is now reportedly being sold on the dark web by cybercriminals.

Privacy & SurveillanceUnauthorized Surveillance
Apr 17, 2021 · Spring, Texas, USA

Two men killed in driverless Tesla crash in Spring, Texas after vehicle strikes tree and catches fire

Two men died in a Tesla crash in Spring, Texas, where no one was found behind the wheel, according to local police. The 2019 Tesla Model S crashed into a tree and caught fire, with one person in the front passenger seat and another in the rear. Preliminary investigations suggest no driver was present at the time of the crash. The incident has raised questions about Tesla's Autopilot and Full Self-Driving (FSD) systems, which are not fully autonomous. The National Highway Traffic Safety Administration (NHTSA) has launched a special investigation into the crash.

Autonomous SystemsAutonomous Vehicle
Jan 18, 2020 · New York, USA

Clearview AI's Facial Recognition App and Privacy Concerns Exposed by New York Times

Clearview AI, a secretive company founded by Hoan Ton-That and Richard Schwartz, developed a facial recognition app backed by a database of over 3 billion images scraped from social media and other websites. The app is used by over 600 law enforcement agencies to solve crimes but raises serious privacy concerns. The New York Times exposed the company's operations, warning that such tools could mean the end of privacy as we know it.

Privacy & Surveillance
Feb 7, 2018 · San Francisco, CA, USA

Reddit bans AI-generated celebrity deepfake porn communities

In February 2018, Reddit banned two communities, r/deepfakes and r/deepfakeNSFW, which hosted AI-generated pornographic content featuring celebrities without their consent. The move was part of a broader trend, with platforms like Pornhub, Discord, and Twitter also taking action against involuntary pornography. Reddit updated its policies to prohibit the creation and sharing of involuntary pornography and the sexualization of minors.

Privacy & Surveillance · Minor
Nov 8, 2016 · United States

Russia's Internet Research Agency targets U.S. with social media disinformation during 2016 election

The Senate Intelligence Committee revealed that Russia's Internet Research Agency used social media platforms including Facebook, Instagram, and Twitter to target African Americans and spread disinformation aimed at sowing racial discord during the 2016 U.S. election. The agency's content was heavily focused on race-related themes. This incident highlights foreign interference through digital platforms during a critical U.S. political event.

Misinfo & Disinfo · Disinformation
Mar 24, 2016 · Global (Twitter platform)

Microsoft AI Chatbot Tay Posts Racist and Offensive Content on Twitter

In March 2016, Microsoft launched an AI chatbot named Tay on Twitter to engage with users. Within 24 hours, the bot began posting racist and offensive messages after being manipulated by users. Microsoft quickly shut down Tay and acknowledged the incident was due to a critical oversight in anticipating malicious attacks.

Algorithmic Discrimination · Discrimination
Aug 1, 2014 · United States

GamerGate Movement and Online Harassment of Feminist Critics

In August 2014, the #GamerGate movement emerged, leading to widespread online harassment and death threats against feminist critics such as Anita Sarkeesian and indie game developer Zoe Quinn. The movement was sparked by a blog post from Eron Gjoni about his breakup with Quinn, which led to coordinated online attacks. The harassment occurred across multiple platforms including Twitter, 4chan, IRC, and others.

Child Safety · Harassment

Linked Legislation (23)
H 4660 — Deceptive And Fraudulent Deepfake Media In Elections (South Carolina)
S 9450 — Requires Warnings On Generative Artificial Intelligence Systems (New York)
S 2 — Deepfake Disclosure (Florida)
DEFIANCE Act of 2025 (HR 3562 / S.1837) — 119th Congress (United States)
S 8721 — Establishes Privacy And Publicity Rights For Likenesses Altered Using Artificial Intelligence (New York)
H 846 — An Act Relating To Artificial Intelligence And Elections (Vermont)
H 822 — An Act Relating To The Regulation Of Generative Artificial Intelligence Systems (Vermont)
S 2414 — Enacts The 'Political Artificial Intelligence Disclaimer (PAID) Act' (New York)
HB 1368 — Consumer Data Protection Act; Individual Action For Damages Or Penalty, Social Media Platforms (Virginia)
HB 1115 — Consumer Data Protection Act; Social Media Platforms (Virginia)
S 3699 — Enacts The 'Facial Recognition Technology Study Act' (New York)
A 8788 — Enacts The 'Facial Recognition Technology Study Act' (New York)
A 6031 — Establishes The Biometric Privacy Act (New York)
S 1422 — Establishes The Biometric Privacy Act (New York)
A 1447 — Relates To The Use Of Facial Recognition And Biometric Information For Determining Probable Cause (New York)
S 4457 — Establishes The Biometric Privacy Act (New York)
A 2642 — Enacts The 'Facial Recognition Technology Study Act' (New York)
A 1362 — Establishes The Biometric Privacy Act (New York)
S 4824 — Enacts The 'Facial Recognition Technology Study Act' (New York)
SB 730 — An Act Requiring Disclosure Of The Use Of Facial Recognition Technology In Public Spaces (Connecticut)
HB 4191 — Relating To Requirements Imposed On Social Media Companies To Prevent Corruption And Provide Transparency Of Election-Related Content Made Available On Social Media Websites (West Virginia)
H 711 — An Act Relating To Creating Oversight And Liability Standards For Developers And Deployers Of Inherently Dangerous Artificial Intelligence Systems (Vermont)
H 341 — An Act Relating To Creating Oversight And Safety Standards For Developers And Deployers Of Inherently Dangerous Artificial Intelligence Systems (Vermont)

By Harm Domain

Privacy & Surveillance: 4
Misinfo & Disinfo: 3
Autonomous Systems: 1
Algorithmic Discrimination: 1
Child Safety: 1