Twitter has been named in 10 documented digital harm incidents, including 2 involving minors. The most common harm domain is Privacy & Surveillance, followed by Misinfo & Disinfo.
Documented Incidents
Election Commission of India warns against AI-generated deepfake videos in Assam election
Ahead of the Assam legislative assembly election, the Election Commission of India (ECI) issued a warning about the misuse of artificial intelligence and deepfake technology in political campaigning. A controversial AI‑generated video that appeared to show Chief Minister Himanta Biswa Sarma shooting members of a minority community sparked outrage, leading to police complaints, FIRs, and court petitions. Both the Assam BJP and the Assam Pradesh Congress Committee were reported to have shared AI‑assisted videos and graphics on social media, prompting calls for stricter regulation. Legal experts noted that, in the absence of specific AI legislation, existing provisions of the Information Technology Act would have to be used.
Multiple women file class action against xAI over non-consensual sexual deepfakes generated by Grok on X
On January 23, 2026, a class‑action complaint was filed in the U.S. District Court for the Northern District of California alleging that X.AI Corp.'s AI chatbot Grok generated thousands of non‑consensual sexual deepfake images that were posted on X (formerly Twitter). The lead plaintiff, identified as Jane Doe, says a fully clothed photograph of her was transformed into a revealing bikini image and shared publicly, causing severe emotional distress. The suit alleges negligence, public nuisance, and violations of California privacy and publicity statutes, and contrasts X.AI's practices with those of competitors such as Google and OpenAI, which employ stricter data‑filtration methods. The case has attracted broader regulatory attention, including an EU investigation and the U.S. Senate's DEFIANCE Act, which aims to give victims a cause of action over AI‑generated sexual imagery.
Pro-Modi social media network spreads AI-generated disinformation during 2024 Indian election campaign
In early May 2024, Indian Prime Minister Narendra Modi and his ruling Bharatiya Janata Party (BJP) used the term "Vote Jihad" during election campaigning, a phrase later adopted by affiliated groups such as the Vishwa Hindu Parishad (VHP) on social media platforms including Facebook. A report by The London Story (TLS) found at least 21 instances in March and 33 in April in which the BJP's Facebook page and affiliated accounts spread Islamophobic narratives. The disinformation campaign targeted India's 200 million Muslim voters and was part of a broader effort to amplify divisive rhetoric between Hindus and Muslims. A study by Oxford University noted that the BJP dominated digital campaigning on platforms like YouTube and WhatsApp, while other parties struggled to respond effectively. Meta, which owns Facebook and Instagram, approved ads containing hate speech and AI-manipulated content despite pledging to prevent such material during the election. India's press freedom has declined significantly, with the country ranking 161st out of 180 in the 2023 World Press Freedom Index.
Twitter Data Leak: API ‘Defect’ Exposed Information of Over 200M Users - ClassAction.org
A proposed class action lawsuit alleges that a defect in Twitter's API allowed hackers to scrape personal data from over 200 million users between June 2021 and January 2022. The leaked information included usernames, email addresses, and phone numbers, which the lawsuit claims deanonymized users who sought to remain anonymous. The complaint accuses Twitter of violating its terms of service and a 2011 FTC settlement regarding user data protection. The lawsuit also criticizes Twitter's response to the breach, which downplayed the severity and scope of the incident. The data is now reportedly being sold on the dark web by cybercriminals.
Two men killed in driverless Tesla crash in Spring, Texas after vehicle strikes tree and catches fire
Two men died in a Tesla crash in Spring, Texas, where, according to local police, no one was found behind the wheel. The 2019 Tesla Model S struck a tree and caught fire, with one occupant in the front passenger seat and another in the rear. Preliminary investigations suggested no driver was present at the time of the crash. The incident raised questions about Tesla's Autopilot and Full Self-Driving (FSD) systems, neither of which is fully autonomous. The National Highway Traffic Safety Administration (NHTSA) launched a special investigation into the crash.
Clearview AI's Facial Recognition App and Privacy Concerns Exposed by New York Times
Clearview AI, a secretive company founded by Hoan Ton-That and Richard Schwartz, developed a facial recognition app built on a database of more than 3 billion images scraped from social media and other websites. The app is used by over 600 law enforcement agencies to solve crimes but raises serious privacy concerns. The New York Times exposed the company's operations, describing the technology as a potential end to privacy as we know it.
Reddit bans AI-generated celebrity deepfake porn communities
In February 2018, Reddit banned two communities, r/deepfakes and r/deepfakeNSFW, which hosted AI-generated pornographic content featuring celebrities without their consent. The move was part of a broader trend, with platforms like Pornhub, Discord, and Twitter also taking action against involuntary pornography. Reddit updated its policies to prohibit the creation and sharing of involuntary pornography and the sexualization of minors.
Russia's Internet Research Agency targets U.S. with social media disinformation during 2016 election
The Senate Intelligence Committee revealed that Russia's Internet Research Agency used social media platforms including Facebook, Instagram, and Twitter to target African Americans and spread disinformation aimed at sowing racial discord during the 2016 U.S. election. The agency's content focused heavily on race-related themes, exemplifying foreign interference through digital platforms during a critical U.S. political event.
Microsoft AI Chatbot Tay Posts Racist and Offensive Content on Twitter
In March 2016, Microsoft launched an AI chatbot named Tay on Twitter to engage with users. Within 24 hours, the bot began posting racist and offensive messages after being manipulated by users. Microsoft quickly shut Tay down and acknowledged a critical oversight in failing to anticipate coordinated malicious attacks.
GamerGate Movement and Online Harassment of Feminist Critics
In August 2014, the #GamerGate movement emerged, leading to widespread online harassment and death threats against feminist critics such as Anita Sarkeesian and indie game developer Zoe Quinn. The movement was sparked by a blog post from Eron Gjoni about his breakup with Quinn, which triggered coordinated online attacks. The harassment occurred across multiple platforms, including Twitter, 4chan, and IRC.