Social Media · Launched 2023

X

X has been named in 18 documented digital harm incidents, including 3 involving minors. The most common harm domain is Misinfo & Disinfo, followed by Privacy & Surveillance.

18 Incidents · 0 Fatalities · 3 Minors involved · $10.0M Financial harm

Documented Incidents (18)
Mar 26, 2026 · Thiruvananthapuram, India

Kerala Police file FIR against X Corp over AI-generated deepfake video of PM Modi during election period

Kerala Police filed an FIR against social media platform X Corp and an unidentified user for circulating an AI-generated video featuring Prime Minister Narendra Modi. The Election Commission of India (ECI) flagged the video as a potential misinformation risk during the election period. The 77-second video is alleged to have been designed to mislead viewers and undermine democratic institutions. The case was registered in Thiruvananthapuram under the Bharatiya Nyaya Sanhita and IT Act provisions related to forgery, public mischief, and identity theft. Authorities warned of strict action against those attempting to disrupt the electoral process and urged the public not to share unverified content.

Misinfo & Disinfo · Misinformation
Mar 15, 2026 · Israel

False death rumors about Israeli PM Benjamin Netanyahu spread on X in March 2026

In March 2026, online posts claimed that Israeli Prime Minister Benjamin Netanyahu had died, alleging that a tweet confirming the death had been deleted. Israeli officials and the Turkish news agency Anadolu Ajansı refuted the claim, and Netanyahu posted a video on X showing he was alive. X's AI chatbot Grok and fact-checkers verified that no tweet deletion had occurred and that the video was unaltered. The misinformation was amplified amid heightened Israel-Iran tensions but caused no physical harm.

Misinfo & Disinfo · Misinformation
Mar 15, 2026

AI‑generated Iran‑US war deepfakes spread on X despite new policy

AI‑generated videos showing false scenes of an Iran‑US conflict have been circulating on X, the platform owned by Elon Musk. X announced a policy that suspends revenue‑sharing for creators who post undisclosed AI‑generated war content, imposing a 90‑day suspension for first‑time violators and permanent bans for repeat offenders. Researchers say the flood of deepfake videos continues, with premium verified accounts posting clips that garner millions of views, and fact‑checking efforts struggling to keep pace. X’s own AI chatbot Grok has even mislabeled some fabricated visuals as real.

Misinfo & Disinfo · Disinformation
Mar 7, 2026

Deepfake video falsely depicts Indian army chief sharing Iranian ship coordinates with Israel

A deepfake video of General Upendra Dwivedi, Chief of the Indian Army Staff, falsely claimed he admitted to sharing coordinates of an Iranian naval ship with Israel. The video was posted on an X account named "PLA Military Updates" and gained 15,900 views. Thai PBS Verify confirmed the video was AI-generated using Hive Moderation and AI Video Detector tools. A reverse image search traced the visuals to a Getty Images photo from February 26, 2026, showing Indian and Israeli prime ministers shaking hands. The original footage, from a March 7, 2026, YouTube video, showed General Dwivedi discussing military strategy and modernization, with no mention of Iran or Israel. Thai PBS Verify concluded the claim was fake news.

Misinfo & Disinfo · Synthetic Media
Feb 1, 2026 · Global

Olympic figure skater targeted with non-consensual AI-generated deepfake imagery

Female Olympic athletes, including Alysa Liu, Amber Glenn, Isabeau Levito, Mikaela Shiffrin, and Eileen Gu, have been victims of non-consensual AI-generated deepfakes, which have circulated on platforms like 4Chan. In late 2025, users on X prompted Grok AI to generate sexualized images of women and girls, including 14-year-old actress Nell Fisher. The issue of deepfake pornography has grown significantly, with 98% of deepfake content targeting women, and a 55% increase in online deepfakes from 2019 to 2023. Students at El Camino College expressed concerns about the impact of AI-generated content on women's mental health and social interactions. Despite the passage of the TAKE IT DOWN Act in Congress to combat deepfakes, such content remains a persistent problem.

Privacy & Surveillance · Deepfake NCII
Jan 23, 2026 · Northern District of California, USA

Multiple women file class action against xAI over non-consensual sexual deepfakes generated by Grok on X

On January 23, 2026, a class-action complaint was filed in the U.S. District Court for the Northern District of California alleging that X.AI Corp.'s AI chatbot Grok generated thousands of non-consensual sexual deepfake images that were posted on X (formerly Twitter). The lead plaintiff, identified as Jane Doe, says a fully clothed photograph of her was transformed into a revealing bikini image and shared publicly, causing severe emotional distress. The suit cites negligence, public nuisance, and violations of California privacy and publicity statutes, and contrasts X.AI's practices with competitors such as Google and OpenAI that employ stricter data-filtration methods. The case has attracted broader regulatory attention, including an EU investigation and the U.S. Senate's Defiance Act aimed at giving victims a cause of action for AI-generated sexual imagery.

Privacy & Surveillance · Minor
Dec 31, 2025 · Penarth, Wales

UK woman targeted by Grok-generated deepfake images, government criticised for slow response

Jess Davies, a Welsh presenter, accused the UK government of delaying action that could have prevented the spread of deepfake sexual images created by Grok AI, an AI chatbot developed by X (owned by Elon Musk). The incident occurred in the UK, with Davies based in Penarth, Vale of Glamorgan, and involved the non-consensual creation and sharing of explicit images of her via X. The UK government subsequently announced new legislation to criminalize the creation of such AI-generated content, although the law had reportedly been ready since June 2025. X has restricted access to Grok AI's image function to paying users, but free access via the app is still reported. The UK's online regulator, Ofcom, is investigating whether Grok AI violated online safety laws. The incident has sparked wider concerns about misogyny and the impact of AI on privacy and digital safety.

Privacy & Surveillance · Deepfake NCII
Oct 1, 2025 · Phnom Penh, Cambodia

Chinese Social Media Influencer 'Sister Orange' Arrested in Cambodia for Pig Butchering and Human Trafficking

Zhang Mucheng, a Chinese social media influencer known as 'Sister Orange' with over 100,000 followers, was arrested in Phnom Penh, Cambodia on charges of fraud and human trafficking. Cambodian authorities stated she worked with criminal gangs in Cambodia and China to traffic victims into scam compounds between October and November 2025. Her social media accounts were suspended following the arrest. The case drew international attention as a rare instance of an influencer-linked figure being held accountable in the transnational pig butchering ecosystem.

Fraud & Financial
Oct 1, 2025 · Spain

Spain opens investigation into X, Meta, and TikTok over AI-generated child sexual abuse material

Spain has launched an investigation into X, Meta, and TikTok over their role in the distribution of AI-generated child sexual abuse material. The probe scrutinizes the platforms' content-moderation policies and their responses to AI-generated abuse material, as part of broader efforts to address digital harms and protect children online. The investigation is ongoing, with potential outcomes including regulatory action or legal penalties.

Child Safety · CSAM · Minor
Sep 4, 2024 · United States

DOJ indicts RT employees and US firms for Russian interference in 2024 U.S. election

The U.S. Department of Justice announced criminal charges against two Russia Today employees and several U.S.-based companies for a coordinated scheme that funneled roughly $10 million from Russia to influence the 2024 presidential election. The indictment alleges the use of shell companies, including Tenet Media and its parent Roaming USA Corp, to pay American influencers and create disinformation on social media platforms, while Russian entities such as the Social Design Agency and Doppelganger created fake news sites and a bot farm. Dozens of domains and nearly 1,000 social‑media accounts were seized, and officials warned the operation, though smaller than the 2016 effort, represents a significant foreign‑influence campaign.

Misinfo & Disinfo
Sep 3, 2024 · United States

Chinese "Spamouflage" Influence Operation Uses Fake U.S. Voter Personas

Researchers at Graphika identified a Chinese state‑linked influence campaign, dubbed “Spamouflage,” that created a network of fake social‑media accounts impersonating U.S. voters, soldiers and a news outlet. The operation posted divisive content on X, TikTok, YouTube, Instagram and Facebook ahead of the 2024 presidential election, targeting topics such as reproductive rights, homelessness, Ukraine and Israel. Meta linked the network to Chinese law‑enforcement, while TikTok removed one of the accounts for policy violations after a video mocking President Biden amassed 1.5 million views. The campaign illustrates China’s use of deceptive online behavior to portray the United States as politically unstable.

Misinfo & Disinfo
Aug 31, 2024 · Canada

Chinese Spamouflage campaign targets Canadian officials and Chinese‑Canadian community

Rapid Response Mechanism Canada identified a new transnational repression operation, dubbed "Spamouflage," that began on August 31, 2024. The campaign uses hundreds of bot-like accounts on X, Facebook, TikTok and YouTube to post deepfake videos, sexually explicit AI-generated images, and doxxing material aimed at ten Mandarin-speaking Chinese-Canadian individuals as well as Canadian government officials, media outlets and the Canadian Armed Forces. The deepfakes falsely accuse Prime Minister Justin Trudeau, Minister Mélanie Joly and other officials of corruption and sexual scandals. Researchers attribute the coordinated inauthentic activity with high confidence to actors linked to the People's Republic of China.

Misinfo & Disinfo
Aug 17, 2024

Donald Trump posts deepfakes of Taylor Swift, Kamala Harris, and Elon Musk to manipulate voters

Donald Trump shared AI-generated deepfake images of Taylor Swift, Kamala Harris, and Elon Musk on his Truth Social platform in an effort to boost his 2024 presidential campaign. The images, including Swift in a "Swifties for Trump" T-shirt and Harris at a communist rally, were reposted from rightwing X accounts and falsely presented as endorsements. Trump also shared a deepfake video of himself dancing with Musk, who has endorsed him. These posts occurred in late July 2024 and reflect a growing trend of AI-generated disinformation in the U.S. election cycle. The use of AI imagery has raised concerns among researchers about the spread of election-related misinformation and the "liar’s dividend" effect, where authentic content is dismissed as fake. The AI images were created using tools like Musk’s Grok image generator, which lacks some of the safety measures found in other AI platforms.

Misinfo & Disinfo · Synthetic Media
Mar 13, 2024

Misinformation about Israeli Prime Minister Benjamin Netanyahu’s whereabouts debunked

On March 13, 2024, social media users circulated false claims that Israeli Prime Minister Benjamin Netanyahu had been assassinated or was missing, citing a video alleged to show a six‑finger deep‑fake frame. The rumors spread on platforms such as X and YouTube. Netanyahu’s office, referencing a statement to Anadolu Ajansi, issued a clarification that the Prime Minister is alive and well, refuting the deep‑fake allegations. The incident highlights the rapid propagation of political disinformation during the West Asia conflict.

Misinfo & Disinfo
Jan 29, 2024

Taylor Swift non-consensual AI deepfake pornography spreads on X, prompting legislative action

In early 2024, AI-generated pornographic deepfake images of singer Taylor Swift were widely shared on the social media platform X, with one post reaching over 47 million views before the account was suspended. X temporarily blocked searches for Swift's name and reinstated content-moderation measures, while the White House and Swift's fans condemned the abuse. The incident spurred bipartisan congressional efforts, including the No AI FRAUD Act, to criminalize the creation and distribution of non-consensual deepfake imagery. State lawmakers also highlighted the patchwork of existing protections, citing California and New York laws that already provide civil remedies for deepfake victims.

Privacy & Surveillance · Deepfake NCII
Jan 1, 2024 · Bangladesh

AI-generated disinformation disrupts Bangladesh's 2024 general election campaign

A report by *The Daily Star* and cited in the *Financial Times* highlights the use of AI-generated disinformation in Bangladesh ahead of its January 2024 elections. Pro-government outlets and influencers have used AI tools like HeyGen to create fake news clips and deepfake videos targeting both the ruling party and opposition Bangladesh Nationalist Party (BNP). Examples include an AI-generated news anchor criticizing the U.S. and a deepfake video falsely showing an opposition leader downplaying support for Gazans. The disinformation is spreading on platforms like X and Facebook, with Meta removing some content after being contacted by the *Financial Times*. Experts warn that the lack of regulation and the potential for bad actors to falsely claim content is AI-generated could further erode public trust in information. The issue is part of a growing global concern about AI's role in elections, particularly in smaller markets that may be overlooked by major tech companies.

Misinfo & Disinfo · Disinformation
Nov 1, 2023 · United Kingdom

George Freeman MP targeted by AI deepfake video falsely claiming he defected to rival party

A British member of Parliament, George Freeman, was targeted by an AI-generated deepfake video falsely claiming he had defected to a rival political party. The incident occurred in late 2023 and was discussed in a parliamentary hearing in early 2024. During a hearing before the House of Commons Science, Innovation and Technology Committee, representatives from Meta, Google, and X (formerly Twitter) were questioned about how the deepfake spread on their platforms. The companies provided explanations of their policies but did not commit to specific actions to prevent similar incidents or address the spread of the fake video. Freeman criticized the platforms for failing to act decisively and called for legislation to protect individuals from identity theft and misuse through AI. The hearing highlighted concerns about the spread of political misinformation and its threat to democratic processes in the UK.

Misinfo & Disinfo · Synthetic Media
Apr 8, 2021

Secretive global network of nonconsensual deepfake pornography sites revealed

A Bellingcat investigation uncovered a global network of nonconsensual deepfake pornography sites, including Clothoff, Nudify, Undress, and DrawNudes, which evade bans by disguising their activities. Tokens for Clothoff were being sold on G2A, a gaming marketplace, which later suspended the accounts involved. The incident highlights the involvement of multiple platforms and companies in facilitating the distribution of nonconsensual deepfake pornography.

Child Safety · Deepfake NCII · Minor

Linked Legislation (48)
HB 4496 — To Force Any Media/Internet Creator Providing Artificial Intelligence Created Videos To Have An Identifying Marker That Allows Viewers To Know That The Video Is Not Real
West Virginia
A 3411 — Requires Notices On Generative Artificial Intelligence Systems
New York
SB 894 — Artificial Intelligence; Prohibiting Distribution Of Certain Media And Requiring Certain Disclosures. Effective Date.
Oklahoma
A 9091 — Requires Search Engines Inform Users When Showing Information Which Was Generated Using Artificial Intelligence
New York
S 9236 — Relates To Falsely Reporting An Incident Through The Use Of Artificial Intelligence
New York
HB 5548 — Stop Non-Consensual Distribution Of Intimate Deep Fake Media Act
West Virginia
DEFIANCE Act of 2025 (HR 3562 / S.1837) — 119th Congress
United States
SB 720 — Stop Non-Consensual Distribution Of Intimate Deep Fake Media Act
West Virginia
SB 256 — Identity Protection Modifications
Utah
HB 2252 — An Act Amending Title 18 (Crimes And Offenses) Of The Pennsylvania Consolidated Statutes, In Sexual Offenses, Further Providing For The Offense Of Unlawful Dissemination Of Intimate Image
Pennsylvania
HB 3865 — Crimes And Punishments; Expanding Scope Of Crime To Include Materials And Pornography Generated Via Artificial Intelligence; Effective Date.
Oklahoma
S 1822 — Prohibits Speech-Based Defenses To Actions Brought Against An Individual For The Unlawful Dissemination Of Publication Of An Intimate Image
New York
S 8721 — Establishes Privacy And Publicity Rights For Likenesses Altered Using Artificial Intelligence
New York
SB 6184 — Concerning Deepfake Artificial Intelligence-Generated Pornographic Material Involving Minors
Washington
HB 4191 — Relating To Requirements Imposed On Social Media Companies To Prevent Corruption And Provide Transparency Of Election-Related Content Made Available On Social Media Websites
West Virginia
Protect Elections from Deceptive AI Act — 119th Congress (S.1213 / HR 5272)
United States
SB 484 — Relating To Disclosures And Penalties Associated With Use Of Synthetic Media And Artificial Intelligence
West Virginia
HB 4963 — Prohibiting The Use Of Deep Fake Technology To Influence An Election
West Virginia
SB 644 — Relating To: Disclosures Regarding Content Generated By Artificial Intelligence In Political Advertisements, Granting Rule-Making Authority, And Providing A Penalty
Wisconsin
AB 664 — Relating To: Disclosures Regarding Content Generated By Artificial Intelligence In Political Advertisements, Granting Rule-Making Authority, And Providing A Penalty. (FE)
Wisconsin
HB 1442 — Defining Synthetic Media In Campaigns For Elective Office, And Providing Relief For Candidates And Campaigns.
Washington
SB 5152 — Defining Synthetic Media In Campaigns For Elective Office, And Providing Relief For Candidates And Campaigns
Washington
H 846 — An Act Relating To Artificial Intelligence And Elections
Vermont
H 822 — An Act Relating To The Regulation Of Generative Artificial Intelligence Systems
Vermont
HB 982 — Political campaign advertisements; synthetic media, penalty
Virginia
HB 868 — Political campaign advertisements; synthetic media, penalty
Virginia
SB 775 — Political Campaign Advertisements; Synthetic Media, Penalty
Virginia
HB 2479 — Political Campaign Advertisements; Synthetic Media, Penalty
Virginia
SB 96 — Prohibit The Use Of A Deepfake To Influence An Election And To Provide A Penalty Therefor
South Dakota
H 3517 — Deceptive And Fraudulent Deepfake Media In Elections
South Carolina
H 4660 — Deceptive And Fraudulent Deepfake Media In Elections
South Carolina
SB 1571 — Relating To The Use Of Artificial Intelligence In Campaign Communications; Declaring An Emergency
Oregon
HB 3299 — Crimes And Punishments; Creating And Disseminating A Digitization Or Synthetic Media; Making Certain Acts Unlawful; Emergency
Oklahoma
SB 746 — Artificial Intelligence; Requiring Certain Disclosure For Certain Media. Effective Date.
Oklahoma
A 3327 — Relates to Political Communication Utilizing Artificial Intelligence
New York
S 6748 — Requires Publications To Identify When The Use Of Artificial Intelligence Is Present Within Such Publication
New York
S 2414 — Enacts The 'Political Artificial Intelligence Disclaimer (Paid) Act'
New York
A 6491 — Prohibits The Creation And Dissemination Of Synthetic Media Within Sixty Days Of An Election With Intent To Unduly Influence The Outcome Of An Election
New York
S 8400 — Prohibits The Creation And Dissemination Of Synthetic Media Within Sixty Days Of An Election With Intent To Unduly Influence The Outcome Of An Election
New York
A 7106 — Enacts The "Political Artificial Intelligence Disclaimer (PAID) Act"
New York
A 6790 — Prohibits The Creation And Dissemination Of Synthetic Media Within Sixty Days Of An Election With Intent To Unduly Influence The Outcome Of An Election
New York
SB 1295 — An Act Concerning Broadband Internet, Gaming, Social Media, Online Services And Consumer Contracts
Connecticut
HSB 294
Iowa
S 2 — Deepfake Disclosure
Florida
A 9103 — Relates to Political Communication Utilizing Artificial Intelligence
New York
SB 568 — An Act Providing For The Removal Of Nonconsenting Intimate Depictions From Social Media Platforms
Pennsylvania
SB 816 — An Act Relating To Elections -- Deceptive And Fraudulent Synthetic Media In Election Communications
Rhode Island
HB 5872 — An Act Relating To Elections -- Deceptive And Fraudulent Synthetic Media In Election Communications
Rhode Island

By Harm Domain

Misinfo & Disinfo: 11
Privacy & Surveillance: 4
Child Safety: 2
Fraud & Financial: 1