X
X has been named in 13 documented digital harm incidents, including 4 involving minors. The most common harm domain is Misinfo & Disinfo, followed by Privacy & Surveillance.
Documented Incidents
Kerala Police file FIR against X Corp over AI-generated deepfake video of PM Modi during election period
Kerala Police filed a First Information Report (FIR) against the social media platform X Corp and an unidentified user for circulating an AI-generated video featuring Prime Minister Narendra Modi. The Election Commission of India (ECI) flagged the video as a potential misinformation risk during the election period. The 77-second video is alleged to have been designed to mislead viewers and undermine democratic institutions. The case was registered in Thiruvananthapuram under provisions of the Bharatiya Nyaya Sanhita and the IT Act relating to forgery, public mischief, and identity theft. Authorities warned of strict action against those attempting to disrupt the electoral process and urged the public not to share unverified content.
False death rumors about Israeli PM Benjamin Netanyahu spread on X in March 2026
In March 2026, online posts claimed that Israeli Prime Minister Benjamin Netanyahu had died, alleging that a tweet confirming his death had been deleted. Israeli officials and the Turkish news agency Anadolu Ajansı refuted the claim, and Netanyahu posted a video on X showing he was alive. X's AI chatbot Grok and independent fact-checkers confirmed that no tweet had been deleted and that the video was unaltered. The misinformation was amplified amid heightened Israel-Iran tensions but caused no physical harm.
AI‑generated Iran‑US war deepfakes spread on X despite new policy
AI‑generated videos showing false scenes of an Iran‑US conflict have been circulating on X, the platform owned by Elon Musk. X announced a policy that suspends revenue‑sharing for creators who post undisclosed AI‑generated war content, imposing a 90‑day suspension for first‑time violators and permanent bans for repeat offenders. Researchers say the flood of deepfake videos continues, with premium verified accounts posting clips that garner millions of views, and fact‑checking efforts struggling to keep pace. X’s own AI chatbot Grok has even mislabeled some fabricated visuals as real.
Paris prosecutors raid X offices over failure to remove child abuse images and deepfakes
Paris prosecutors raided the offices of X (formerly Twitter) as part of an ongoing investigation into the distribution of child abuse images and deepfakes. The investigation focuses on how the platform handles illegal content, including child exploitation material and AI-generated deepfakes. The raid has brought increased scrutiny of X's content moderation practices and may lead to legal action against the company.
Olympic athletes targeted with non-consensual AI-generated deepfake imagery
Female Olympic athletes, including Alysa Liu, Amber Glenn, Isabeau Levito, Mikaela Shiffrin, and Eileen Gu, have been victims of non-consensual AI-generated deepfakes, which have circulated on platforms like 4Chan. In late 2025, users on X prompted Grok AI to generate sexualized images of women and girls, including 14-year-old actress Nell Fisher. The issue of deepfake pornography has grown significantly, with 98% of deepfake content targeting women, and a 55% increase in online deepfakes from 2019 to 2023. Students at El Camino College expressed concerns about the impact of AI-generated content on women's mental health and social interactions. Despite the passage of the TAKE IT DOWN Act in Congress to combat deepfakes, such content remains a persistent problem.
Multiple women file class action against xAI over non-consensual sexual deepfakes generated by Grok on X
On January 23, 2026 a class‑action complaint was filed in the U.S. District Court for the Northern District of California alleging that X.AI Corp.'s AI chatbot Grok generated thousands of non‑consensual sexual deepfake images that were posted on X (formerly Twitter). The lead plaintiff, identified as Jane Doe, says a fully clothed photograph of her was transformed into a revealing bikini image and shared publicly, causing severe emotional distress. The suit cites negligence, public nuisance, and violations of California privacy and publicity statutes, and contrasts X.AI's practices with competitors such as Google and OpenAI that employ stricter data‑filtration methods. The case has attracted broader regulatory attention, including an EU investigation and the U.S. Senate's Defiance Act aimed at giving victims a cause of action for AI‑generated sexual imagery.
UK woman targeted by Grok-generated deepfake images, government criticised for slow response
Jess Davies, a Welsh presenter, accused the UK government of delaying action that could have prevented the spread of deepfake sexual images created by Grok, the AI chatbot developed by Elon Musk's xAI and integrated into X. The incident occurred in the UK, with Davies based in Penarth, Vale of Glamorgan, and involved the non-consensual creation and sharing of explicit images of her via X. The UK government announced new legislation to criminalize the creation of such AI-generated content, although the law had reportedly been ready since June 2025. X has restricted access to Grok's image-generation function to paying users, but free access via the app is still reported. The UK's online regulator, Ofcom, is investigating whether Grok violated online safety laws. The incident has sparked wider concerns about misogyny and the impact of AI on privacy and digital safety.
French police raid X headquarters over Grok-generated child sexual abuse images
French police raided the Paris headquarters of X (formerly Twitter) in response to concerns over child abuse content generated by Grok, the platform's AI system. Elon Musk, CEO of X, was summoned by authorities in connection with the issue. The action, reported by Gadget Review, was taken over allegations that Grok produced content involving child abuse. No further details on legal consequences or outcomes were provided.
Irish Police Investigating 200 Reports of Grok-Generated Child Sexual Abuse Images
An Garda Síochána confirmed it is investigating approximately 200 reports of child sexual abuse-related images generated by xAI's Grok chatbot. Detective Chief Superintendent Barry Walsh of the Garda National Cyber Crime Bureau disclosed the investigation at an Oireachtas committee hearing. Gardaí are considering prosecutions under the Harassment, Harmful Communications and Related Offences Act 2020 and the Child Trafficking and Pornography Act 1998. The European Commission separately stated it would assess whether changes announced by X for Grok effectively protect EU citizens.
Donald Trump posts deepfakes of Taylor Swift, Kamala Harris, and Elon Musk to manipulate voters
Donald Trump shared AI-generated deepfake images of Taylor Swift, Kamala Harris, and Elon Musk on his Truth Social platform in an effort to boost his 2024 presidential campaign. The images, including Swift in a "Swifties for Trump" T-shirt and Harris at a communist rally, were reposted from right-wing X accounts and falsely presented as endorsements. Trump also shared a deepfake video of himself dancing with Musk, who has endorsed him. The posts occurred in late July 2024 and reflect a growing trend of AI-generated disinformation in the U.S. election cycle. The use of AI imagery has raised concerns among researchers about the spread of election-related misinformation and the "liar's dividend" effect, in which authentic content is dismissed as fake. The images were created using tools such as Musk's Grok image generator, which lacks some of the safety measures found in other AI platforms.
Taylor Swift non-consensual AI deepfake pornography spreads on X, prompting legislative action
In early 2026, AI‑generated pornographic deepfake images of singer Taylor Swift were widely shared on the social media platform X, with one post reaching over 47 million views before the account was suspended. X temporarily blocked searches for Swift’s name and reinstated content‑moderation measures, while the White House and Swift’s fans condemned the abuse. The incident spurred bipartisan congressional efforts, including the No AI FRAUD Act, to criminalize the creation and distribution of non‑consensual deepfake imagery. State lawmakers also highlighted the patchwork of existing protections, citing California and New York laws that already provide civil remedies for deepfake victims.
George Freeman MP targeted by AI deepfake video falsely claiming he defected to rival party
A British member of Parliament, George Freeman, was targeted by an AI-generated deepfake video falsely claiming he had defected to a rival political party. The incident occurred in late 2023 and was discussed in a parliamentary hearing in early 2024. During a hearing before the House of Commons Science, Innovation and Technology Committee, representatives from Meta, Google, and X (formerly Twitter) were questioned about how the deepfake spread on their platforms. The companies provided explanations of their policies but did not commit to specific actions to prevent similar incidents or address the spread of the fake video. Freeman criticized the platforms for failing to act decisively and called for legislation to protect individuals from identity theft and misuse through AI. The hearing highlighted concerns about the spread of political misinformation and its threat to democratic processes in the UK.
Russia's Internet Research Agency targets U.S. with social media disinformation during 2016 election
The Senate Intelligence Committee revealed that Russia's Internet Research Agency used social media platforms including Facebook, Instagram, and Twitter to target African Americans and spread disinformation aimed at sowing racial discord during the 2016 U.S. election. The agency's content was heavily focused on race-related themes. This incident highlights foreign interference through digital platforms during a critical U.S. political event.