X Corp. · Parent: X Holdings Corp · Launched 2023

X Corp.

X Corp. has been named in 13 documented digital harm incidents, including 3 involving minors. The most common harm domain is Privacy & Surveillance, followed by Addiction & Mental Health.

Incidents: 13
Fatalities: 0
Minors involved: 3
Financial harm: $0.0M

Documented Incidents (13)
Jan 23, 2026·Northern District of California, USA

Woman's clothed photo transformed into non-consensual sexual deepfake by X.AI Grok chatbot leading to emotional distress and class-action lawsuit

On January 23, 2026 a class‑action complaint was filed in the U.S. District Court for the Northern District of California alleging that X.AI Corp.'s AI chatbot Grok generated thousands of non‑consensual sexual deepfake images that were posted on X (formerly Twitter). The lead plaintiff, identified as Jane Doe, says a fully clothed photograph of her was transformed into a revealing bikini image and shared publicly, causing severe emotional distress. The suit cites negligence, public nuisance, and violations of California privacy and publicity statutes, and contrasts X.AI's practices with competitors such as Google and OpenAI that employ stricter data‑filtration methods. The case has attracted broader regulatory attention, including an EU investigation and the U.S. Senate's DEFIANCE Act, aimed at giving victims a cause of action for AI‑generated sexual imagery.

Privacy & Surveillance · CSAM · Minor
Dec 31, 2025·Penarth, Wales

Welsh presenter exposed to non-consensual deepfake sexual images via X Grok AI chatbot in UK

Jess Davies, a Welsh presenter, accused the UK government of delaying action that could have prevented the spread of deepfake sexual images created by Grok, the AI chatbot developed by Elon Musk's xAI and integrated into X. The incident occurred in the UK, with Davies based in Penarth, Vale of Glamorgan, and involved the non-consensual creation and sharing of explicit images of her via X. The UK government announced new legislation that week to criminalize the creation of such AI-generated content, although the law had been ready since June 2025. X has restricted access to Grok's image function to paying users, but free access via the app is still reported. The UK's online regulator, Ofcom, is investigating whether Grok violated online safety laws. The incident has sparked wider concerns about misogyny and the impact of AI on privacy and digital safety.

Privacy & Surveillance · Deepfake NCII
Jun 14, 2025·Cardiff, United Kingdom

23-year-old woman with anorexia relapses after exposure to "skinnytok" content on TikTok algorithm

A 23-year-old woman from Cardiff, Eve Jones, deleted TikTok to avoid triggering an eating disorder relapse after being exposed to "skinnytok" content promoting restrictive eating and extreme weight loss. Despite TikTok blocking the search term "skinnytok" in 2024, Eve said the ban was too late, as harmful content was already widely accessible and could be found using alternative hashtags or search terms. The content, which encourages disordered eating habits, led to her feed being flooded with similar videos after interacting with just one post. Eve explained that the messaging normalizes unhealthy weight loss under the guise of "healthy" self-control, which conflicts with her recovery from anorexia. A 2022 survey by eating disorder charity Beat found that 91% of respondents with eating disorders had encountered harmful content online. TikTok stated it continues to restrict videos and provide health resources, but users like Eve argue that harmful content remains easily accessible despite platform efforts.

Addiction & Mental Health · Eating Disorder
May 30, 2025·United Kingdom

21-year-old photographer exposed to AI-generated sexualized image on Grok without consent

A 21-year-old photographer named Evie had an AI-generated, sexualised image of herself created and shared without her consent by Grok, X's AI-powered chatbot. The image was generated in response to a user prompt asking Grok to recreate one of Evie’s selfies with hot glue dripping down her face and her tongue sticking out. The incident occurred on X, the social media platform owned by Elon Musk, and the image was posted publicly on Grok’s official X account. Evie reported the incident to X, but only the comment prompting the image was given a visibility limit, while the image and the user’s account remained visible. Grok, developed by xAI, claimed it does not generate or post images without consent and suggested the post may have been a spoof or account compromise. The incident highlights the lack of legal protections for victims of AI-generated image-based sexual abuse, as such acts are not currently illegal in England and Wales.

Privacy & Surveillance · Deepfake NCII
May 12, 2025·India

Senior Indian journalist targeted with doxxing and Islamophobic harassment via X account "Hindutva Knight" leading to widespread condemnation

Arfa Khanum Sherwani, a senior Indian journalist and editor of The Wire, became a victim of online harassment after her personal contact information was allegedly leaked. The harassment followed a social media post she made urging peace and de-escalation between India and Pakistan. Over the next 24 hours, she received a continuous stream of threatening messages and calls, many containing Islamophobic content and inflammatory remarks. Some messages falsely accused her of sympathizing with Pakistan and included offensive depictions of the Holy Quran and disparaging references to the Prophet Muhammad (PBUH). The doxxing was linked to an X account called "Hindutva Knight," allegedly operated by Chandan Sharma, who had previously targeted journalist Rana Ayyub. The incident has drawn condemnation from fellow journalists, with calls for government action to address online harassment.

Privacy & Surveillance · Unauthorized Surveillance
Feb 27, 2025·Oxford, Mississippi, United States

University of Mississippi student targeted with doxxing and defamatory AI-generated content on X leading to police and FBI reports

Ole Miss student Mary Kate Cornett was the victim of a cyber attack in which false and defamatory sexual allegations were spread online, including on the platform X (formerly Twitter). The attack involved doxxing, fake AI-generated videos, manipulated photos, and thousands of harassing messages, some containing threats. The Cornett family, from Texas, filed police reports with local Oxford police, University of Mississippi campus police, and the FBI. They also hired a forensic data investigation team and contacted Texas Congressman Wesley Hunt’s office for assistance, though Hunt’s office clarified it was not involved in the investigation. The family created a GoFundMe to support other victims of cyber attacks. The incident became a top trending topic on X in the U.S. and was linked to an unauthorized meme coin using Cornett’s name and likeness.

Misinfo & Disinfo · Cyberbullying
Jan 27, 2025

38-year-old British television presenter exposed to non-consensual image-based sexual abuse content online

Vicky Pattison discovered sexually explicit image-based abuse content featuring her likeness online while filming her Channel 4 documentary *Vicky Pattison: My Deepfake Sex Tape*. During the production, takedown experts found 1,700 concerning results, including graphic content involving her image, some of which involved users masturbating over her bikini photos. The content was identified as image-based sexual abuse: not AI-generated, but still harmful. Pattison had separately released a staged deepfake sex tape as part of the documentary, using an actor with AI-generated visuals. The documentary aimed to highlight the issue of deepfake porn, though some survivors criticized the approach as offensive. Pattison said she wrestled with the decision to post the fake video and acknowledged the upset it could cause.

Privacy & Surveillance
Jan 23, 2025·Petaling Jaya, Malaysia

Malaysian cosplayer's photos edited into explicit AI-generated images sold on Tumblr leading to police investigation

A Malaysian cosplayer discovered her photos were being edited into explicit images using AI and sold online without her consent. The edited images were being sold for RM2 per image or RM18 for an album on Tumblr. The cosplayer, @elyanasparks, identified the perpetrator by posing as a customer and obtaining his full name and banking details. The case was reported to the police and is being investigated under Section 292 of the Penal Code. The Malaysian Communications and Multimedia Commission (MCMC) has also been involved in the investigation.

Privacy & Surveillance · Deepfake NCII · Minor
Jan 1, 2025·United Kingdom

Woman discovers AI-generated "glue" images of herself on X via Grok chatbot leading to feelings of violation and lack of platform response

A woman known online as @EFCEvie discovered AI-generated images of herself covered in "glue", a euphemism for images depicting semen, posted in response to one of her posts on X. The images were created using Grok, an AI chatbot developed by xAI, Elon Musk's company, and integrated into the X platform. The incident left her feeling violated and unsafe, prompting her to report the tweet immediately. Despite reporting the abuse, she found that authorities and social media platforms have been ineffective in addressing the issue. Other female online personalities, including streamers Valkyrae and BrookeAB, have also been targeted with AI-generated sexualized images created via Grok. Legal experts note that while distributing non-consensual intimate images is a criminal offense in the UK, "glue" images are not currently covered under the law.

Privacy & Surveillance · Deepfake NCII
Jul 8, 2024·Belfast, United Kingdom

Belfast man sends threatening online messages and damages office windows leading to 31-month prison sentence and restraining order

A Belfast man, Aaron Thomas Curragh, sent threatening online messages to Northern Ireland's deputy first minister, Emma Little-Pengelly, and smashed the windows of a party colleague's office. The threatening posts were posted on Twitter in July 2024 and included a video implying a death threat against Little-Pengelly. Curragh also damaged Joanne Bunting’s office in December 2023 and again in July 2024, with both incidents captured on CCTV. Both Little-Pengelly and Bunting submitted victim impact statements describing the fear and distress caused by the attacks. Curragh was sentenced to 31 months in prison, with half to be served in custody and half on licence, and received a seven-year restraining order against Little-Pengelly. The court heard that Curragh exhibited irrational thinking and rejected a mental health assessment.

Addiction & Mental Health
Jan 29, 2024

Singer Taylor Swift targeted by viral AI-generated sexual deepfake images on X leading to temporary search blocks and federal legislative proposals

In late January 2024, AI‑generated pornographic deepfake images of singer Taylor Swift were widely shared on the social media platform X, with one post reaching over 47 million views before the account was suspended. X temporarily blocked searches for Swift's name and reinstated content‑moderation measures, while the White House and Swift's fans condemned the abuse. The incident spurred bipartisan congressional efforts, including the No AI FRAUD Act, to criminalize the creation and distribution of non‑consensual deepfake imagery. State lawmakers also highlighted the patchwork of existing protections, citing California and New York laws that already provide civil remedies for deepfake victims.

Privacy & Surveillance · Deepfake NCII
Nov 1, 2023·United Kingdom

British member of Parliament defamed by AI-generated deepfake video on social platforms leading to parliamentary hearing

A British member of Parliament, George Freeman, was targeted by an AI-generated deepfake video falsely claiming he had defected to a rival political party. The incident occurred in late 2023 and was discussed in a parliamentary hearing in early 2024. During a hearing before the House of Commons Science, Innovation and Technology Committee, representatives from Meta, Google, and X (formerly Twitter) were questioned about how the deepfake spread on their platforms. The companies provided explanations of their policies but did not commit to specific actions to prevent similar incidents or address the spread of the fake video. Freeman criticized the platforms for failing to act decisively and called for legislation to protect individuals from identity theft and misuse through AI. The hearing highlighted concerns about the spread of political misinformation and its threat to democratic processes in the UK.

Misinfo & Disinfo · Synthetic Media
Jan 1, 2017·Massachusetts, United States

University professor cyberstalked via AI chatbots and fake social media accounts for seven years

A Massachusetts man used AI chatbots to impersonate a university professor and lure strangers to her home for sex as part of a seven-year cyberstalking campaign. James Florence, 36, programmed chatbots on platforms like CrushOn.ai and JanitorAI to use the professor’s personal information—including her home address, family details, and stolen underwear—to engage users in sexual dialogue. The chatbots were designed to suggest, “Why don’t you come over?” leading to strangers parking outside the professor’s home. Florence also created fake social media accounts and websites to harass the professor and distribute manipulated images of her, and he stole and shared her personal information online. The stalking occurred between 2017 and 2024, during which the professor and her husband installed surveillance cameras, carried self-defense tools, and received over 60 harassing communications. Florence has agreed to plead guilty to seven counts of cyberstalking and one count of possession of child pornography.

Privacy & Surveillance · Deepfake NCII · Minor

Linked Legislation (8)
- DEFIANCE Act of 2025 (HR 3562 / S. 1837) — 119th Congress (United States)
- S 8721 — Establishes Privacy And Publicity Rights For Likenesses Altered Using Artificial Intelligence (New York)
- HB 5548 — Stop Non-Consensual Distribution Of Intimate Deep Fake Media Act (West Virginia)
- SB 720 — Stop Non-Consensual Distribution Of Intimate Deep Fake Media Act (West Virginia)
- SB 256 — Identity Protection Modifications (Utah)
- SB 568 — An Act Providing For The Removal Of Nonconsenting Intimate Depictions From Social Media Platforms (Pennsylvania)
- HB 3865 — Crimes And Punishments; Expanding Scope Of Crime To Include Materials And Pornography Generated Via Artificial Intelligence; Effective Date (Oklahoma)
- S 1822 — Prohibits Speech-Based Defenses To Actions Brought Against An Individual For The Unlawful Dissemination Or Publication Of An Intimate Image (New York)

By Harm Domain

- Privacy & Surveillance: 9
- Addiction & Mental Health: 2
- Misinfo & Disinfo: 2