AI Chatbot · xAI · Launched 2023

Grok

Grok has been named in 6 documented digital harm incidents, including 3 involving minors. The most common harm domain is Child Safety, followed by Privacy & Surveillance.

Incidents: 6
Fatalities: 0
Minors involved: 3
Financial harm

Documented Incidents (6)
Feb 1, 2026 · Global

Olympic figure skater targeted with non-consensual AI-generated deepfake imagery

Female Olympic athletes, including Alysa Liu, Amber Glenn, Isabeau Levito, Mikaela Shiffrin, and Eileen Gu, have been victims of non-consensual AI-generated deepfakes, which have circulated on platforms such as 4chan. In late 2025, users on X prompted Grok AI to generate sexualized images of women and girls, including 14-year-old actress Nell Fisher. Deepfake pornography has grown significantly: 98% of deepfake content targets women, and online deepfakes increased by 55% from 2019 to 2023. Students at El Camino College expressed concerns about the impact of AI-generated content on women's mental health and social interactions. Despite Congress passing the TAKE IT DOWN Act to combat deepfakes, such content remains a persistent problem.

Privacy & Surveillance · Deepfake NCII
Jan 1, 2026 · Tennessee, USA

Teens allege Musk's Grok chatbot made sexual images of them as minors

Multiple teenagers alleged that xAI's Grok chatbot generated sexual images of them as minors when prompted. The incidents raised immediate calls for regulatory action against the platform, with lawmakers citing Grok's comparatively weak content safeguards relative to other major AI systems.

Child Safety · CSAM · Minor
Dec 31, 2025 · Penarth, Wales

UK woman targeted by Grok-generated deepfake images, government criticised for slow response

Jess Davies, a Welsh presenter based in Penarth, Vale of Glamorgan, accused the UK government of delaying action that could have prevented the spread of deepfake sexual images created with Grok, the AI chatbot developed by xAI and integrated into X (owned by Elon Musk). The incident involved the non-consensual creation and sharing of explicit images of her via X. The UK government announced new legislation to criminalize the creation of such AI-generated content, although the law had reportedly been ready since June 2025. X has restricted Grok's image-generation function to paying users, though free access via the app has reportedly remained available. Ofcom, the UK's online regulator, is investigating whether Grok violated online safety laws. The incident has sparked wider concerns about misogyny and the impact of AI on privacy and digital safety.

Privacy & Surveillance · Deepfake NCII
Nov 1, 2025 · Paris, France

French police raid X headquarters over Grok-generated child sexual abuse images

French police raided the Paris headquarters of X (formerly Twitter) over allegations that Grok, an AI system, had generated child sexual abuse content. Elon Musk, CEO of X, was summoned by authorities in connection with the issue. The raid was reported by Gadget Review. No further details on legal consequences or outcomes were provided in the article.

Child Safety · CSAM · Minor
Apr 1, 2025 · Ireland

Irish Police Investigating 200 Reports of Grok-Generated Child Sexual Abuse Images

An Garda Síochána confirmed it is investigating approximately 200 reports of child sexual abuse-related images generated by xAI's Grok chatbot. Detective Chief Superintendent Barry Walsh of the Garda National Cyber Crime Bureau disclosed the investigation at an Oireachtas committee hearing. Gardaí are considering prosecutions under the Harassment, Harmful Communications and Related Offences Act 2020 and the Child Trafficking and Pornography Act 1998. The European Commission separately stated it would assess whether changes announced by X for Grok effectively protect EU citizens.

Child Safety · CSAM · Minor
Aug 17, 2024

Donald Trump posts deepfakes of Taylor Swift, Kamala Harris, and Elon Musk to manipulate voters

Donald Trump shared AI-generated deepfake images of Taylor Swift, Kamala Harris, and Elon Musk on his Truth Social platform in an effort to boost his 2024 presidential campaign. The images, including Swift in a "Swifties for Trump" T-shirt and Harris at a communist rally, were reposted from right-wing X accounts and falsely presented as endorsements. Trump also shared a deepfake video of himself dancing with Musk, who has endorsed him. These posts occurred in late July 2024 and reflect a growing trend of AI-generated disinformation in the U.S. election cycle. The use of AI imagery has raised concerns among researchers about the spread of election-related misinformation and the "liar's dividend" effect, in which authentic content is dismissed as fake. The AI images were created using tools such as Musk's Grok image generator, which lacks some of the safety measures found in other AI platforms.

Misinfo & Disinfo · Synthetic Media

Linked Legislation (44)
HB 5548 — Stop Non-Consensual Distribution Of Intimate Deep Fake Media Act
West Virginia
DEFIANCE Act of 2025 (HR 3562 / S.1837) — 119th Congress
United States
SB 720 — Stop Non-Consensual Distribution Of Intimate Deep Fake Media Act
West Virginia
SB 256 — Identity Protection Modifications
Utah
HB 2252 — An Act Amending Title 18 (Crimes And Offenses) Of The Pennsylvania Consolidated Statutes, In Sexual Offenses, Further Providing For The Offense Of Unlawful Dissemination Of Intimate Image
Pennsylvania
HB 3865 — Crimes And Punishments; Expanding Scope Of Crime To Include Materials And Pornography Generated Via Artificial Intelligence; Effective Date.
Oklahoma
S 1822 — Prohibits Speech-Based Defenses To Actions Brought Against An Individual For The Unlawful Dissemination Of Publication Of An Intimate Image
New York
AB 965 — Relating to artificial intelligence systems that simulate humanlike relationships with children and providing a penalty
Wisconsin
HB 289 — Child Sexual Abuse Material Amendments
Utah
SB 1521 — Artificial Intelligence; Prohibiting The Creation Of Certain Artificial Intelligence Chatbots; Requiring Certain Age Verification Measures And Protections For User Data. Effective Date.
Oklahoma
HB 4496 — To Force Any Media/Internet Creator Providing Artificial Intelligence Created Videos To Have An Identifying Marker That Allows Viewers To Know That The Video Is Not Real
West Virginia
Protect Elections from Deceptive AI Act — 119th Congress (S.1213 / HR 5272)
United States
SB 484 — Relating To Disclosures And Penalties Associated With Use Of Synthetic Media And Artificial Intelligence
West Virginia
HB 4963 — Prohibiting The Use Of Deep Fake Technology To Influence An Election
West Virginia
HB 4191 — Relating To Requirements Imposed On Social Media Companies To Prevent Corruption And Provide Transparency Of Election-Related Content Made Available On Social Media Websites
West Virginia
SB 644 — Relating To: Disclosures Regarding Content Generated By Artificial Intelligence In Political Advertisements, Granting Rule-Making Authority, And Providing A Penalty
Wisconsin
AB 664 — Relating To: Disclosures Regarding Content Generated By Artificial Intelligence In Political Advertisements, Granting Rule-Making Authority, And Providing A Penalty. (FE)
Wisconsin
HB 1442 — Defining Synthetic Media In Campaigns For Elective Office, And Providing Relief For Candidates And Campaigns.
Washington
SB 5152 — Defining Synthetic Media In Campaigns For Elective Office, And Providing Relief For Candidates And Campaigns
Washington
H 846 — An Act Relating To Artificial Intelligence And Elections
Vermont
H 822 — An Act Relating To The Regulation Of Generative Artificial Intelligence Systems
Vermont
HB 982 — Political campaign advertisements; synthetic media, penalty
Virginia
HB 868 — Political campaign advertisements; synthetic media, penalty
Virginia
SB 775 — Political Campaign Advertisements; Synthetic Media, Penalty
Virginia
HB 2479 — Political Campaign Advertisements; Synthetic Media, Penalty
Virginia
SB 96 — Prohibit The Use Of A Deepfake To Influence An Election And To Provide A Penalty Therefor
South Dakota
H 3517 — Deceptive And Fraudulent Deepfake Media In Elections
South Carolina
H 4660 — Deceptive And Fraudulent Deepfake Media In Elections
South Carolina
SB 1571 — Relating To The Use Of Artificial Intelligence In Campaign Communications; Declaring An Emergency
Oregon
HB 3299 — Crimes And Punishments; Creating And Disseminating A Digitization Or Synthetic Media; Making Certain Acts Unlawful; Emergency
Oklahoma
SB 894 — Artificial Intelligence; Prohibiting Distribution Of Certain Media And Requiring Certain Disclosures. Effective Date.
Oklahoma
SB 746 — Artificial Intelligence; Requiring Certain Disclosure For Certain Media. Effective Date.
Oklahoma
A 3411 — Requires Notices On Generative Artificial Intelligence Systems
New York
S 9236 — Relates To Falsely Reporting An Incident Through The Use Of Artificial Intelligence
New York
A 3327 — Relates to Political Communication Utilizing Artificial Intelligence
New York
S 6748 — Requires Publications To Identify When The Use Of Artificial Intelligence Is Present Within Such Publication
New York
S 2414 — Enacts The 'Political Artificial Intelligence Disclaimer (Paid) Act'
New York
A 6491 — Prohibits The Creation And Dissemination Of Synthetic Media Within Sixty Days Of An Election With Intent To Unduly Influence The Outcome Of An Election
New York
S 8400 — Prohibits The Creation And Dissemination Of Synthetic Media Within Sixty Days Of An Election With Intent To Unduly Influence The Outcome Of An Election
New York
A 7106 — Enacts The "Political Artificial Intelligence Disclaimer (PAID) Act"
New York
A 6790 — Prohibits The Creation And Dissemination Of Synthetic Media Within Sixty Days Of An Election With Intent To Unduly Influence The Outcome Of An Election
New York
SB 1295 — An Act Concerning Broadband Internet, Gaming, Social Media, Online Services And Consumer Contracts
Connecticut
HSB 294
Iowa
S 2 — Deepfake Disclosure
Florida

By Harm Domain

Child Safety: 3
Privacy & Surveillance: 2
Misinfo & Disinfo: 1