Connecting Digital Harms to Policy Response

Evidence-focused, not advocacy-driven.

This platform is in beta; data is actively being expanded and the methodology refined.

How we define harm and incident →

598
Incidents Tracked
79
Platforms
74
Fatalities
105
Actors Named
1,618
Policies Tracked

Latest entries

Recently documented incidents

View all 598
Fraud & Financial

Middle-aged couple in Gujarat defrauded via AI voice cloning of son's voice

Apr 7, 2026 · Ahmedabad, India

A middle-aged couple in Gujarat reported a fraud in which scammers used artificial intelligence to clone their son’s voice and request money. The incident occurred on April 7, 2026, when the couple received a distress call from an unknown number claiming their son in Canada had an accident and needed $300. Police confirmed the fraudsters had cloned the son’s voice, likely using audio from his social media posts. Investigators noted that AI voice cloning is an emerging and rapidly growing cyber scam, with fraudsters targeting multiple families at once. Parents in similar cases have received ransom calls using cloned voices of their children. Authorities advised verifying suspicious calls through known numbers and reporting incidents to the National Cyber Crime Helpline.

Privacy & Surveillance · Deepfake NCII

Yuzvendra Chahal targeted by AI deepfake ahead of IPL 2026 match

Apr 6, 2026 · Chandigarh, India

Yuzvendra Chahal, a cricketer for Punjab Kings (PBKS), was targeted by an AI-generated deepfake video ahead of an IPL 2026 match, as reported by MSN. The deepfake was designed to deceive viewers and spread misinformation about Chahal. Consequences included potential reputational damage and public confusion caused by the AI-generated content.

Privacy & Surveillance · Deepfake NCII

Actress subjected to AI deepfake video impersonating her likeness distributed via YouTube

Mar 31, 2026 · South Korea

Veteran actress Yeom Hye Ran was the victim of an AI deepfake when an unauthorized AI-generated video using her likeness was uploaded to YouTube on March 31. Her agency, Ace Factory, confirmed the video was produced without consent and was later removed. The incident followed a previous controversy involving the AI film 'The Inspector,' which used Yeom Hye Ran's likeness without proper authorization. The misuse of AI in film production has raised concerns about portrait rights violations, a topic that gained global attention during the 2023 Hollywood strikes. Those strikes, which lasted 118 days, led to agreements on AI usage regulations, wage increases, and improved residuals; similar disputes are now emerging in the Korean film industry. The incident highlights the urgent need for proactive measures to prevent AI-related privacy and rights violations.

Companies: Ace Factory, Writers Guild of America, Hollywood studios
Platforms: YouTube

What you can do with it

Not just data: a working tool for the policymakers, researchers, and advocates who need it.

Draft a policy brief

Find documented cases of AI voice cloning fraud in the US. See which bills address it. Done in 30 seconds.

Research cross-platform patterns

Filter 598 incidents by platform, harm type, severity, and date range. Every entry cites the original source for verification.

Identify policy gaps

See which harms have no legislative response. Compare enacted vs. proposed policies across jurisdictions. Find where protection is missing.

Know something we don't?

Flag an incident, a policy, or a court case worth tracking, or request an analysis on a topic you care about.

Submit a tip

Built by an independent research team with a published methodology. Currently unfunded and preparing to fundraise in the coming months.

This project is a work in progress; we're building in the open.