
Introducing the Digital Harms Tracker


We built the Digital Harms Tracker with heavy use of AI tools, and we think it’s important to say that upfront.

The platform runs on an automated pipeline that monitors global news sources every hour, classifies incidents using large language models, extracts structured data, and queues everything for human review before publication. We believe in the power of this technology. It lets a small team do work that would otherwise require dozens of researchers monitoring thousands of sources around the clock.
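To make that architecture concrete, here is a minimal sketch of an hourly ingest, classify, and review loop. Everything in it, from the function names to the Candidate fields, is our own illustration of the design described above, not the Tracker's actual code.

```python
"""Illustrative sketch of an hourly ingest -> classify -> human-review
loop. All names and fields are assumptions made for this example; they
are not the Tracker's production code."""
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class Article:
    url: str
    title: str
    text: str

@dataclass
class Candidate:
    """One potential incident, held until a human editor reviews it."""
    article: Article
    harm_domain: str          # e.g. "fraud", "child_safety"
    model_confidence: float   # the classifier's score, not ground truth
    extracted: dict           # structured fields pulled by the model

def hourly_run(
    fetch: Callable[[], Iterable[Article]],        # last hour of news
    classify: Callable[[str], tuple[str, float]],  # LLM: text -> (domain, score)
    extract: Callable[[str], dict],                # LLM: text -> structured fields
    review_queue: list,                            # editors drain this queue
) -> None:
    for article in fetch():
        domain, score = classify(article.text)
        if domain != "not_a_harm":
            review_queue.append(
                Candidate(article, domain, score, extract(article.text))
            )
    # Nothing publishes from here: a human approves or rejects every
    # Candidate before it reaches the public database.
```

The detail worth noticing is the last step: the models only nominate records, and a human editor decides what gets published.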

But believing in the power of a technology is not the same as believing it should operate without oversight. We've seen this pattern before. Cars transformed how people live and work, and we also require seatbelts, crash testing, and emissions standards. Food production feeds billions, and we also inspect factories and regulate what goes on labels. The technology works. The question is always what happens when it fails, who gets hurt, and whether the rules have kept up.

And just as in those domains, not all regulation is helpful. Some legislation is well-targeted and evidence-based. Some is reactive, poorly scoped, or creates more problems than it solves. The tension between age verification mandates and privacy rights, the recurring debates over Section 230, the attempted federal moratorium on state AI laws: these are genuinely hard problems where reasonable people disagree.

Our job is not to take sides. We are taking as scientific an approach as we can to building the evidence base: documenting what happened, tracking what lawmakers are doing about it, and making the connections visible so the people whose job it is to make policy can work from the same set of facts.

This project exists to answer those questions for the digital world through evidence, not advocacy.

What we've built

The Digital Harms Tracker is a structured, searchable database connecting documented digital harm incidents to the policies that address them. The collection is growing rapidly; as of today it contains:

  • Nearly 500 verified incidents across 8 harm domains, from AI-powered fraud to child exploitation to algorithmic discrimination
  • Close to 400 policies across more than 50 jurisdictions, including federal legislation, state laws, regulations, court rulings, and executive orders
  • Over 1,200 incident-to-policy linkages connecting the evidence to the regulatory response
  • Roughly 40 litigation records tracking lawsuits and enforcement actions

New incidents are ingested hourly through an automated pipeline, classified by AI, and reviewed by human editors before publication. The incident database grows every day.

The policy database is actively being built out. We are working state by state through the US legislative landscape, identifying what passed, what failed, what's pending, and what's under discussion. We're backfilling historical legislation and expanding coverage of federal regulatory actions, court rulings, and enforcement proceedings. The current collection is a foundation we're adding to continuously. We expect the policy count to grow substantially over the coming months as we fill in gaps across jurisdictions.

Every incident is sourced from credible reporting. Every policy is tagged by status: enacted, proposed, under review, rejected, or repealed. And the connections between them are structured and searchable.
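As a hedged sketch of what those records can look like, with field names that are our assumptions rather than the Tracker's published schema:

```python
"""Illustrative record shapes for incidents, policies, and the links
between them. Field names are assumptions for this example only."""
from dataclasses import dataclass
from enum import Enum

class PolicyStatus(Enum):
    ENACTED = "enacted"
    PROPOSED = "proposed"
    UNDER_REVIEW = "under review"
    REJECTED = "rejected"
    REPEALED = "repealed"

@dataclass
class Incident:
    incident_id: str
    harm_domain: str         # one of the 8 tracked domains
    source_urls: list[str]   # the credible reporting behind the record
    involves_minors: bool

@dataclass
class Policy:
    policy_id: str
    jurisdiction: str        # e.g. "US-CA", "EU", "AU"
    status: PolicyStatus

@dataclass
class Link:
    """A structured, searchable incident-to-policy connection."""
    incident_id: str
    policy_id: str
    relation: str            # e.g. "addressed_by" (hypothetical label)
```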

What March 2026 looks like

In the first three weeks of March, we documented nearly 50 new incidents. Here's what the data shows.

Self-harm and suicide was the most active domain this month, with 13 incidents, 9 of them involving fatalities. Multiple lawsuits were filed against AI chatbot companies, including Google, OpenAI, and Character.AI, alleging their products contributed to users' deaths. OpenAI delayed the release of a new feature following suicide cases linked to ChatGPT. The lawsuits now span multiple platforms: ChatGPT, Google Gemini, Microsoft Copilot, Character.AI, and Replika all appear in March incident records. This is a domain where the pace of harm is outrunning the pace of regulation.

Child safety remained urgent, with 9 incidents. Australia's eSafety Commissioner found that nearly 80% of children surveyed had used an AI chatbot, with platforms like Character.AI, Chub AI, and Nomi lacking basic age verification. Florida opened an investigation into Discord over child safety failures. A 9-year-old died after participating in the "Blackout" social media challenge. Across the full database, more than one in four incidents involve minors.

Fraud is evolving faster than enforcement. New fraud incidents in March included a Singapore finance director who lost nearly $500,000 to a deepfake Zoom call and a report documenting over $1 billion in deepfake-enabled corporate fraud in the US in 2025. AI voice cloning scams continue to target elderly victims across multiple states.

Misinformation is going global. Nine new misinformation incidents this month included deepfakes of political figures in the UK, India, and the Netherlands. In the Netherlands, AI chatbots were found to be using the likenesses of politicians for sexually explicit conversations, with no legal framework to prevent it.

The policy landscape in numbers. Of the policies we currently track, roughly 220 have been enacted and over 130 remain proposed, sitting in committees or awaiting votes. About 25 have been formally rejected. Three have been repealed, most notably the Biden administration's AI safety executive order. These numbers will shift as we continue building out coverage, but patterns are already emerging around where legislative energy is focused, where enforcement is active, and where documented harms have no policy response at all.

What's next

The Digital Harms Tracker is currently an independent, unfunded research project. We are building in the open: expanding incident coverage, filling in the policy landscape state by state, and strengthening the linkages between documented harms and legislative responses.

We are preparing to establish a formal organizational structure and begin fundraising in the coming months. In the meantime, the data is growing daily at digitalharmstracker.org.

If you work in digital safety, tech policy, or related research, we want to hear from you. We're looking for early partners: organizations willing to use the data, pressure-test the methodology, and help shape what this becomes.

Contact: contact@digitalharmstracker.org
