Definition of Harm & Incident Criteria
What We Mean by "Harm"
A digital harm is an adverse outcome that is experienced by an identifiable person, defined group, or institution and that is caused or materially enabled by the design, operation, or use of a digital platform, algorithmic system, or AI technology.
Drawing on established academic and regulatory frameworks, the Tracker recognizes harm across six dimensions of impact:
| Impact Type | Description | Source Tradition |
|---|---|---|
| Physical | Bodily injury or death | Agrafiotis et al. 2018; Citron & Solove 2022 |
| Psychological | Emotional distress, trauma, mental health deterioration | Scheuerman et al. 2021; Citron & Solove 2022 |
| Economic | Financial loss, property damage, livelihood disruption | Agrafiotis et al. 2018; OECD 2024 |
| Reputational | Damage to standing, dignity, or public perception | Solove 2006; Agrafiotis et al. 2018 |
| Autonomy | Loss of agency, manipulation, coercion, or denial of self-determination | Citron & Solove 2022; EU DSA Art. 34 |
| Discriminatory | Unequal treatment, denial of opportunity, or reinforcement of unjust hierarchies based on identity | Shelby et al. 2023; Citron & Solove 2022 |
A single incident may produce harm across multiple dimensions simultaneously.
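The six dimensions form a small closed vocabulary, and an incident carries a set of them rather than a single label. A minimal sketch of that tagging model (the class and field names are illustrative, not the Tracker's actual schema):

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class HarmDimension(Enum):
    """The six impact types recognized by the Tracker."""
    PHYSICAL = auto()
    PSYCHOLOGICAL = auto()
    ECONOMIC = auto()
    REPUTATIONAL = auto()
    AUTONOMY = auto()
    DISCRIMINATORY = auto()


@dataclass
class IncidentHarms:
    """A single incident may produce harm across multiple dimensions."""
    incident_id: str
    dimensions: set = field(default_factory=set)

    def tag(self, dim: HarmDimension) -> None:
        self.dimensions.add(dim)


# Hypothetical example: a sextortion case producing both
# psychological and economic harm.
record = IncidentHarms("inc-001")
record.tag(HarmDimension.PSYCHOLOGICAL)
record.tag(HarmDimension.ECONOMIC)
```

Using a set rather than a single field reflects the point above: the dimensions are not mutually exclusive.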
What We Mean by "Incident"
An incident is a discrete, documented event in which a digital platform or AI system caused or materially enabled real-world harm to an identifiable harm recipient. It is the foundational unit of the Tracker's evidence base.
The Three-Part Test
Every incident must satisfy all three of the following criteria. If any one is absent, the record is not an incident.
1. A discrete real-world event
Something that happened: a specific occurrence with identifiable circumstances, not a trend, pattern, or ongoing condition described in the abstract. The event must be situated in time, even if the exact date requires estimation.
Qualifies: A 14-year-old in the UK died by suicide after prolonged exposure to self-harm content on Instagram.
Does not qualify: "Teen suicide rates are rising due to social media."
2. An identifiable harm recipient
The person, organization, or group who experienced the harm must be identifiable through credible reporting. Three categories qualify:
- Named individual: A specific person identified by name or sufficiently detailed description in credible reporting (e.g., Molly Russell; a Tennessee grandmother identified as Angela Lipps).
- Named organization or institution: A specific entity that suffered documented harm (e.g., the Enschede municipality's welfare system; Horizon Healthcare Services).
- Defined group constituted by the harm mechanism itself: A group whose membership is defined by the platform action or algorithmic process that caused the harm. The group must be bounded by the incident, not by pre-existing demographics alone.
Qualifies: Black applicants screened out by Workday's AI hiring tool between 2020 and 2023. The algorithm created the affected class through its discriminatory function; membership is documented through the litigation record.
Does not qualify: Teenage girls on Instagram. This is a demographic category, not a group constituted by a specific harmful platform action. It becomes valid only when tied to a specific mechanism, time period, and documented impact: for example, underage users served eating disorder content by Instagram's recommendation algorithm during the period documented by the 2021 Wall Street Journal investigation.
3. Platform causation
The digital platform or AI system must be a proximate cause or necessary enabler of the harm. The platform must be more than incidentally present: it must be a but-for cause, meaning the specific harm would not have occurred, or would not have occurred in this form or at this scale, without the platform's involvement.
Qualifies: Sextortion conducted via Instagram DMs targeting a specific minor. The platform's messaging infrastructure was the necessary vehicle.
Does not qualify: A person described their depression on Twitter. The platform was present but not causally implicated in the depression.
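The three-part test is a strict conjunction, which makes it natural to express as a predicate. A sketch of how an intake check might encode it (the field and function names are hypothetical, not the Tracker's actual intake code):

```python
from dataclasses import dataclass


@dataclass
class CandidateRecord:
    """Minimal fields the three-part test inspects (illustrative)."""
    discrete_event: bool        # a specific occurrence situated in time
    recipient_identified: bool  # named individual, named organization,
                                # or group constituted by the harm mechanism
    platform_causal: bool       # platform is a but-for cause or
                                # necessary enabler of the harm


def is_incident(record: CandidateRecord) -> bool:
    """All three criteria must hold; if any one is absent,
    the record is not an incident."""
    return (record.discrete_event
            and record.recipient_identified
            and record.platform_causal)


# A trend story ("sextortion is rising") fails the first prong,
# so it is excluded no matter how strong the causation evidence is.
trend = CandidateRecord(discrete_event=False,
                        recipient_identified=False,
                        platform_causal=True)
```

Because the test is conjunctive, there is no weighing or scoring: a record that fails any single prong is routed elsewhere in the database, as described under "What Does Not Qualify as an Incident."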
What We Mean by "Platform Mechanism"
Following the World Economic Forum's Typology of Online Harms (2023), every incident is tagged with the mechanism through which harm occurred:
| Mechanism | Description | Examples |
|---|---|---|
| Content | Harm from exposure to problematic material produced, distributed, or amplified by the platform | Algorithmic recommendation of self-harm content; AI-generated CSAM; deepfake disinformation |
| Contact | Harm from interactions with other users enabled by platform infrastructure | Grooming via DMs; sextortion; cyberbullying campaigns |
| Conduct | Harm from behaviors enabled or amplified by platform design and affordances | Coordinated harassment; platform-facilitated fraud; unauthorized surveillance via data collection |
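Like the harm dimensions, the three mechanisms are a small closed vocabulary, so each incident record can carry exactly one mechanism tag. A minimal sketch (the names are illustrative, not the Tracker's schema):

```python
from enum import Enum


class Mechanism(Enum):
    """WEF Typology of Online Harms (2023): how the harm occurred."""
    CONTENT = "exposure to problematic material"
    CONTACT = "interactions with other users"
    CONDUCT = "behaviors enabled by platform design"


# Hypothetical example: grooming via DMs is a Contact harm,
# because the platform's messaging infrastructure enabled the interaction.
incident_tags = {"inc-042": Mechanism.CONTACT}
```

Note that a single mechanism tag does not limit the harm dimensions: a Contact incident such as sextortion may still produce psychological, economic, and reputational harm.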
What Does Not Qualify as an Incident
The following are not incidents under this definition, regardless of their importance or newsworthiness. They belong elsewhere in the database or are outside its scope.
- Policy and regulatory actions: Legislation, regulation, executive orders, court rulings — policies collection
- Litigation events: Lawsuits, enforcement actions, AG investigations — litigation collection (the underlying harm they describe may qualify separately)
- Company-knew accountability stories: Platform awareness of harm patterns without a specific documented victim
- Research and audits: Academic studies, algorithmic audits, expert reviews (may inform incident records but are not incidents themselves)
- Trend and pattern reporting: "Sextortion is rising" or "deepfake fraud is increasing," without a specific victim event
- Opinion, analysis, and commentary: Editorials, explainers, advocacy pieces
- Societal and democratic harms without a discrete event: Erosion of trust, polarization, epistemic degradation (these are real and important harms recognized in the literature, but they resist the incident model; they emerge from the connections between incidents, policies, and litigation in the database)
A Note on Societal Harm
Academic literature and regulation recognize a category of social system and societal harms. Shelby et al. (2023) treat it as a standalone harm category, with sub-themes including erosion of democracy, election interference, and information harms. The EU Digital Services Act (Recital 82, Article 34) requires platforms to assess systemic risks to democratic processes, civic discourse, and public security. Digital Action's taxonomy arrives at the same substantive concern through a different route: it treats societal and democratic damage as cumulative effects of its five core harm types (disinformation, hate speech, harassment, censorship, and privacy violations) rather than as a standalone category.
Together, these frameworks confirm that harms to democratic processes, civic discourse, and information ecosystems are real, well-documented, and central to the policy landscape the Tracker serves. Where we refer to effects on institutional trust, we use the term as a recognized downstream consequence, not a formally named harm category in any of the three sources.
However, societal harms are diffuse by nature: they lack discrete events, identifiable victims, and clear start dates. The Tracker captures them not as incidents but as emergent patterns visible through the data: clusters of incidents linked to the same platform, the same harm type, and the same jurisdiction reveal systemic harm that no individual record could express alone. The junction tables connecting incidents to policies and litigation are where societal harm becomes visible.
This is a deliberate architectural choice: individual incidents stay evidentiarily tight, while systemic harm emerges from the structure of the database itself.
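This architecture can be illustrated with a toy relational sketch: tight incident rows, junction tables linking them to other collections, and cluster queries that surface the systemic pattern. All table and column names here are hypothetical, not the Tracker's actual schema:

```python
import sqlite3

# Toy schema: evidentiarily tight incident rows, with a junction table
# linking incidents to policies (litigation would be linked analogously).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE incidents (id INTEGER PRIMARY KEY, platform TEXT, harm_type TEXT);
CREATE TABLE policies  (id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE incident_policy (incident_id INTEGER, policy_id INTEGER);
""")
cur.executemany("INSERT INTO incidents VALUES (?, ?, ?)", [
    (1, "Instagram", "psychological"),
    (2, "Instagram", "psychological"),
    (3, "Workday", "discriminatory"),
])

# Systemic harm shows up as a cluster no single record expresses alone:
# same platform, same harm type, more than one incident.
cur.execute("""
    SELECT platform, harm_type, COUNT(*) AS n
    FROM incidents
    GROUP BY platform, harm_type
    HAVING n > 1
""")
print(cur.fetchall())  # → [('Instagram', 'psychological', 2)]
```

Each row stays a discrete, documented event; the aggregate query, not any single record, is what expresses the societal-level pattern.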
Key Sources
- Agrafiotis, I. et al. (2018). "A Taxonomy of Cyber-Harms." Journal of Cybersecurity, 4(1). Oxford.
- Citron, D.K. & Solove, D.J. (2022). "Privacy Harms." Boston University Law Review, 102(3), 793.
- OECD (2024). "Defining AI Incidents and Related Terms." OECD Artificial Intelligence Papers, No. 16.
- Scheuerman, M. et al. (2021). "A Framework of Severity for Harmful Content Online." arXiv:2108.04401.
- Shelby, R. et al. (2023). "Sociotechnical Harms of Algorithmic Systems: Scoping a Taxonomy for Harm Reduction." AAAI/ACM AIES 2023.
- Solove, D.J. (2006). "A Taxonomy of Privacy." University of Pennsylvania Law Review, 154(3), 477.
- World Economic Forum (2023). "Typology of Online Harms." Global Coalition for Digital Safety.