Company · United States · Est. 1994

Amazon

Amazon has been named in 14 documented digital harm incidents, including 1 fatality and 2 involving minors. The most common harm domain is Algorithmic Discrimination, followed by Privacy & Surveillance.

Incidents: 14
Fatalities: 1
Minors involved: 2
Financial harm

Documented Incidents (14)
Feb 14, 2026·Los Angeles, California, USA

Ringleader sentenced to 20 years for $73.6 million romance‑fraud and money‑laundering scheme

A 42‑year‑old man, Daren Li, was sentenced in absentia to 20 years in prison for leading a romance‑fraud operation that laundered about $73.6 million from elderly victims. The scheme used dating and professional platforms such as BLK and Tinder, and moved funds through channels including Cash App and Bitcoin ATMs, to convince victims like Kate Kleinert, Beth Hyland and Jackie Crenshaw to send gift cards, loans and cryptocurrency. Officials from the FBI and the FTC highlighted the case as an example of the growing link between romance scams and larger crypto money‑muling operations, which have caused severe financial and emotional harm to victims.

Fraud & Financial
Dec 1, 2025·New Delhi, India

Gautam Gambhir files lawsuit seeking ₹2.5 crore after deepfake used to impersonate him

India's cricket head coach Gautam Gambhir filed a civil suit in the Delhi High Court in late 2025, seeking ₹2.5 crore in damages for the unauthorized use of his name, image, and voice in deepfake content. The case involves 16 defendants, including social media accounts, e-commerce platforms like Amazon and Flipkart, and tech companies such as Meta, Google, and YouTube. Gambhir's legal team claims that fabricated videos, including one falsely showing his resignation, have circulated widely on social media and been used for financial gain. The case is being heard under the Copyright Act, 1957, the Trade Marks Act, 1999, and the Commercial Courts Act, 2015, and seeks immediate removal of the content and a permanent injunction against future misuse. Legal experts suggest the case could set a precedent for protecting digital personality rights in India amid rising concerns over AI-driven fraud and misinformation.

Fraud & Financial · Deepfake Fraud
Nov 11, 2025·Columbus, United States

Victims across the US defrauded by AI voice cloning scams impersonating family members

Patty Greiner lost $15,000 after receiving a text claiming her Amazon account was hacked and later being contacted by individuals impersonating IRS agents and law enforcement. Scammers are using AI to clone voices by extracting personal information from social media platforms like TikTok, Instagram, and Facebook. Cybersecurity expert Dave Hatter demonstrated how easily a voice can be cloned using free software, warning that this could lead to a surge in crime. Impersonators range from individuals to organized criminal gangs and nation-state actors from countries like China, Russia, and Iran. Experts advise not to use links or numbers provided by suspicious callers and to verify the legitimacy of requests directly with the organization or person involved.

Fraud & Financial · Voice Cloning Fraud
Nov 4, 2025

Amazon workers with disabilities file suit against tech giant alleging systematic discrimination

Amazon workers with disabilities filed a lawsuit against the company, alleging systematic discrimination in hiring and employment practices. The workers claim that Amazon failed to accommodate their disabilities and subjected them to unfair treatment. The lawsuit seeks to address what they describe as ongoing discriminatory behavior.

Algorithmic Discrimination · Hiring Bias
Aug 4, 2025·Michigan, USA

Harper v. Sirius XM AI Hiring Discrimination Lawsuit

On August 4, 2025, plaintiff Arshon Harper filed a class‑action suit in the U.S. District Court for the Eastern District of Michigan against Sirius XM Radio, alleging that the company’s AI‑driven applicant screening tool systematically rejected Black candidates. The complaint alleges that 149 of Harper’s 150 applications were denied and that the tool relied on proxies such as zip codes and schools, constituting disparate treatment and disparate impact under Title VII. The lawsuit seeks class certification for other Black applicants rejected since January 2024 and aims to halt use of the allegedly discriminatory AI system. It adds to a growing wave of litigation targeting algorithmic decision‑making in employment.

Algorithmic Discrimination
Jun 1, 2025·United States

Amazon faces EEOC and NLRB complaints alleging AI-based disability accommodation denials and return‑to‑office mandate violate ADA

Two Amazon employees have filed discrimination complaints with the Equal Employment Opportunity Commission and the National Labor Relations Board, asserting that the company’s strict return‑to‑office policy and its use of artificial‑intelligence tools to evaluate disability accommodation requests violate the Americans with Disabilities Act. The employees claim the AI system lacks nuance, leading to unjust denials, and that internal Slack groups discussing accommodations have been censored. Amazon maintains that its Disability and Leave Services team provides personalized support and that remote work is permitted where appropriate. The dispute highlights growing concerns about algorithmic decision‑making in workplace accommodation processes.

Algorithmic Discrimination · Discrimination
Feb 20, 2025

Stalkerware apps Cocospy and Spyic data breach exposes 2.65 million user accounts

Security researchers discovered a vulnerability in the stalkerware apps Cocospy and Spyic that allowed anyone to download personal data, including messages, photos, call logs and the email addresses of registered users. By exploiting the flaw, they scraped roughly 1.81 million Cocospy and 880,000 Spyic email addresses (about 2.65 million unique accounts) and shared the list with the Have I Been Pwned service. The apps route traffic through Cloudflare and store data on Amazon Web Services, and the breach is linked to the China‑based developer 711.icu; the operators have not responded to requests for comment and the bug remains unpatched.

Privacy & Surveillance
Oct 24, 2024·Prestatyn, North Wales

Teen lost in a toxic online world killed his mother with a hammer after chilling AI chat

Tristan Roberts, an 18-year-old with autism and ADHD, killed his mother, Angela Shellis, in Prestatyn, North Wales, on October 24. Roberts, who was deeply involved in violent online communities and had expressed misogynistic views on Discord, became fixated on blaming his mother for his personal struggles. He used an AI tool to seek advice on how to commit the murder, which he carried out using a hammer purchased online. The attack lasted over four hours and was recorded by Roberts. He was later arrested at his home and sentenced to life in prison. The case has raised concerns about the influence of online platforms and AI in facilitating violent acts.

Self-Harm & Suicide · Self-Harm · Fatality
May 31, 2024·California

Workday Hit With Lawsuit Claiming Its AI Shuts Out Black, Disabled And Older Jobseekers (International Business Times UK)

Workday, a provider of HR software, is facing a collective action lawsuit alleging that its AI-based job-screening system discriminates against Black, disabled, and older jobseekers. The lawsuit, filed in California, was expanded into a collective action following a ruling by a district judge in late 2025. The plaintiffs, including Derek Mobley and four others over the age of 40, claim they were repeatedly rejected from hundreds of job applications through Workday’s platform, often within minutes of applying. Court filings allege that the AI disproportionately disqualifies individuals over 40 and reinforces existing biases by learning from historical hiring data. Workday denies the claims, calling the ruling preliminary and based on allegations rather than evidence. The case has drawn attention from civil rights advocates and could set legal precedents for AI accountability in hiring.

Algorithmic Discrimination · Hiring Bias
May 1, 2024·Wisconsin, United States

Man generates and distributes AI-generated child sexual abuse imagery using open-source model

U.S. federal prosecutors are increasingly targeting individuals who use artificial intelligence (AI) to generate child sex abuse imagery, citing concerns that the technology could lead to a surge in illicit material. In 2024, the U.S. Justice Department filed two criminal cases against defendants accused of using generative AI systems to produce explicit images of children. One defendant, Steven Anderegg, was indicted in May for allegedly using the Stable Diffusion AI model to generate and share explicit images of children, while another, Seth Herrera, a U.S. Army soldier, was charged with using AI chatbots to create violent sexual abuse imagery. Both have pleaded not guilty, with Anderegg seeking to dismiss the charges on constitutional grounds. The National Center for Missing and Exploited Children reported receiving about 450 monthly reports related to AI-generated child exploitation material, though this is a small fraction of overall reports. Legal experts note that while existing laws cover explicit depictions of real children, the legal status of AI-generated imagery remains unclear, with past rulings limiting the criminalization of computer-generated child abuse images. Advocacy groups have secured commitments from major AI companies to avoid training models on child sex abuse imagery and to monitor platforms to prevent its spread.

Child Safety · CSAM · Minor
Apr 1, 2024·Illinois, United States

Amazon, Target, and other retailers collect customer biometric data without consent, face class action lawsuits

Consumers filed class action lawsuits against Amazon.com Services, Target Corp., Wingstop, Domino’s Pizza, and ConverseNow Technologies for allegedly violating Illinois’ Biometric Information Privacy Act (BIPA). The lawsuits claim the companies collected biometric data—such as facial scans, voiceprints, and fingerprints—without obtaining consent or properly scheduling destruction of the data. The lawsuits were filed in Illinois, where BIPA requires companies to inform individuals, obtain consent, and establish data retention policies for biometric information. Amazon was accused of using facial recognition in timecard systems, while Target was alleged to have used cameras to collect customers’ biometric data. Wingstop and Domino’s Pizza were accused of using AI to record customers’ voiceprints during phone orders. In related settlements, BNSF Railway Co. agreed to pay $75 million and Graphic Packaging International agreed to pay nearly $1 million to resolve claims of BIPA violations involving employee biometric data collection.

Privacy & Surveillance · Unauthorized Surveillance
Aug 3, 2020·Illinois

Ring collects customer facial biometric data without consent, class action survives dismissal

A class action lawsuit was filed against Amazon’s Ring video doorbell service by plaintiff Michelle Wise, alleging violations of the Illinois Biometric Information Privacy Act (BIPA) due to the collection and storage of facial biometric data without consent. The lawsuit, filed in federal court in Seattle, claims Ring captures and stores facial recognition data from visitors and passersby without their knowledge or consent. On August 3, 2020, U.S. District Judge John C. Coughenour denied Ring’s motion to dismiss the case, stating it was too early to dismiss given the legal uncertainty surrounding the application of BIPA in such cases. The lawsuit also alleges that Ring shares video footage with employees in an unencrypted manner and previously partnered with law enforcement to match faces with databases, raising privacy concerns. The case follows a precedent set by a $550 million Facebook settlement related to similar biometric data practices.

Privacy & Surveillance · Unauthorized Surveillance · Minor
Jan 18, 2020·New York, USA

Clearview AI's Facial Recognition App and Privacy Concerns Exposed by New York Times

Clearview AI, a secretive company founded by Hoan Ton-That and Richard Schwartz, developed a facial recognition app that scrapes over 3 billion images from social media and other websites. The app is used by over 600 law enforcement agencies to solve crimes but raises serious privacy concerns. The New York Times exposed the company's operations, highlighting the potential threat to privacy as we know it.

Privacy & Surveillance
Jan 1, 2015·Seattle, WA, United States

Amazon Scraps AI Recruiting Tool Found to Be Biased Against Women

In 2015, Amazon developed an AI recruiting tool to automate resume evaluation but discovered it exhibited bias against women. The system was trained on historical resumes, predominantly from men, leading the AI to penalize resumes with terms like 'women's'. Amazon ultimately scrapped the tool due to these discriminatory outcomes.

Algorithmic Discrimination

Linked Legislation (38)
SB 5838 — Establishing An Artificial Intelligence Task Force (Washington)
SB 332 — Artificial Intelligence Revisions (Utah)
HB 1514 — Employment Decisions; Automated Decision Systems, Civil Penalty (Virginia)
SB 2499 — An Act Relating To Labor And Labor Relations -- Artificial Intelligence Use And Fair Employment Practices (Rhode Island)
S 8831 — Relates to the use of automated employment decision-making tools and artificial intelligence systems by certain state and local entities; repealer (New York)
AB 2027 — Worker Data: Prohibitions: Artificial Intelligence (California)
SB 719 — Department Of Technology: Inventory: High-Risk Automated Decision Systems (California)
SB 6120 — Regulating High-Risk Artificial Intelligence System Development, Deployment, And Use (Washington)
HB 1951 — Promoting Ethical Artificial Intelligence By Protecting Against Algorithmic Discrimination (Washington)
SB 365 — Fostering Access, Innovation, And Responsibility In Artificial Intelligence Act (Virginia)
HB 1917 — Artificial Intelligence Act of 2025 (Oklahoma)
HB 1899 — Artificial Intelligence Act Of 2025 (Oklahoma)
S 1169 — Relates to the development and use of certain artificial intelligence systems (New York)
A 9219 — Requires Artificial Intelligence Technology Used In Professional Fields To Be Developed And Maintained In Consultation With Experts In Such Fields (New York)
S 8928 — Enacts The Artificial Intelligence Workforce Impact Transparency Act (New York)
A 7278 — Prohibits The Use Of Certain Artificial Intelligence Models (New York)
A 8833 — Establishes Understanding Artificial Intelligence Responsibility Act (New York)
A 8195 — Relates to enacting the "Advanced Artificial Intelligence Licensing Act" (New York)
AB 2148 — Local Educational Agency Employees: Public Postsecondary Education Employees: Artificial Intelligence, Automated Decision Systems, And Educational Technology: Discipline (California)
SB 1161 — Artificial Intelligence Transparency Act (Virginia)
H 792 — An Act Relating To Liability Standards For Developers And Deployers Of Artificial Intelligence Systems (Vermont)
H 341 — An Act Relating To Creating Oversight And Safety Standards For Developers And Deployers Of Inherently Dangerous Artificial Intelligence Systems (Vermont)
S 8706 — Requires Covered Businesses To Annually Report To The Department Of Labor Regarding The Impact Of Artificial Intelligence On Hiring And The Nature Of Artificial Intelligence Use (New York)
A 9581 — Requires Covered Businesses To Annually Report To The Department Of Labor Regarding The Impact Of Artificial Intelligence On Hiring And The Nature Of Artificial Intelligence Use (New York)
AB 1405 — Artificial Intelligence: Auditors: Enrollment (California)
SB 5356 — Establishing Guidelines For Government Procurement And Use Of Automated Decision Systems In Order To Protect Consumers, Improve Transparency, And Create More Market Predictability (Washington)
DEFIANCE Act of 2025 (HR 3562 / S.1837) — 119th Congress (United States)
SB 6184 — Concerning Deepfake Artificial Intelligence-Generated Pornographic Material Involving Minors (Washington)
S 3699 — Enacts The "Facial Recognition Technology Study Act" (New York)
A 8788 — Enacts The "Facial Recognition Technology Study Act" (New York)
A 6031 — Establishes The Biometric Privacy Act (New York)
S 1422 — Establishes The Biometric Privacy Act (New York)
A 1447 — Relates to the use of facial recognition and biometric information for determining probable cause (New York)
S 4457 — Establishes The Biometric Privacy Act (New York)
A 2642 — Enacts The "Facial Recognition Technology Study Act" (New York)
A 1362 — Establishes The Biometric Privacy Act (New York)
S 4824 — Enacts The "Facial Recognition Technology Study Act" (New York)
SB 730 — An Act Requiring Disclosure Of The Use Of Facial Recognition Technology In Public Spaces (Connecticut)

By Harm Domain

Algorithmic Discrimination: 5
Privacy & Surveillance: 4
Fraud & Financial: 3
Self-Harm & Suicide: 1
Child Safety: 1