Amazon
Amazon has been named in 14 documented digital harm incidents, including 1 fatality and 2 involving minors. The most common harm domain is Algorithmic Discrimination, followed by Privacy & Surveillance.
Documented Incidents
Ringleader sentenced to 20 years for $73.6 million romance‑fraud and money‑laundering scheme
A 42‑year‑old man, Daren Li, was sentenced in absentia to 20 years in prison for leading a romance‑fraud operation that laundered about $73.6 million from elderly victims. The operation recruited victims such as Kate Kleinert, Beth Hyland and Jackie Crenshaw through dating and professional platforms including BLK and Tinder, persuading them to send gift cards, loans and cryptocurrency via channels such as Cash App and Bitcoin ATMs. Officials from the FBI and the FTC highlighted the case as an example of the growing link between romance scams and larger crypto money‑muling operations, which have caused severe financial and emotional harm to victims.
Gautam Gambhir files lawsuit seeking ₹2.5 crore after deepfake used to impersonate him
India's cricket head coach Gautam Gambhir filed a civil suit in the Delhi High Court in late 2025, seeking ₹2.5 crore in damages for the unauthorized use of his name, image, and voice in deepfake content. The case involves 16 defendants, including social media accounts, e-commerce platforms like Amazon and Flipkart, and tech companies such as Meta, Google, and YouTube. Gambhir's legal team claims that fabricated videos, including one falsely showing his resignation, have circulated widely on social media and been used for financial gain. The case is being heard under the Copyright Act, 1957, the Trade Marks Act, 1999, and the Commercial Courts Act, 2015, and seeks immediate removal of the content and a permanent injunction against future misuse. Legal experts suggest the case could set a precedent for protecting digital personality rights in India amid rising concerns over AI-driven fraud and misinformation.
Victims across the US defrauded by AI voice cloning scams impersonating family members
Patty Greiner lost $15,000 after receiving a text claiming her Amazon account had been hacked and then being contacted by individuals impersonating IRS agents and law enforcement. Scammers are using AI to clone voices, harvesting voice samples and personal details from social media platforms such as TikTok, Instagram, and Facebook. Cybersecurity expert Dave Hatter demonstrated how easily a voice can be cloned with free software, warning that the technique could fuel a surge in such crimes. Impersonators range from lone individuals to organized criminal gangs and nation‑state actors from countries including China, Russia, and Iran. Experts advise against using links or phone numbers provided by suspicious callers and recommend verifying any request directly with the organization or person involved.
Amazon workers with disabilities file suit against tech giant alleging systematic discrimination
Amazon workers with disabilities filed a lawsuit against the company, alleging systematic discrimination in hiring and employment practices. The workers claim that Amazon failed to accommodate their disabilities and subjected them to unfair treatment. The lawsuit seeks to address what they describe as ongoing discriminatory behavior.
Harper v. Sirius XM AI Hiring Discrimination Lawsuit
On August 4, 2025, plaintiff Arshon Harper filed a class‑action suit in the U.S. District Court for the Eastern District of Michigan against Sirius XM Radio, alleging that the company’s AI‑driven applicant screening tool systematically rejected Black candidates. The complaint alleges that 149 of Harper’s 150 applications were denied after the tool relied on proxies such as zip codes and schools, constituting disparate treatment and disparate impact under Title VII. The lawsuit seeks class certification for other Black applicants rejected since January 2024 and an order halting use of the allegedly discriminatory AI system. It adds to a growing wave of litigation targeting algorithmic decision‑making in employment.
Amazon faces EEOC and NLRB complaints alleging AI-based disability accommodation denials and return‑to‑office mandate violate ADA
Two Amazon employees have filed discrimination complaints with the Equal Employment Opportunity Commission and the National Labor Relations Board, asserting that the company’s strict return‑to‑office policy and its use of artificial‑intelligence tools to evaluate disability accommodation requests violate the Americans with Disabilities Act. The employees claim the AI system lacks nuance, leading to unjust denials, and that internal Slack groups discussing accommodations have been censored. Amazon maintains that its Disability and Leave Services team provides personalized support and that remote work is permitted where appropriate. The dispute highlights growing concerns about algorithmic decision‑making in workplace accommodation processes.
Stalkerware apps Cocospy and Spyic data breach exposes 2.65 million user accounts
Security researchers discovered a vulnerability in the stalkerware apps Cocospy and Spyic that allowed anyone to download personal data, including messages, photos, call logs and the email addresses of registered users. By exploiting the flaw, they scraped roughly 1.81 million Cocospy and 880,000 Spyic email addresses (about 2.65 million unique accounts) and shared the list with the Have I Been Pwned service. The apps route traffic through Cloudflare and store data on Amazon Web Services, and the breach is linked to the China‑based developer 711.icu; the operators have not responded to requests for comment and the bug remains unpatched.
Teen immersed in violent online communities kills his mother with a hammer after seeking advice from an AI tool
Tristan Roberts, an 18-year-old with autism and ADHD, killed his mother, Angela Shellis, in Prestatyn, North Wales, on October 24. Roberts, who was deeply involved in violent online communities and had expressed misogynistic views on Discord, became fixated on blaming his mother for his personal struggles. He used an AI tool to seek advice on how to commit the murder, which he carried out using a hammer purchased online. The attack lasted over four hours and was recorded by Roberts. He was later arrested at his home and sentenced to life in prison. The case has raised concerns about the influence of online platforms and AI in facilitating violent acts.
Workday Hit With Lawsuit Claiming Its AI Shuts Out Black, Disabled And Older Jobseekers - International Business Times UK
Workday, a provider of HR software, is facing a collective action lawsuit alleging that its AI-based job-screening system discriminates against Black, disabled, and older jobseekers. The lawsuit, filed in California, was expanded into a collective action following a ruling by a district judge in late 2025. The plaintiffs, including Derek Mobley and four others over the age of 40, claim they were repeatedly rejected from hundreds of job applications through Workday’s platform, often within minutes of applying. Court filings allege that the AI disproportionately disqualifies individuals over 40 and reinforces existing biases by learning from historical hiring data. Workday denies the claims, calling the ruling preliminary and based on allegations rather than evidence. The case has drawn attention from civil rights advocates and could set legal precedents for AI accountability in hiring.
Man generates and distributes AI-generated child sexual abuse imagery using open-source model
U.S. federal prosecutors are increasingly targeting individuals who use artificial intelligence (AI) to generate child sex abuse imagery, citing concerns that the technology could lead to a surge in illicit material. In 2024, the U.S. Justice Department filed two criminal cases against defendants accused of using generative AI systems to produce explicit images of children. One defendant, Steven Anderegg, was indicted in May for allegedly using the Stable Diffusion AI model to generate and share explicit images of children, while another, Seth Herrera, a U.S. Army soldier, was charged with using AI chatbots to create violent sexual abuse imagery. Both have pleaded not guilty, with Anderegg seeking to dismiss the charges on constitutional grounds. The National Center for Missing and Exploited Children reported receiving about 450 monthly reports related to AI-generated child exploitation material, though this is a small fraction of overall reports. Legal experts note that while existing laws cover explicit depictions of real children, the legal status of AI-generated imagery remains unclear, with past rulings limiting the criminalization of computer-generated child abuse images. Advocacy groups have secured commitments from major AI companies to avoid training models on child sex abuse imagery and to monitor platforms to prevent its spread.
Amazon, Target, and other retailers collect customer biometric data without consent, face class action lawsuits
Consumers filed class action lawsuits against Amazon.com Services, Target Corp., Wingstop, Domino’s Pizza, and ConverseNow Technologies for allegedly violating Illinois’ Biometric Information Privacy Act (BIPA). The lawsuits claim the companies collected biometric data—such as facial scans, voiceprints, and fingerprints—without obtaining consent or properly scheduling destruction of the data. The lawsuits were filed in Illinois, where BIPA requires companies to inform individuals, obtain consent, and establish data retention policies for biometric information. Amazon was accused of using facial recognition in timecard systems, while Target was alleged to have used cameras to collect customers’ biometric data. Wingstop and Domino’s Pizza were accused of using AI to record customers’ voiceprints during phone orders. In related settlements, BNSF Railway Co. agreed to pay $75 million and Graphic Packaging International agreed to pay nearly $1 million to resolve claims of BIPA violations involving employee biometric data collection.
Ring collects customer facial biometric data without consent, class action survives dismissal
A class action lawsuit was filed against Amazon’s Ring video doorbell service by plaintiff Michelle Wise, alleging violations of the Illinois Biometric Information Privacy Act (BIPA) due to the collection and storage of facial biometric data without consent. The lawsuit, filed in federal court in Seattle, claims Ring captures and stores facial recognition data from visitors and passersby without their knowledge or consent. On August 3, 2020, U.S. District Judge John C. Coughenour denied Ring’s motion to dismiss the case, stating it was too early to dismiss given the legal uncertainty surrounding the application of BIPA in such cases. The lawsuit also alleges that Ring shares video footage with employees in an unencrypted manner and previously partnered with law enforcement to match faces with databases, raising privacy concerns. The case follows a precedent set by a $550 million Facebook settlement related to similar biometric data practices.
Clearview AI's Facial Recognition App and Privacy Concerns Exposed by New York Times
Clearview AI, a secretive company founded by Hoan Ton-That and Richard Schwartz, developed a facial recognition app backed by a database of more than 3 billion images scraped from social media and other websites. The app has been used by over 600 law enforcement agencies to help solve crimes, but its unregulated scraping and matching raise serious privacy concerns. A New York Times investigation exposed the company's operations, warning that the technology could end privacy as we know it.
Amazon Scraps AI Recruiting Tool Found to Be Biased Against Women
In 2015, Amazon developed an AI recruiting tool to automate resume evaluation but discovered it exhibited bias against women. The system was trained on historical resumes, predominantly from men, leading the AI to penalize resumes with terms like 'women's'. Amazon ultimately scrapped the tool due to these discriminatory outcomes.