ACLU
ACLU has been named in 4 documented digital harm incidents. The most common harm domain is Algorithmic Discrimination, followed by Privacy & Surveillance.
Documented Incidents
ACLU Files Discrimination Complaint Against HireVue AI Interview Tool Over Deaf Indigenous Applicant
On March 19, 2025, the ACLU of Colorado filed a discrimination complaint with the EEOC and the Colorado Civil Rights Division against Intuit and its AI vendor HireVue. The complaint alleges that HireVue’s video interview platform is inaccessible to deaf users and yields worse outcomes for non‑white speakers, violating Title VII, the ADA, and state anti‑discrimination laws. The filing highlights growing regulatory scrutiny of AI‑driven hiring tools and potential liability for companies that deploy biased algorithms.
Clearview AI biometric privacy class-action settlement approved in Illinois
In March 2025 a federal judge in the Northern District of Illinois granted final approval to a settlement of a nationwide class‑action lawsuit against facial‑recognition firm Clearview AI for alleged violations of the Illinois Biometric Information Privacy Act and related statutes. The agreement grants the class a 23% equity stake in Clearview, valued at roughly $51.75 million, to be paid upon trigger events such as an IPO or liquidation. Although attorneys general from 22 states objected, citing the lack of injunctive relief, the settlement was upheld, and Vermont subsequently re‑filed its own lawsuit under state consumer‑protection law.
Wrongful Arrest: Robert Williams — Detroit Police Department
The Detroit Police Department (DPD) used facial recognition technology to falsely identify Robert Williams as the suspect in a 2018 shoplifting theft, leading to his wrongful arrest in January 2020. The arrest caused significant emotional distress and legal fees, and in 2024 the DPD reached a settlement with Williams and the ACLU of Michigan. The settlement included monetary damages, a ban on arrests based solely on facial recognition matches, and policy reforms to prevent future misuse.
Amazon Scraps AI Recruiting Tool Found to Be Biased Against Women
Beginning in 2014, Amazon developed an AI recruiting tool to automate resume evaluation, but by 2015 it discovered the system exhibited bias against women. The model was trained on historical resumes submitted predominantly by men, leading it to penalize resumes containing terms like "women's". Amazon ultimately scrapped the tool because of these discriminatory outcomes.