Clearview AI
Clearview AI has been named in 8 documented digital harm incidents, including one involving a minor. The most common harm domain is Privacy & Surveillance, followed by Algorithmic Discrimination.
Documented Incidents
Woman Wrongfully Jailed After Facial Recognition Misidentification in Tennessee
In July 2025, 50‑year‑old Angela Lipps was arrested by U.S. Marshals in Tennessee after facial‑recognition software mistakenly identified her as a suspect in a North Dakota bank‑fraud case. A detective linked her social‑media profile and driver’s license to the suspect, leading to her extradition and multiple charges. Lipps proved she was in Tennessee during the crimes, resulting in the dismissal of charges and her release after nearly six months in custody, without compensation. The incident underscores concerns about wrongful arrests caused by algorithmic errors.
NYC man wrongfully arrested after Clearview AI facial recognition match
Trevis Williams was detained by the NYPD on suspicion of a sex crime after a false match generated by Clearview AI's facial‑recognition system. Cell‑phone location data later proved he was miles away from the alleged crime scene, leading to the dismissal of charges after two days in custody. The incident prompted legal challenges from the Legal Aid Society and criticism from civil‑rights groups, who called for stricter oversight and a ban on police use of the technology. The case also highlighted alleged cooperation between the NYPD and the FDNY to circumvent facial‑recognition regulations.
Angela Lipps Wrongfully Arrested After Clearview AI Misidentification
Angela Lipps was wrongfully arrested and detained for months after a facial‑recognition system by Clearview AI incorrectly identified her as a suspect in a North Dakota bank‑fraud case. The error led to her being held in Tennessee custody from mid‑2025 until her release, when the charges were dismissed. The incident highlights algorithmic discrimination and the risk of relying on unverified AI identification.
Clearview AI biometric privacy class-action settlement approved in Illinois
In March 2024 a federal judge in the Northern District of Illinois approved a settlement of a nationwide class‑action lawsuit against facial‑recognition firm Clearview AI for alleged violations of the Illinois Biometric Information Privacy Act and related statutes. The agreement grants the class a 23% equity stake in Clearview, valued at roughly $51.75 million, to be paid upon trigger events such as an IPO or liquidation. Although attorneys general from 22 states objected, citing a lack of injunctive relief, the settlement was upheld, and Vermont subsequently re‑filed its own lawsuit under state consumer‑protection law.
Detroit woman sues police over wrongful arrest tied to faulty facial recognition
In January 2024, LaDonna Crutchfield was arrested in Detroit after police mistakenly linked her to an attempted murder; her suit alleges facial recognition was involved, while the department maintains that no such search was actually run. The arrest, made in front of her children, involved handcuffing, fingerprinting, and DNA sampling before she was released once the error was uncovered. Crutchfield filed a federal lawsuit against the Detroit Police Department, with her attorneys alleging algorithmic discrimination, emotional distress, and wrongful arrest. The case highlights concerns over the use of facial recognition technology in law enforcement.
Georgia man wrongfully arrested after Clearview AI facial recognition error settles for $200K
In November 2022, Randal “Quran” Reid, a Georgia resident, was arrested in Atlanta after the Jefferson Parish Sheriff’s Office used a Clearview AI facial‑recognition match to link him to a purse theft in Louisiana. The match was treated as a credible identification without independent verification, leading to Reid’s detention for six days before phone records proved he was in Georgia at the time. Reid sued for false arrest and constitutional violations, and the sheriff’s office settled the civil‑rights lawsuit in May 2024 for $200,000. The case underscores the risks of relying on biometric surveillance tools without proper oversight.
Kohl's collects customer biometric data via facial recognition without consent, faces class action
A class action lawsuit was filed in April 2022 in the U.S. District Court for the Northern District of Illinois by Terrell Terry against Kohl’s Inc. The lawsuit alleges that Kohl’s collects and stores customer biometric data without consent or notification, violating the Illinois Biometric Information Privacy Act (BIPA). The suit claims Kohl’s uses advanced video surveillance systems and software from Clearview AI to capture and match biometric data with facial scans in a large database. The plaintiff seeks to represent an Illinois class of consumers affected by the alleged unauthorized collection and use of biometric data. The case, Terry v. Kohl’s Inc., Case No. 1:22-cv-04625, requests statutory damages, injunctive relief, and a jury trial. The plaintiff is represented by multiple law firms, including Scott+Scott Attorneys at Law LLP and Milberg Coleman Bryson Phillips Grossman, PLLC.
Clearview AI's Facial Recognition App and Privacy Concerns Exposed by New York Times
Clearview AI, a secretive company founded by Hoan Ton-That and Richard Schwartz, developed a facial recognition app built on more than 3 billion images scraped from social media and other websites. The app has been used by over 600 law enforcement agencies to identify suspects, but its unconsented mass data collection raises serious privacy concerns. A New York Times investigation exposed the company's operations, warning that the technology could end privacy as we know it.