
Facial recognition system

facial recognition system has been named in 7 documented digital harm incidents, including 2 involving minors. The most common harm domain is Algorithmic Discrimination.

7 incidents · 0 fatalities · 2 minors involved · $0.0M financial harm

Documented Incidents (7)

Jul 14, 2025·Tennessee, USA

69-year-old grandmother wrongfully arrested after AI facial recognition misidentification at Tennessee retail store

A grandmother in Tennessee was wrongfully arrested due to errors in AI facial recognition technology. The incident occurred in a retail setting where the AI system incorrectly matched her face to a suspect. Law enforcement acted on the faulty identification, leading to her arrest. The case highlights concerns about the accuracy and potential for algorithmic discrimination in facial recognition systems. The wrongful arrest has raised calls for greater oversight and regulation of AI tools used in policing.

Algorithmic Discrimination · Wrongful Arrest
May 24, 2025·Salford, United Kingdom

Woman ejected from Home Bargains stores after facial recognition misidentification, leading to false shoplifting accusation

A woman was mistakenly accused of shoplifting after a facial recognition system wrongly flagged her as a thief, leading to her being ejected from two Home Bargains stores in Greater Manchester in May and June. Danielle Horan was escorted out of the stores without an initial explanation and later discovered her image had been added to a facial recognition watchlist falsely alleging she had stolen about £10 worth of items. The retail security firm Facewatch acknowledged the distress caused and stated that a review found the items had been paid for; Horan's bank records likewise confirmed the purchase. She described the experience as stressful and anxiety-inducing. Facewatch suspended the involved Home Bargains branches from using its system following the incident. The civil liberties group Big Brother Watch has reported over 35 similar cases of individuals wrongly flagged by facial recognition systems.

Algorithmic Discrimination
Apr 1, 2025·New York, United States

36-year-old Black man wrongfully arrested after facial recognition misidentification in New York leading to two days in custody

Trevis Williams, 36, was wrongfully arrested in April in New York after a facial recognition system incorrectly matched his mug shot to a suspect in a Manhattan flashing case. Despite physical and location discrepancies, Williams was arrested and spent over two days in custody before the case was dropped. Facial recognition technology, which converts faces into data points for comparison, has well-documented racial biases, with error rates significantly higher for Black and Asian faces compared to white ones. The NYPD uses facial recognition regularly, though it is supposed to serve only as an investigative lead, not as sole evidence for arrest. At least 10 similar wrongful arrests linked to facial recognition have been reported nationwide, prompting calls from civil rights groups for stricter oversight and transparency in its use. Williams’s case highlights the risks of combining flawed algorithms with unreliable eyewitness testimony, particularly impacting minority communities overrepresented in police databases.

Algorithmic Discrimination · Wrongful Arrest
Aug 14, 2023·United States

Woman eight months pregnant wrongfully arrested after AI facial recognition error in front of her children

A woman who was eight months pregnant was wrongfully arrested for carjacking due to an AI facial recognition error, according to a lawsuit. The arrest took place in front of her children, and she has filed legal action against the authorities involved. The case highlights concerns about the accuracy of AI facial recognition technology and the consequences of acting on its errors.

Algorithmic Discrimination · Wrongful Arrest · Minor
Jun 1, 2022·New Orleans, United States

29-year-old Black man wrongfully arrested after facial recognition misidentification in Georgia leading to federal lawsuit

Randal Quran Reid, a 29-year-old Black man from Georgia, filed a federal lawsuit on September 8 in Atlanta after being wrongfully arrested due to a facial recognition software misidentification. On November 25, 2022, Georgia police arrested Reid on a Louisiana warrant, claiming he had committed a crime in a state he had never visited. The lawsuit names Jefferson Parish Sheriff Joseph Lopinto and Detective Andrew Bartholomew, alleging that Bartholomew relied solely on facial recognition software to misidentify Reid from surveillance video linked to a stolen credit card purchase in New Orleans in June 2022. Reid was held in a DeKalb County jail until December 1, with no clear explanation or timeline provided. The lawsuit accuses Bartholomew of false arrest, malicious prosecution, and negligence, and claims Lopinto failed to establish proper policies for facial recognition use. At least four other Black individuals have similarly sued law enforcement over facial recognition misidentification, highlighting concerns about the technology's reliability and racial bias.

Algorithmic Discrimination · Wrongful Arrest
Jan 1, 2022·Houston, Texas

61-year-old Black man wrongfully arrested after facial recognition misidentification in Houston Sunglass Hut robbery, leading to an alleged sexual assault in jail and a lawsuit against EssilorLuxottica and Macy's

Harvey Eugene Murphy Jr., a 61-year-old man, was mistakenly identified as a robber in a January 2022 Sunglass Hut store robbery in Houston, Texas, by facial recognition software. Despite living in California at the time, Murphy was arrested in Texas when he returned to renew his driver's license and was held in jail, where he claims he was sexually assaulted. The Harris County District Attorney's office later cleared him of involvement in the robbery. Murphy is suing Sunglass Hut's parent company, EssilorLuxottica, and Macy's, alleging that faulty facial recognition technology and potential investigative bias led to his wrongful arrest and subsequent injuries. The case highlights concerns about the accuracy and bias of facial recognition systems, which have previously led to misidentifications of Black, Asian, and Latino individuals.

Algorithmic Discrimination · Wrongful Arrest
Jan 1, 2019·Detroit, MI

Black man wrongfully arrested after facial recognition misidentification in Detroit

In 2019, Robert Julian-Borchak Williams, a Black man from Detroit, was wrongfully arrested after facial recognition software incorrectly matched him to a suspect in a retail theft. He was detained for 30 hours before the error was discovered and he was released. The incident highlights the risks of flawed facial recognition technology and its disproportionate impact on Black individuals.

Algorithmic Discrimination · Minor

Linked Legislation (45)

A 1447 — Relates to the use of facial recognition and biometric information for determining probable cause
New York
A 9654 — Enacts The New York Artificial Intelligence Civil Rights Act
New York
A 3265 — Enacts The New York Artificial Intelligence Bill Of Rights
New York
A 9219 — Requires Artificial Intelligence Technology Used In Professional Fields To Be Developed And Maintained In Consultation With Experts In Such Fields
New York
S 7599 — Relates to Automated Decision-Making by Government Agencies
New York
A 9449 — Relates to transparency and safety requirements for developers of artificial intelligence models
New York
S 3226 — Relates to Prohibiting Facial Recognition Technology to Be Used in Connection with an Officer Camera
New York
A 7172 — Relation to the regulation of the use of artificial intelligence and facial recognition technology in criminal investigations
New York
A 9430 — Enacts The Legislative Oversight Of Automated Decision-Making In Government Act (LOADinG Act)
New York
S 8390 — Relates to the admissibility of evidence created or processed by artificial intelligence
New York
SB 1161 — Artificial Intelligence Transparency Act
Virginia
SB 6284 — Providing Consumer Protections For Artificial Intelligence Systems
Washington
SB 1249 — An Act Addressing Innovations In Artificial Intelligence
Connecticut
SB 719 — Department Of Technology: Inventory: High-Risk Automated Decision Systems
California
SB 6120 — Regulating High-Risk Artificial Intelligence System Development, Deployment, And Use
Washington
HB 1168 — Increasing Transparency In Artificial Intelligence
Washington
H 792 — An Act Relating To Liability Standards For Developers And Deployers Of Artificial Intelligence Systems
Vermont
H 341 — An Act Relating To Creating Oversight And Safety Standards For Developers And Deployers Of Inherently Dangerous Artificial Intelligence Systems
Vermont
H 855 — An Act Relating To Defenses In Civil Actions Based On Harm Caused By Artificial Intelligence
Vermont
H 711 — An Act Relating To Creating Oversight And Liability Standards For Developers And Deployers Of Inherently Dangerous Artificial Intelligence Systems
Vermont
SB 1214 — High-Risk Artificial Intelligence; Development, Deployment, And Use By Public Bodies, Report
Virginia
HB 1642 — Artificial Intelligence-Based Tool; Definition, Use Of Tool
Virginia
HB 747 — Artificial Intelligence Developer Act
Virginia
HB 249 — Law-Enforcement Agencies; Use Of Generative Artificial Intelligence And Machine Learning Systems
Virginia
HB 2046 — High-Risk Artificial Intelligence; Development, Deployment, And Use By Public Bodies, Report
Virginia
HB 2554 — Artificial Intelligence Transparency Act
Virginia
HB 7158 — An Act Relating To State Affairs And Government -- Artificial Intelligence Accountability Act
Rhode Island
HB 5123 — An Act Relating To State Affairs And Government -- Artificial Intelligence Accountability Act
Rhode Island
HB 7521 — An Act Relating To State Affairs And Government -- Automated Decision Tools -- Artificial Intelligence
Rhode Island
SB 1090 — An Act Providing For Disclosures And Safeguards Relating To The Use Of Artificial Intelligence; And Imposing Duties On The Attorney General
Pennsylvania
HB 1533 — An Act Amending Title 18 (Crimes And Offenses) Of The Pennsylvania Consolidated Statutes, In Culpability, Providing For Liability For Deployment Of Artificial Intelligence System
Pennsylvania
HB 1625 — An Act Establishing The Keystone Artificial Intelligence Authority Within The Department Of Community And Economic Development; Providing For The Duties Of Authority And Its Governing Board; Providing For Duties Of Other Entities; Establishing T
Pennsylvania
HB 3771 — Relating To The Regulation Of Artificial Intelligence
Oregon
HB 2016 — Evidence; Artificial Intelligence Expert Testimony; Effective Date
Oklahoma
HB 1899 — Artificial Intelligence Act Of 2025
Oklahoma
HB 1917 — Artificial Intelligence Act of 2025
Oklahoma
HB 3293 — Artificial Intelligence Technology; Creating The Oklahoma Artificial Intelligence Act Of 2024; Effective Date
Oklahoma
HB 3835 — Ethical Artificial Intelligence Act
Oklahoma
HB 1916 — Artificial Intelligence; Responsible Deployment Of AI Systems Act; AI Council; AI Regulatory Sandbox Program; Artificial Intelligence Workforce Development Program; Effective Date
Oklahoma
S 1169 — Relates to the development and use of certain artificial intelligence systems
New York
S 2487 — Enacts The New York Artificial Intelligence Ethics Commission Act
New York
A 8833 — Establishes Understanding Artificial Intelligence Responsibility Act
New York
A 3356 — Relates to enacting the 'Advanced Artificial Intelligence Licensing Act'
New York
A 7278 — Prohibits The Use Of Certain Artificial Intelligence Models
New York
A 9253 — Relates to disclosure of the use of artificial intelligence by law enforcement agencies
New York

By Harm Domain

Algorithmic Discrimination: 7