Black man wrongfully arrested after facial recognition misidentification in Maryland bus assault case

Mar 26, 2022
Maryland, United States
2 sources

Summary

In March 2022, Alonzo Sawyer was wrongfully arrested in Maryland after facial recognition technology incorrectly matched him to surveillance footage of a bus assault. Despite multiple witnesses, including the victim herself, stating that he was not the perpetrator, Sawyer was held in jail for nine days before being released. The case has heightened concerns about law enforcement's use of facial recognition and prompted calls for new regulations to prevent wrongful arrests; experts and the Sawyer family are seeking accountability and policy changes to address algorithmic discrimination and confirmation bias in AI systems. In related research, a Georgia Tech experiment led by researchers including Matthew Gombolay showed a robot acting out racist behavior, selecting images of Black individuals as criminals, after being trained on biased internet data, demonstrating how AI systems can inherit and reinforce societal biases when trained on unbalanced data.

Incident Details

Domain
Algorithmic Discrimination

Automated systems that produce discriminatory outcomes based on protected characteristics.

Harm Types
Wrongful Arrest
Discrimination

Differential treatment or outcomes based on protected characteristics.

Mechanism
Conduct
Recipient
Group
Black individuals who are wrongfully identified or targeted by facial recognition technology due to algorithmic bias
Dimensions
Discriminatory
Autonomy

Who Was Affected

Age
Adult
Gender
Male
Group
Racial/Ethnic Minority