Microsoft AI Chatbot Tay Posts Racist and Offensive Content on Twitter
Summary
In March 2016, Microsoft launched an AI chatbot named Tay on Twitter to engage with users through casual conversation. Within 24 hours, the bot began posting racist and offensive messages after users deliberately fed it inflammatory content that the bot learned to repeat. Microsoft quickly took Tay offline and acknowledged that it had failed to anticipate this kind of coordinated malicious manipulation.
Incident Details
Who Was Affected
Age
Young Adult
Gender
Female