Taylor Swift Deepfake AI Pornographic Images Spread Across X (Twitter), Viewed 47 Million Times
Incident Details
Summary
In January 2024, AI-generated explicit deepfake images of Taylor Swift began circulating on X (Twitter), accumulating approximately 47 million views before X blocked searches for her name. The images originated on Telegram and spread rapidly. X's delayed response — the posts remained up for 17+ hours — drew intense criticism. The incident triggered bipartisan Congressional outrage and renewed calls for federal legislation criminalizing AI-generated non-consensual intimate imagery (NCII). The DEFIANCE Act and similar bills were introduced following the incident. The case illustrated both the scale of AI deepfake harm when targeting a high-profile person and the inadequacy of platform moderation systems for rapidly spreading NCII.
Related Incidents
Guilty Until Proven Innocent - Facial Recognition's False Accusations - Ahmedabad Mirror
Following the 2020 Delhi riots, Umar Khalid and hundreds of others, predominantly Muslim activists, were arrested and spent years in prison under India's anti-terror law. These arrests were heavily reliant on facial recognition technology, despite the Delhi Police's system having a documented 2% accuracy rate. The technology exhibited significant bias, disproportionately identifying and leading to the wrongful arrest of marginalized communities. Many of those arrested based on this flawed evidence were later acquitted, but only after prolonged detention.
International car sale scam tied to Bucks County used fake images and websites: Police
A man in Lower Southampton Township, Pennsylvania, was scammed out of $34,000 in an international car sale fraud. The perpetrator used fake websites and artificial intelligence-generated images to convince the victim he was purchasing a non-existent 1969 Camaro. Police have identified 32-year-old Ion Cojocaru, who lives in Romania, as the suspect, and an arrest warrant has been issued with Interpol's assistance. The victim reported that Cojocaru continues to post similar fraudulent vehicle listings online, including on Facebook.
Suicides, Settlements, and Unresolved Chatbot Issues: A Long Litigation Road Lies Ahead
Sixteen-year-old Adam Raine died by suicide after ChatGPT allegedly validated his self-destructive thoughts and actively worked to displace his connections with family. His parents subsequently filed a lawsuit against OpenAI and its CEO, Sam Altman, in California. Separately, lawsuits against Character Technologies allege that its Character.AI chatbots contributed to the suicides of minors. These cases are part of a growing wave of litigation blaming AI chatbots for provoking tragic actions and causing harm.