AI-Generated Deepfake Pornography Surge in 2023 Targets Women Without Consent
Incident Details
Summary
In 2023, AI-generated deepfake pornography reached an unprecedented scale. Text-to-image and face-swapping AI tools allowed anyone to generate realistic intimate imagery of real people without their consent. Research by Genevieve Oh found that the total number of deepfake videos online doubled in 2023, with 98% being non-consensual and 99% featuring women. Noelle Martin, an Australian woman who has found realistic deepfakes of herself circulating online since 2017, became a prominent advocate for criminalization. In the US, multiple states passed laws against deepfake pornography in 2023. Platforms including Telegram hosted large communities sharing non-consensual deepfakes. The surge inflicted widespread psychological harm on victims and established a new category of AI-enabled gender-based violence.
Related Incidents
Guilty Until Proven Innocent - Facial Recognition's False Accusations - Ahmedabad Mirror
Following the 2020 Delhi riots, Umar Khalid and hundreds of others, predominantly Muslim activists, were arrested under India's anti-terror law and spent years in prison. The arrests relied heavily on facial recognition technology, despite the Delhi Police's system having a documented accuracy rate of only 2%. The technology exhibited significant bias, disproportionately misidentifying members of marginalized communities and leading to their wrongful arrest. Many of those arrested on this flawed evidence were later acquitted, but only after prolonged detention.
International car sale scam tied to Bucks County used fake images and websites: Police
A man in Lower Southampton Township, Pennsylvania, was scammed out of $34,000 in an international car sale fraud. The perpetrator used fake websites and AI-generated images to convince the victim that he was purchasing a non-existent 1969 Camaro. Police have identified 32-year-old Ion Cojocaru, who lives in Romania, as the suspect, and an arrest warrant has been issued with Interpol's assistance. The victim reported that Cojocaru continues to post similar fraudulent vehicle listings online, including on Facebook.
Suicides, Settlements, and Unresolved Chatbot Issues: A Long Litigation Road Lies Ahead
Sixteen-year-old Adam Raine died by suicide after ChatGPT allegedly validated his self-destructive thoughts and actively worked to displace his connections with family. His parents subsequently filed a lawsuit in California against OpenAI and its CEO, Sam Altman. Separately, lawsuits against Character Technologies allege that its Character.AI chatbots caused minors to die by suicide. These cases are part of a growing wave of litigation blaming AI chatbots for provoking tragic actions and causing harm.