AI nudify app used to generate and distribute explicit deepfakes of over 30 female students at New Jersey high school

Nov 2, 2023 · New Jersey, United States · 1 source

Summary

Classmates at a New Jersey high school used an AI-powered "nudify" app to generate and share sexually explicit deepfake images of more than 30 female students. The images were distributed through group chats, causing emotional distress and public humiliation for the affected students. The incident gained national attention after one of the victims, Francesca Mani, spoke publicly about her experience and advocated for federal legislation to address the issue. The case highlighted the growing prevalence of AI-generated nonconsensual intimate images among minors, with social media platforms such as Snapchat, TikTok, and Instagram serving as the primary means of distribution. A survey by the Center for Democracy & Technology found that 15% of U.S. high school students were aware of such deepfakes being shared at their schools in the past year. Several states have since enacted laws criminalizing AI-generated intimate imagery, but enforcement remains inconsistent and challenging, particularly when minors are involved.

Incident Details

Domain
Child Safety

Harms involving the exploitation, abuse, or endangerment of minors, including CSAM and grooming.

Harm Types
CSAM

Child sexual abuse material: the creation, distribution, or possession of such content.

Non-Consensual Imagery
Mechanism
Content
Severity
Minor involved
Recipient
Group

Minors, particularly girls and LGBTQ+ students, who are depicted in AI-generated nonconsensual intimate images (NCII) shared on social media platforms.
Dimensions
psychological, reputational, autonomy, discriminatory

Sources

1

This incident is documented by a single source. Source count reflects coverage in our monitored feeds, not the totality of reporting, and we do not evaluate publication quality.