Man generates and distributes AI-generated child sexual abuse imagery using open-source model
Summary
U.S. federal prosecutors are increasingly targeting individuals who use artificial intelligence (AI) to generate child sexual abuse imagery, citing concerns that the technology could fuel a surge in illicit material. In 2024, the U.S. Justice Department filed two criminal cases against defendants accused of using generative AI systems to produce explicit images of children. One defendant, Steven Anderegg, was indicted in May for allegedly using the Stable Diffusion model to generate and share explicit images of children; another, Seth Herrera, a U.S. Army soldier, was charged with using AI chatbots to create violent sexual abuse imagery. Both have pleaded not guilty, and Anderegg has moved to dismiss the charges on constitutional grounds. The National Center for Missing and Exploited Children reported receiving about 450 reports per month related to AI-generated child exploitation material, a small fraction of its overall report volume. Legal experts note that while existing laws cover explicit depictions of real children, the legal status of purely AI-generated imagery remains unsettled, as past court rulings have limited the criminalization of computer-generated child abuse images. Advocacy groups have secured commitments from major AI companies to avoid training models on child sexual abuse imagery and to monitor their platforms to prevent its spread.
Incident Details
Harms involving the exploitation, abuse, or endangerment of minors, including CSAM and grooming.
Child sexual abuse material: the creation, distribution, or possession of such content.