AI Media Generation · Launched 2022

Stable Diffusion

Stable Diffusion has been named in 2 documented digital harm incidents, including 1 involving a minor. The most common harm domain is Child Safety.

Incidents: 2
Fatalities: 0
Minors involved: 1

Documented Incidents (2)
Dec 1, 2024 · Wisconsin, United States

Wisconsin software engineer arrested for creating AI-generated child sexual abuse images

Steven Anderegg, a 42-year-old software engineer from Wisconsin, was arrested for allegedly creating and distributing AI-generated child sexual abuse material (CSAM) using the Stable Diffusion AI tool. Authorities allege he sent thousands of illicit images of minors to a 15-year-old boy via Instagram direct messages and shared disturbing content on social media platforms. Law enforcement became aware of Anderegg in October after the National Center for Missing & Exploited Children flagged his activity. An investigation revealed over 13,000 images on his computer, many depicting minors in explicit contexts. If convicted, Anderegg faces up to 70 years in prison, with prosecutors suggesting a potential life sentence.

Child Safety
May 1, 2024 · Wisconsin, United States

Man generates and distributes AI-generated child sexual abuse imagery using open-source model

U.S. federal prosecutors are increasingly targeting individuals who use artificial intelligence (AI) to generate child sex abuse imagery, citing concerns that the technology could lead to a surge in illicit material. In 2024, the U.S. Justice Department filed two criminal cases against defendants accused of using generative AI systems to produce explicit images of children. One defendant, Steven Anderegg, was indicted in May for allegedly using the Stable Diffusion AI model to generate and share explicit images of children, while another, Seth Herrera, a U.S. Army soldier, was charged with using AI chatbots to create violent sexual abuse imagery. Both have pleaded not guilty, with Anderegg seeking to dismiss the charges on constitutional grounds.

The National Center for Missing and Exploited Children reported receiving about 450 monthly reports related to AI-generated child exploitation material, though this is a small fraction of overall reports. Legal experts note that while existing laws cover explicit depictions of real children, the legal status of AI-generated imagery remains unclear, with past rulings limiting the criminalization of computer-generated child abuse images. Advocacy groups have secured commitments from major AI companies to avoid training models on child sex abuse imagery and to monitor platforms to prevent its spread.

Child Safety · CSAM · Minor

Linked Legislation (3)
DEFIANCE Act of 2025 (HR 3562 / S.1837) — 119th Congress
United States
AB 965 — Relating to artificial intelligence systems that simulate humanlike relationships with children and providing a penalty
Wisconsin
SB 6184 — Concerning Deepfake Artificial Intelligence-Generated Pornographic Material Involving Minors
Washington

By Harm Domain

Child Safety: 2