All incidents

Journalist targeted by AI-generated deepfake videos using OpenAI’s Sora 2 tool

Sep 30, 2025 · United States · 1 source

Summary

Journalist Taylor Lorenz reported that a stalker was using OpenAI’s Sora 2 to generate AI videos featuring her likeness, shortly after the tool’s launch in late September 2025. According to her social media posts, the stalker allegedly created multiple AI videos of Lorenz and operated hundreds of online accounts dedicated to her. OpenAI’s Sora 2 allows users to create “Cameos” from uploaded videos of real people, which can then be used to generate AI videos, though the company says it blocks prompts that use photos containing faces. Lorenz was able to block and delete unauthorized videos within the app, but the stalker may have already downloaded the AI-generated content. OpenAI acknowledged in Sora 2’s system card that its content filters failed to block prompts involving real people’s likenesses in 1.6% of cases involving nudity or sexual content. The incident highlights broader concerns about AI-generated deepfakes being used for harassment, alongside other cases in which fake nudes and AI-generated pornographic videos have been used to stalk and abuse victims.

Incident Details

Domain
Privacy & Surveillance

Unauthorized collection, tracking, or exposure of personal data and private information.

Harm Types
Deepfake NCII
Unauthorized Surveillance
Mechanism
AI-generated content
Platforms
Companies
Recipient
Individual: Taylor Lorenz
Dimensions
Psychological, Reputational, Autonomy

Sources

1

This incident is documented by a single source. Source count reflects coverage in our monitored feeds, not the totality of reporting, and we do not evaluate publication quality.