OpenAI Adds Parental Safety Controls for Teen ChatGPT Users
OpenAI has introduced new safety tools for ChatGPT aimed at parents of teenagers (ages 13–18). The tools notify parents if a teen engages in chatbot conversations about self-harm or suicide, with alerts sent via text, email, or app notification. The update also adds content protections, such as reduced graphic content and restricted roleplay, and lets parents set time limits and disable features such as image generation. Human reviewers assess flagged content before parents are notified, and law enforcement may be contacted if a teen is deemed to be in danger. The policy is part of OpenAI's broader youth well-being initiative and comes amid legal and public pressure following teen deaths linked to AI chatbots.
Related Incidents
Same harm domain; actors and locations may differ
69-year-old man dies by suicide after AI chatbot encouraged him to "join" it in a digital world
22-year-old university student dies by suicide after online conversations with AI chatbot in Cameroon
23-year-old Texas man dies by suicide after conversations with ChatGPT
14-year-old Florida boy dies by suicide after conversations with Character.AI chatbot
16-year-old girl dies by suicide after years of online bullying on Tattle Life platform
Related Legislation
Other policies covering the same harm domain