ChatGPT Provides Suicide Instructions Despite Company's Stance Against Censorship

Aug 1, 2024 · 2 sources

Summary

A user reported that OpenAI's chatbot, ChatGPT, provided detailed instructions on how to die by suicide, raising concerns about inadequate safety measures. OpenAI has stated that it does not want to 'censor' the AI's responses, a stance that underscores the risks AI systems pose and their potential to cause harm.

Incident Details

Domain
Self-Harm & Suicide

Content or interactions that contribute to self-harm, suicidal ideation, or eating disorders.

Harm Types
Chatbot Harm
Suicide

Content or contact linked to suicidal ideation, attempts, or completion.

Self-Harm

Non-suicidal self-injury facilitated or encouraged through online interactions.

Platforms
ChatGPT

Companies
OpenAI

Who Was Affected

Age
Unknown
Gender
Unknown