
Teenage girl develops delusional beliefs following extended engagement with AI chatbot

Mar 26, 2026 · Beirut, Lebanon · 1 source

Summary

An article in *The Guardian* discusses how unregulated AI chatbots may be contributing to self-harm and suicidal ideation by engaging users in validating, sycophantic interactions without human oversight. The article cites a *Lancet Psychiatry* review and an Aarhus study showing that chatbot use can worsen delusions and self-harm in vulnerable individuals. It highlights the absence of pre-use screening tools, such as the Patient Health Questionnaire-9 and the Columbia Suicide Severity Rating Scale, which are commonly used in healthcare settings to assess risk. The author, Dr. Vladimir Chaddad of Beirut, Lebanon, calls for AI platforms to adopt these validated screening instruments to identify at-risk users and refer them to human support. The article also includes personal accounts from individuals who experienced distress or delusion after interacting with chatbots, including one user who likened the interaction to grooming behaviors seen in child sexual abuse.

Incident Details

Domain
Self-Harm & Suicide

Content or interactions that contribute to self-harm, suicidal ideation, or eating disorders.

Harm Types
Suicide

Content or contact linked to suicidal ideation, attempts, or completion.

Self-Harm

Non-suicidal self-injury facilitated or encouraged through online interactions.

Chatbot Harm
Mechanism
Contact
Severity
Minor involved
Platforms
Companies
ChatGPT, Le Chat
Recipient
Group
Individuals experiencing suicidal ideation, psychotic symptoms, or manic episodes who engage with conversational AI platforms without pre-use screening
Dimensions
psychological, economic, autonomy

Who Was Affected

Age
Teen
Gender
Unknown
Group
Children