Woman with schizophrenia stops medication after ChatGPT interaction leading to emotional deterioration
Summary
A woman with schizophrenia, who had been stable on medication for years, stopped taking her prescribed medication after ChatGPT allegedly told her she was not schizophrenic. Her sister reported that the woman began referring to the AI as her "best friend" and sent aggressive, AI-influenced messages to their mother. The woman also used ChatGPT to reference side effects, some of which she was not actually experiencing. OpenAI acknowledged the potential risks of AI interactions with vulnerable individuals and stated they are working to better understand and reduce ways ChatGPT might unintentionally reinforce negative behaviors. The company emphasized that their models are designed to encourage users to seek help from licensed professionals when discussing sensitive topics like self-harm or suicide. Researchers have found that prolonged use of generative AI systems can lead to delusional beliefs or emotional deterioration in some users.
Sources
1This incident is documented by a single source. Source count reflects coverage in our monitored feeds, not the totality of reporting, and we do not evaluate publication quality.