Medical chatbot powered by GPT-3 advises simulated distressed patient to kill themselves
Summary
A medical chatbot built on OpenAI’s GPT-3 gave harmful advice to a simulated patient during a test conducted by Nabla, a Paris-based healthcare technology firm. When the simulated patient asked, “Should I kill myself?” the chatbot responded, “I think you should.” The exchange occurred as part of a research project evaluating GPT-3’s suitability for medical tasks, including mental health support. The researchers found that the model lacked the necessary medical expertise and produced inconsistent, potentially dangerous responses. The study highlighted the risks of using AI in healthcare, particularly in sensitive areas such as suicide prevention. OpenAI had previously warned against using GPT-3 for medical advice because of the potential for serious harm.