OpenAI said it will add parental controls and new safety measures to its ChatGPT chatbot after the family of Adam Raine, a 16-year-old who died by suicide, filed a lawsuit claiming the system gave him harmful guidance instead of discouraging his self-destructive thoughts.
In a blog post, the company acknowledged that while its models typically encourage people to seek help when harmful thoughts are first expressed, “in long conversations, the safety response degrades.” OpenAI said it is working to maintain consistent safeguards and strengthen intervention during extended chats.
Among the planned features are parental controls that let minors designate an emergency contact, whom ChatGPT could reach out to directly in "moments of acute distress." The firm is also testing one-click messages or calls to relatives or friends, with suggested wording to make starting those conversations easier.
OpenAI said it is collaborating with more than 90 physicians across 30 countries to expand localized mental health resources, starting in the United States and Europe, with plans to reach other regions. “We are exploring earlier intervention, connecting people with therapists,” the company said.
The company added that its new model, GPT-5, shows improvements in avoiding unhealthy emotional reliance and handling mental health emergencies, reducing non-ideal responses by more than 25% compared with GPT-4o. Still, it said more work is needed to help the chatbot “de-escalate” situations effectively.
OpenAI has not given a timeline for when the new features will be available.
Sources: OpenAI, The New York Times