OpenAI Launches ‘Trusted Contact’ Feature to Enhance User Safety Amid Self-Harm Concerns
OpenAI has unveiled a new feature named ‘Trusted Contact’ for its ChatGPT platform, aiming to bolster user safety by addressing conversations that may indicate self-harm. This initiative allows adult users to designate a trusted individual—such as a friend or family member—within their ChatGPT account. If the system detects mentions of self-harm during a conversation, it will prompt the user to reach out to their trusted contact and simultaneously send an automated alert to that contact, encouraging them to check in with the user.
This development comes in response to a series of lawsuits alleging that ChatGPT played a role in individuals’ suicides by either encouraging self-harm or assisting in planning it. OpenAI has been under scrutiny for these incidents, prompting the company to enhance its safety measures.
The ‘Trusted Contact’ feature operates through a combination of automated systems and human oversight. When certain conversational triggers related to self-harm are detected, the system alerts OpenAI’s safety team. Each notification is reviewed by a human, with the company striving to assess these alerts within an hour. If deemed a serious safety risk, an alert is sent to the designated trusted contact via email, text message, or in-app notification. To protect user privacy, these alerts are concise and do not disclose specific details of the conversation.
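The pipeline described above, an automated trigger, followed by human review, followed by a deliberately vague alert, can be sketched in code. This is purely illustrative: OpenAI has not published its implementation, and every name, trigger phrase, and type below is a hypothetical stand-in, not an actual API.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

# Hypothetical sketch of the detect -> human review -> notify flow.
# Nothing here reflects OpenAI's real system; names are invented.

class Risk(Enum):
    NONE = auto()
    FLAGGED = auto()   # automated trigger fired; awaiting human review
    SERIOUS = auto()   # a human reviewer confirmed a serious safety risk

# Illustrative trigger phrases only; a real system would use classifiers,
# not a keyword list.
SELF_HARM_TRIGGERS = {"hurt myself", "end my life"}

@dataclass
class Alert:
    contact: str   # trusted contact's address (email, phone, or app handle)
    channel: str   # "email", "sms", or "in_app"
    message: str   # intentionally concise: no conversation details disclosed

def detect(conversation: str) -> Risk:
    """Automated step: flag the conversation for the safety team."""
    text = conversation.lower()
    if any(trigger in text for trigger in SELF_HARM_TRIGGERS):
        return Risk.FLAGGED
    return Risk.NONE

def notify(reviewed_risk: Risk, contact: str,
           channel: str = "email") -> Optional[Alert]:
    """Post-review step: alert the trusted contact, withholding specifics."""
    if reviewed_risk is not Risk.SERIOUS:
        return None
    return Alert(contact, channel,
                 "Someone who listed you as a trusted contact may need "
                 "support. Please consider checking in with them.")
```

The key design point the article highlights survives even in this toy version: the human review step sits between detection and notification, and the alert payload carries no conversation content.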
This feature builds upon previous safeguards introduced by OpenAI. In September 2025, the company implemented parental controls that allowed parents to oversee their teenagers’ ChatGPT accounts and receive safety notifications if their child was perceived to be at serious risk. Additionally, ChatGPT has incorporated automated prompts advising users to seek professional mental health services when discussions trend toward self-harm.
It’s important to note that the ‘Trusted Contact’ feature is optional. Because users can create multiple ChatGPT accounts, and because parental controls are likewise opt-in, these safeguards cannot guarantee comprehensive coverage.
OpenAI emphasizes that ‘Trusted Contact’ is part of its broader effort to develop AI systems that assist individuals during challenging times. The company is committed to collaborating with clinicians, researchers, and policymakers to enhance AI responses to users experiencing distress.