OpenAI has introduced Trusted Contact, an optional ChatGPT safety feature that lets adult users nominate one person who may be notified if the system detects a serious self-harm concern.
OpenAI announced the feature on May 7. The company says automated systems and trained human reviewers help determine whether a notification should be sent, and that any alert is limited in scope and does not include chat transcripts.
The company positions Trusted Contact as a crisis-response safeguard rather than a replacement for professional help or emergency services. The feature is available on personal ChatGPT accounts and is not offered in Business, Enterprise, or Edu workspaces.
OpenAI's help documentation says setup is invitation-based, with users choosing whom to nominate, and that the feature is designed to prompt a check-in when someone may be in danger.
The rollout comes as AI companies face growing scrutiny over how chatbots handle self-harm and other crisis situations. For now, OpenAI has not said when it will publish usage statistics or any broader effectiveness data.
