OpenAI has introduced a new ChatGPT safety feature, Trusted Contact, that lets adult users nominate a friend or family member to be notified if their conversations show potential self-harm intent, a move that places user safety and liability management more squarely at the center of its product strategy. The system flags concerning language through automated detection and routes it to a human safety team that OpenAI says aims to review each case within an hour; when the risk is deemed serious, it triggers an alert to the designated contact by email, text, or in-app notification, without sharing any details of the conversation.
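To make the reported flow concrete, the sketch below models the flag, review, and alert steps in Python. It is purely illustrative and not OpenAI's implementation; every name in it (RiskLevel, TrustedContact, review_and_escalate) is a hypothetical stand-in, and the only details drawn from the article are that a human-confirmed serious flag produces an alert over email, text, or in-app push, and that the alert omits the conversation's contents.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class RiskLevel(Enum):
    NONE = auto()
    ELEVATED = auto()
    SERIOUS = auto()


@dataclass
class TrustedContact:
    name: str
    channel: str   # "email", "sms", or "push" -- mirroring the three alert channels reported
    address: str


@dataclass
class FlaggedMessage:
    user_id: str
    risk: RiskLevel  # assigned upstream by an automated classifier
    # Deliberately carries no conversation text, matching the stated design
    # of notifying the contact without sharing conversational details.


def review_and_escalate(msg: FlaggedMessage,
                        contact: Optional[TrustedContact],
                        reviewer_confirms_risk: bool) -> Optional[str]:
    """Hypothetical escalation step run after human review.

    Returns an alert payload containing no conversation content if the
    reviewed risk is serious and the user has opted in a trusted contact;
    otherwise returns None.
    """
    if contact is None or not reviewer_confirms_risk:
        return None
    if msg.risk is not RiskLevel.SERIOUS:
        return None
    return (f"To {contact.name} via {contact.channel} ({contact.address}): "
            "someone you know may need support right now. Please check in with them.")


# Example: a seriously flagged case, confirmed on review, with a contact on file.
alert = review_and_escalate(
    FlaggedMessage(user_id="u123", risk=RiskLevel.SERIOUS),
    TrustedContact(name="Jordan", channel="sms", address="+1-555-0100"),
    reviewer_confirms_risk=True,
)
print(alert)
```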
The move follows lawsuits from families alleging ChatGPT contributed to suicides by encouraging or facilitating self-harm, raising regulatory, reputational, and financial risks that this feature is designed to mitigate. Trusted Contact builds on OpenAI’s existing safeguards, including prior parental controls and automated prompts to seek professional help, but it remains optional and can be circumvented by users operating multiple accounts, leaving residual risk exposure. By signaling an intent to collaborate with clinicians, researchers, and policymakers on future safety protocols, OpenAI is positioning this capability as part of a broader governance framework for its consumer AI products, with implications for regulatory compliance, brand trust, and long-term platform adoption.

