On Thursday, OpenAI introduced a new feature called Trusted Contact, designed to alert a trusted third party if mentions of self-harm come up in a conversation. The feature lets an adult ChatGPT user designate another person, such as a friend or family member, as a trusted contact on their account. In cases where a conversation may turn to self-harm, OpenAI will now encourage the user to reach out to that contact. It also sends an automated alert to the contact, encouraging them to check in with the user.
OpenAI has faced a wave of lawsuits from the families of people who died by suicide after talking with its chatbot. In a number of cases, the families say ChatGPT encouraged their loved one to kill themselves, and even helped them plan it out.
OpenAI currently uses a mix of automation and human review to handle potentially harmful incidents. Certain conversational triggers flag suicidal ideation to the company's system, which then relays the information to a human safety team. The company claims that every time it receives this kind of notification, the incident is reviewed by a human. "We try to review these safety notifications in under one hour," the company says.
If OpenAI's internal team decides that the situation represents a serious safety risk, ChatGPT then sends the trusted contact an alert, either by email, text message, or in-app notification. The alert is designed to be brief and to encourage the contact to check in with the person in question. It doesn't include detailed information about what was discussed, as a way of protecting the user's privacy, the company says.

The Trusted Contact feature follows the safeguards the company introduced last September that gave parents some oversight of their teens' accounts, including safety notifications designed to alert the parent if OpenAI's system believes their child is facing a "serious safety risk." For some time now, ChatGPT has also included automated prompts to seek professional health services should a conversation trend toward the subject of self-harm.
Crucially, Trusted Contact is optional, and even when the protection is activated on a given account, any user can have multiple ChatGPT accounts. OpenAI's parental controls are also opt-in, presenting a similar limitation.
"Trusted Contact is part of OpenAI's broader effort to build AI systems that support people during difficult moments," the company wrote in the announcement post. "We will continue to work with clinicians, researchers, and policymakers to improve how AI systems respond when people may be experiencing distress."