Felix Pinkston
Mar 03, 2026 23:02
OpenAI announces a trusted contact feature and improved distress detection as mental health lawsuits consolidate in a California court. New cases are expected.
OpenAI is rolling out new mental health safety features for ChatGPT while simultaneously bracing for an expanded wave of litigation, as multiple lawsuits alleging the chatbot contributed to user harm have been consolidated into a single California proceeding.
The company announced on March 3, 2026 that it will soon launch a “trusted contact” feature allowing adult users to designate someone who receives notifications when they may need extra support. The feature builds on parental controls launched in September 2025, which OpenAI says have seen “encouraging engagement from families.”
Legal Pressure Mounts
The timing is not coincidental. A California court recently consolidated multiple mental health-related cases against OpenAI, with a coordination judge to be assigned in the coming days. More troubling for the company: plaintiffs’ attorneys have informed the court that they intend to file additional cases.
OpenAI struck a notably measured tone in addressing the litigation, stating it would handle cases “with care, transparency, respect for the people involved” and acknowledging that these situations involve “real people and real lives.” The company urged observers to “reserve judgment” as facts emerge through court proceedings.
Technical Improvements
Beyond the trusted contact feature, OpenAI says it is advancing how its models detect emotional distress through new evaluation methods that simulate extended mental health conversations. This work involves the company’s Council on Well-Being and AI and its Global Physician Network.
These updates follow significant model safety improvements over the past year. When GPT-5 launched in late 2025, OpenAI reported that it had substantially reduced undesired responses in mental health scenarios compared with GPT-4o. The company has also implemented session time limits and “gentle reminders” encouraging users to take breaks during prolonged interactions.
OpenAI updated its usage policies to explicitly prohibit using its models for diagnosing medical conditions or providing specific mental health treatment, positioning ChatGPT as a supportive tool rather than a professional substitute.
Scale of the Challenge
With more than 900 million weekly ChatGPT users, the stakes are substantial. A Stanford University study and other research have fueled concerns about AI chatbots’ potential to contribute to psychological harm, including allegations in pending lawsuits that ChatGPT contributed to psychosis, paranoia, and user suicides.
The company has also committed up to $2 million in grants for external research on culturally grounded mental health topics and improved evaluation methods, an acknowledgment that internal efforts alone may not suffice.
How the consolidated California proceeding unfolds will likely shape regulatory expectations for the entire AI industry. The court’s selection of lead counsel for the plaintiffs, expected soon, will signal how aggressively these cases proceed.
Image source: Shutterstock

