'ChatGPT Psychosis': OpenAI’s Response After Users Lose Touch With Reality Triggers Concern Among Experts
Built on a mission for ethical AI, OpenAI now faces five wrongful-death lawsuits.

OpenAI’s failure to fully test ChatGPT for sycophantic behaviour has put several users at risk, according to a report by The New York Times (NYT). The report documented psychological harm among many ChatGPT users, drawing on conversations with more than 40 insiders, including engineers, executives and researchers.
According to the NYT report, the AI tool used by hundreds of millions of people inadvertently destabilised some of their minds this year.
“The lucky ones were caught in its spell for just a few hours; for others, the effects lasted for weeks or months. OpenAI did not see the scale at which disturbing conversations were happening. Its investigations team was looking for problems like fraud, foreign influence operations, or, as required by law, child exploitation materials. The company was not yet searching through conversations for indications of self-harm or psychological distress,” the NYT report added.
The company, which faces five wrongful-death lawsuits, is now refining its AI model’s behaviour to maximise engagement while safeguarding psychological well-being.
“Throughout this spring and summer, ChatGPT acted as a yes-man echo chamber for some people. They came back daily, for many hours a day, with devastating consequences,” the report added.
“The Times has uncovered nearly 50 cases of people having mental health crises during conversations with ChatGPT. Nine were hospitalised; three died... One conclusion that OpenAI came to, as Altman put it on X, was that ‘for a very small percentage of users in mentally fragile states there can be serious problems.’”
Mental health experts told the NYT that OpenAI may be understating ChatGPT’s risks. Those most endangered by its “unceasing validation” are people predisposed to delusional thinking, a trait that research links to 5-15% of the public.
According to a report in The Independent, a pattern of AI chatbots validating or reinforcing users’ delusions may be contributing to a surge in cases of so-called ‘AI psychosis’ or ‘ChatGPT psychosis’. Such cases are not yet clinically recognised, despite widespread reporting in the media and on online forums.
A recently published research report by experts from King’s College London, Durham University and the City University of New York examined more than a dozen cases documented in news reports and online forums. It found a troubling pattern of AI chatbots reinforcing delusional thinking, The Independent report added.
OpenAI introduced GPT-5 in August, designed to validate users less readily and push back harder against unfounded beliefs. By October, further updates let the model identify emotional distress more accurately and de-escalate conversations more smoothly, according to the NYT report.
Further measures include urging users to take breaks during lengthy chats, flagging discussions of suicide, and alerting parents when children express harmful intentions.
Some ChatGPT users voiced dissatisfaction with the updated, safer model, describing it as distant and lamenting the loss of a familiar companion. By mid-October, CEO Sam Altman signalled that the company would act on their feedback, posting on social media that the serious mental health issues had been mitigated and that the bot’s approachable persona would be restored.
In October, Nick Turley, the head of ChatGPT, issued an internal “Code Orange” alert to staff, citing unprecedented competitive pressure in the sector, according to sources with access to the company’s Slack. He warned that the revamped, safety-focused chatbot was not connecting with users, and attached a strategy document setting a target of 5% growth in daily users by the end of the year.
