White House AI Advisor Says 'AI Psychosis' Similar To Social Media's 'Moral Panic' In Early Days
AI psychosis or 'ChatGPT psychosis' is a new and urgent concern that is emerging at the intersection of AI and mental health, as more people turn to AI chatbots for emotional support.

The White House official spearheading America's AI policies, David Sacks, compared AI psychosis to the moral panic created over earlier tech leaps, like social media. Sacks, who is President Donald Trump's special advisor on AI and crypto, discussed 'AI psychosis' during an episode of the 'All-In Podcast' published recently.
Cases of AI psychosis include people who become fixated on AI as a spiritual guide or as a romantic partner, according to Psychology Today, as more people turn to AI chatbots for emotional support and even as their therapists.
During the podcast, the White House AI adviser expressed doubts about the validity of the concept of 'AI psychosis.' Sacks noted that the US is in the midst of a mental health crisis.
"I mean, what are we talking about here? People doing too much research?" he asked. "This feels like the moral panic that was created over social media, but updated for AI."
He also referred to a recent article featuring a psychiatrist who said they didn't believe using a chatbot inherently induces AI psychosis unless other risk factors, including social and genetic ones, are involved, according to a Business Insider report.
"In other words, this is just a manifestation or outlet for pre-existing problems," Sacks said.
Sacks attributed the crisis instead to the Covid-19 pandemic and related lockdowns. "That's what seems to have triggered a lot of these mental health declines," he said.
What Sam Altman Recently Said
If you have been following the GPT-5 rollout, one thing you might be noticing is how much of an attachment some people have to specific AI models. It feels different and stronger than the kinds of attachment people have had to previous kinds of technology (and so suddenly…
— Sam Altman (@sama) August 11, 2025
On Aug. 11, OpenAI CEO Sam Altman introduced safeguards in ChatGPT, including a prompt encouraging users to take breaks after long conversations with the chatbot, following several reports of users suffering mental breaks tied to their attachment to the AI.
Altman addressed the issue on X after the company rolled out the highly anticipated GPT-5.
"People have used technology including AI in self-destructive ways; if a user is in a mentally fragile state and prone to delusion, we do not want the AI to reinforce that," he wrote on X.
"Most users can keep a clear line between reality and fiction or role-play, but a small percentage cannot. We value user freedom as a core principle, but we also feel responsible in how we introduce new technology with new risks," he said.