Quick Read
Summary is AI Generated. Newsroom Reviewed
- David Sacks, White House AI advisor, compared AI psychosis to past moral panics over technology
- AI psychosis involves people fixating on AI as spiritual or romantic partners, per Psychology Today
- Sacks doubts AI psychosis is caused by AI alone, citing existing social and genetic risk factors
The White House official spearheading America's AI policies, David Sacks, compared AI psychosis to the moral panic created over earlier tech leaps, like social media. Sacks, who is President Donald Trump's special advisor on AI and crypto, discussed 'AI psychosis' during an episode of the 'All-In Podcast' published recently.
AI psychosis, or 'ChatGPT psychosis,' is a new and urgent concern emerging at the intersection of AI and mental health as more people turn to AI chatbots for emotional support and even as their therapists, according to Psychology Today. Reported cases include people who become fixated on AI as a spiritual guide or a romantic partner.
During the podcast, the White House AI wiz expressed doubts about the validity of the concept of 'AI psychosis.' Sacks highlighted that the US is in the midst of a mental health crisis.
"I mean, what are we talking about here? People doing too much research?" he asked. "This feels like the moral panic that was created over social media, but updated for AI."
He also referred to a recent article featuring a psychiatrist who said they didn't believe using a chatbot could inherently induce AI psychosis unless other risk factors, including social and genetic ones, were involved, according to a Business Insider report.
"In other words, this is just a manifestation or outlet for pre-existing problems," Sacks said.
Sacks attributed the crisis instead to the Covid-19 pandemic and related lockdowns. "That's what seems to have triggered a lot of these mental health declines," he said.
What Sam Altman Recently Said
Earlier, on Aug. 11, OpenAI CEO Sam Altman introduced safeguards in ChatGPT, including a prompt encouraging users to take breaks after long conversations with the chatbot, following several reports of users suffering mental breaks linked to emotional attachment to ChatGPT.
Altman addressed the issue on X after the company rolled out the highly anticipated GPT-5.
"People have used technology including AI in self-destructive ways; if a user is in a mentally fragile state and prone to delusion, we do not want the AI to reinforce that," he wrote on X.
"Most users can keep a clear line between reality and fiction or role-play, but a small percentage cannot. We value user freedom as a core principle, but we also feel responsible in how we introduce new technology with new risks," he said.