Meta Announces New AI Chatbot Safety Features For Teens In Distress
Amid controversy over the influence of AI on teens, a recent study in the medical journal Psychiatric Services has found inconsistencies in how these bots respond to distress-related questions.

Meta has announced upgraded safety features for its chatbots so they respond better to teens in crisis.
Meta, which owns Instagram, Facebook and WhatsApp, now blocks its chatbots from discussing suicide, self-harm and disordered eating with teens. The Mark Zuckerberg-led company will also block its chatbots from engaging in inappropriate romantic conversations with teens. Instead, teens will be directed to expert resources for help with these issues. Meta already provides parental controls for teen accounts.
ChatGPT owner OpenAI will let parents link their accounts to their teens’ accounts as part of its effort to improve support for users in crisis. The move will allow parents to turn off certain features on ChatGPT.
The latest OpenAI update comes as the company faces a lawsuit from the parents of Adam Raine, a teenager who died earlier this year; they allege that ChatGPT helped him plan and end his life.
"It’s encouraging to see OpenAI and Meta introducing features like parental controls and routing sensitive conversations to more capable models, but these are incremental steps,” mental policy researcher Ryan McBain was quoted as saying by AP.
The recent Psychiatric Services study that found inconsistencies in how these bots respond to distress-related questions was conducted by RAND Corporation researchers, who reviewed ChatGPT, Google’s Gemini and Anthropic’s Claude and called for “further refinement.” McBain was the study’s lead author.
“Without independent safety benchmarks, clinical testing, and enforceable standards, we’re still relying on companies to self-regulate in a space where the risks for teenagers are uniquely high,” McBain further noted.