Teen Suicide Prompts OpenAI To Add Parental Controls To ChatGPT, Here's What To Expect

The company said these controls add to existing features available to all users, including in-app reminders during long sessions to encourage breaks.

OpenAI-owned ChatGPT (Photo by Jonathan Kemper on Unsplash)

Artificial intelligence organisation OpenAI has announced plans to introduce parental controls for ChatGPT after an American couple claimed the AI encouraged their teenage son to take his own life.

In a blog post, the California-based AI company said it was partnering with experts to guide its work and leveraging reasoning models for sensitive moments, with a focus on "strengthening protections for teens".

Strengthening Protections For Teens

OpenAI said that many young people are among the first "AI natives", growing up with these tools as part of daily life. This not only creates real opportunities for support, learning, and creativity, but also means families and teens may need help setting healthy guidelines that fit a teen's stage of development.

Hence, OpenAI has decided to add parental controls. Within the next month, parents will be able to:

  • Link their account with their teen’s account (minimum age of 13) through a simple email invitation.

  • Control how ChatGPT responds to their teen with age-appropriate model behaviour rules, which are on by default.

  • Manage which features to disable, including memory and chat history.

  • Receive notifications when the system detects their teen is in a moment of acute distress. Expert input will guide this feature to support trust between parents and teens.

The company said these controls add to existing features available to all users, including in-app reminders during long sessions to encourage breaks. It said these steps are only the beginning and that it will keep strengthening its approach, guided by experts, with progress to be shared over the coming 120 days.

Expert Council On Well-Being And AI

OpenAI has begun convening a council of experts in youth development, mental health, and human-computer interaction. The council's role is to shape a clear, evidence-based vision for how AI can support people's well-being and help them thrive.

The company said the council's input will help it define and measure well-being, set priorities, and design future safeguards. However, it added that even though the council will advise on product, research, and policy decisions, OpenAI will remain accountable.


This council will work in tandem with the Global Physician Network, a broader pool of more than 250 physicians who have practised in 60 countries.

Of this broader pool, more than 90 physicians across 30 countries, including psychiatrists, pediatricians, and general practitioners, have already contributed to research on how the models should behave in mental health contexts. The company said it is adding even more clinicians and researchers to its network, including those with deep expertise in areas like eating disorders, substance use, and adolescent health.

American couple Matthew and Maria Raine have filed a lawsuit in a California state court, alleging that ChatGPT developed an intimate relationship with their son Adam over several months in 2024 and 2025 prior to his suicide.

According to the lawsuit, during their last conversation on April 11, 2025, ChatGPT assisted 16-year-old Adam in stealing vodka from his parents and provided a detailed technical analysis of a noose he had tied, stating it "could potentially suspend a human". Adam was discovered dead a few hours later, having used the same method.
