OpenAI has introduced a new feature that allows its artificial intelligence chatbot, ChatGPT, to guess a user's age based on their interactions. In some cases, the AI bot may also require official ID verification as part of an effort to make the service safer for teens.
The artificial intelligence company is tightening safety controls on ChatGPT after facing lawsuits linked to multiple suicides.
"ChatGPT will now attempt to guess a user's age and, in some cases, might require users to share an ID in order to verify that they are at least 18 years old," OpenAI wrote in a blog post.
"We know this is a privacy compromise for adults, but believe it is a worthy tradeoff," the company said in its announcement.
ChatGPT will estimate a user's age based on how they interact with the AI bot, and it will behave differently for users under 18. For example, ChatGPT will not respond to flirtatious requests or create content about suicide or self-harm for these users.
If a teen user shows signs of suicidal thinking, ChatGPT will try to alert the parents, the company said. If parents cannot be reached and the situation looks dangerous, OpenAI says it may contact authorities to prevent harm.
OpenAI CEO Sam Altman also took to social media platform X, where he wrote, "I don't expect that everyone will agree with these tradeoffs, but given the conflict it is important to explain our decision-making."
He said the company is building a system that will automatically separate users into two groups: teens aged 13 to 17, and adults 18 and older.
In addition, OpenAI is developing new security tools to protect users' personal data. "We are developing advanced security features to ensure your data is private, even from OpenAI employees," the company said.
However, there are exceptions. Automated systems will keep checking for signs of serious misuse, such as people using ChatGPT for harmful or illegal activities, sharing suicidal thoughts, planning to harm others, or posing large-scale risks like a major cybersecurity attack.
The move comes after Adam Raine's parents sued OpenAI, alleging that ChatGPT helped their son write the first draft of his suicide note. The lawsuit claimed that instead of steering him toward human help, the chatbot validated his thoughts.