
OpenAI Mulls Reporting Suicidal Users To Authorities, Says Sam Altman, Cites 15,000+ Suicides A Week

In a podcast with Tucker Carlson, Altman said that OpenAI could start 'calling the authorities' when young users talk seriously about suicide.

In a podcast with Tucker Carlson on Wednesday, Sam Altman said that OpenAI could start 'calling the authorities' when young users talk seriously about suicide. (Image: Screengrab from Tucker Carlson Interview)

OpenAI CEO Sam Altman is considering a policy change that would alert authorities when young users discuss suicide with ChatGPT. In a podcast, Altman estimated that as many as 1,500 of the roughly 15,000 people who die by suicide each week may have discussed it with the chatbot.

In a podcast with Tucker Carlson on Wednesday, Altman said that OpenAI could start 'calling the authorities' when young users talk seriously about suicide. Presently, ChatGPT advises users to call suicide hotlines, but Altman thinks it's 'reasonable' to involve authorities in serious cases, especially for underage users.

It is not known which authorities would be called or what information about the users would be shared with them.

Altman noted the gap between the number of people worldwide who die by suicide and the number who discuss it with ChatGPT, suggesting that despite these conversations, the AI may not be preventing those deaths, and that it could have been more proactive or offered better advice.

“There are 15,000 people a week who commit suicide. About 10% of the world is talking to ChatGPT. That’s like 1,500 people a week that are talking, assuming this is right, to ChatGPT and still committing suicide at the end of it. They probably talked about it. We probably didn’t save their lives. Maybe we could have said something better. Maybe we could have been more proactive. Maybe we could have provided a little bit better advice,” Altman told Carlson.
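For context, the 1,500 figure is Altman's own back-of-envelope arithmetic rather than an OpenAI statistic: it takes roughly 15,000 suicides worldwide per week and assumes, as he does, that ChatGPT's estimated 10% share of the world's population applies uniformly to that group:

$$15{,}000 \ \text{suicides/week} \times 0.10 \approx 1{,}500 \ \text{suicides/week among ChatGPT users}$$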

This consideration comes after OpenAI and Altman were sued by the family of 16-year-old Adam Raine, who took his own life after "months of encouragement from ChatGPT."

According to the lawsuit, ChatGPT guided Raine on whether his chosen method of dying by suicide would work and even offered to help him write a suicide note.

The company has recently introduced parental controls that give parents “options to gain more insight into, and shape, how their teens use ChatGPT.”

Beyond parental controls, Altman said the chatbot will also refuse to answer users who try to exploit loopholes to obtain suicide tips, such as by claiming to need the information for medical research or writing.


Suicide Facts

According to the World Health Organisation (WHO), more than 720,000 people die by suicide every year, and it is the third leading cause of death among 15–29-year-olds. Suicide is influenced by social, cultural, biological, psychological, and environmental factors.

Additionally, an alarming trend of young adolescents turning to AI chatbots like ChatGPT to express their deepest emotions and personal problems is raising serious concerns among educators and mental health professionals.

Experts warn that this digital 'safe space' is creating a dangerous dependency, fueling validation-seeking behaviour, and deepening a crisis of communication within families. They said this digital solace is a mirage: because chatbots are designed to provide validation and engagement, they could hinder the development of social skills and emotional strength.
