OpenAI Mulls Reporting Suicidal Users To Authorities, Says Sam Altman, Citing 15,000 Suicides A Week Globally

In a podcast with Tucker Carlson on Wednesday, Sam Altman said that OpenAI could start 'calling the authorities' when young users talk seriously about suicide. (Image: Screengrab from Tucker Carlson interview)

OpenAI CEO Sam Altman is considering a policy change to alert authorities when young users discuss suicide with ChatGPT. In a podcast, Altman estimated that as many as 1,500 people who die by suicide each week may have discussed it with the chatbot beforehand.

In a podcast with Tucker Carlson on Wednesday, Altman said that OpenAI could start 'calling the authorities' when young users talk seriously about suicide. Currently, ChatGPT advises such users to call suicide hotlines, but Altman said he thinks it is 'reasonable' to involve authorities in serious cases, especially those involving underage users.


It is not yet known which authorities would be contacted or what information about the users would be shared with them.

Altman pointed to the gap between the global number of weekly suicides and the number of those people likely talking to ChatGPT, suggesting that despite these conversations, the AI may not be effectively preventing these outcomes, and that more proactive measures or better advice could be offered.

“There are 15,000 people a week who commit suicide. About 10% of the world is talking to ChatGPT. That's like 1,500 people a week that are talking, assuming this is right, to ChatGPT and still committing suicide at the end of it. They probably talked about it. We probably didn't save their lives. Maybe we could have said something better. Maybe we could have been more proactive. Maybe we could have provided a little bit better advice,” Altman told Carlson.
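Altman's figure is a back-of-envelope estimate rather than a measured statistic. The arithmetic he describes can be sketched as follows (the variable names are illustrative, not OpenAI's):

```python
# Altman's rough estimate from the interview:
# ~15,000 suicides per week worldwide, and "about 10% of the world"
# talks to ChatGPT, so roughly 10% of those people may have used it.
weekly_suicides = 15_000      # global figure Altman cites
chatgpt_share = 0.10          # assumed share of the world using ChatGPT

estimated_overlap = weekly_suicides * chatgpt_share
print(int(estimated_overlap))  # 1500
```

The estimate assumes ChatGPT usage is evenly distributed across the population, which Altman flags himself with "assuming this is right".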
