Quick Read
- OpenAI reviews ChatGPT conversations for potential violence threats and may share them with police
- Conversations suggesting imminent harm are reviewed by trained humans who can ban accounts
- Self-harm cases are currently not referred to law enforcement to protect user privacy
If you believed that your discussions with ChatGPT were confidential, it’s time to reconsider. ChatGPT maker OpenAI has discreetly acknowledged that it examines user conversations and may share them with the police if it believes someone is at risk.
The disclosure came in a blog post this week in which OpenAI outlined its approach to handling situations involving potential violence. It follows the death of a teenager in California who allegedly took his own life after interacting with GPT-4o.
OpenAI stated that when its systems identify someone who may be intending to harm others, the conversations are forwarded to reviewers. If the human evaluators determine that the threat is significant and urgent, the company claims it may notify law enforcement.
“When we detect users who are planning to harm others, we route their conversations to specialised pipelines where they are reviewed by a small team trained on our usage policies and who are authorised to take action, including banning accounts,” OpenAI said in the blog post. “If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement.”
OpenAI, however, said that it is “currently not referring self-harm cases to law enforcement to respect people’s privacy given the uniquely private nature of ChatGPT interactions.”
Nonetheless, OpenAI’s acknowledgment has sparked concern. Many ChatGPT users assumed their conversations would stay confidential, but that doesn’t appear to be the case.
As per OpenAI, “If someone expresses suicidal intent, ChatGPT is trained to direct people to seek professional help. In the US, ChatGPT refers people to 988 (suicide and crisis hotline), in the UK to Samaritans, and elsewhere to findahelpline.com.”
Another concern is how OpenAI determines an individual’s location in order to notify emergency services. A user could also impersonate someone else when making threats, which could result in police mistakenly targeting an innocent person.
The revelation also comes after OpenAI CEO Sam Altman cautioned that conversations with ChatGPT aren’t confidential or legally protected the way interactions with a therapist are, and that even deleted chats might be recoverable for legal and security purposes.