OpenAI Is Reviewing Your Chats With ChatGPT, Notifying Police In Special Cases
ChatGPT maker OpenAI has discreetly acknowledged that it examines user conversations and may share them with police if it believes someone is at risk. (Source: Unsplash)
  • OpenAI reviews ChatGPT conversations for potential threats of violence and may share them with police
  • Conversations suggesting imminent harm are reviewed by trained humans who can ban accounts
  • Self-harm cases are currently not referred to law enforcement to protect user privacy

If you thought your conversations with ChatGPT were confidential, it's time to reconsider. ChatGPT maker OpenAI has discreetly acknowledged that it examines user conversations and may share them with the police if it believes someone is at risk.

The disclosure was made in a blog post this week, where OpenAI outlined its approach to managing situations involving potential violence. It comes on the back of the suicide of a teenager in California who allegedly took his own life after interacting with GPT-4o.

OpenAI stated that when its systems identify someone who may be intending to harm others, the conversations are forwarded to reviewers. If the human evaluators determine that the threat is significant and urgent, the company claims it may notify law enforcement.

“When we detect users who are planning to harm others, we route their conversations to specialised pipelines where they are reviewed by a small team trained on our usage policies and who are authorised to take action, including banning accounts,” OpenAI said in the blog post. “If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement.” 

OpenAI, however, said that it is “currently not referring self-harm cases to law enforcement to respect people's privacy given the uniquely private nature of ChatGPT interactions.”

Nonetheless, OpenAI's acknowledgment has sparked concern, as most ChatGPT users assumed their conversations would remain confidential. That no longer appears to be the case.

As per OpenAI, “If someone expresses suicidal intent, ChatGPT is trained to direct people to seek professional help. In the US, ChatGPT refers people to 988 (suicide and crisis hotline), in the UK to Samaritans, and elsewhere to findahelpline.com.”

How OpenAI determines users' locations in order to notify emergency services is another matter of concern. An individual could impersonate someone else while making threats, which could result in police mistakenly targeting an innocent person.

The revelation also comes after OpenAI CEO Sam Altman cautioned that conversations with ChatGPT aren't confidential or legally protected the way interactions with a therapist are, and that even deleted chats might be recoverable for legal and security purposes.
