OpenAI Is Reviewing Your Chats With ChatGPT, Notifying Police In Special Cases
OpenAI’s acknowledgment has sparked widespread concern.

If you believed your discussions with ChatGPT were confidential, it’s time to reconsider. ChatGPT maker OpenAI has quietly acknowledged that it reviews user conversations and may share them with police if it believes someone poses a threat to others.
The disclosure was made in a blog post this week in which OpenAI outlined its approach to handling conversations that suggest potential violence. It comes in the wake of the suicide of a California teenager who allegedly took his own life after interacting with GPT-4o.
OpenAI stated that when its systems identify a user who appears intent on harming others, the conversation is forwarded to human reviewers. If those reviewers judge the threat to be serious and imminent, the company says it may notify law enforcement.
“When we detect users who are planning to harm others, we route their conversations to specialised pipelines where they are reviewed by a small team trained on our usage policies and who are authorised to take action, including banning accounts,” OpenAI said in the blog post. “If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement.”
OpenAI, however, said that it is “currently not referring self-harm cases to law enforcement to respect people’s privacy given the uniquely private nature of ChatGPT interactions.”
Nonetheless, OpenAI’s acknowledgment has sparked concern: most ChatGPT users assumed their conversations would remain confidential, and that clearly isn’t the case.
According to OpenAI, “If someone expresses suicidal intent, ChatGPT is trained to direct people to seek professional help. In the US, ChatGPT refers people to 988 (suicide and crisis hotline), in the UK to Samaritans, and elsewhere to findahelpline.com.”
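OpenAI has not described how this detection works under the hood, but its public Moderation API classifies text into categories such as violence and self-harm that line up with the behaviour the blog post describes. The Python sketch below is a hypothetical illustration of that kind of triage, not OpenAI’s actual pipeline; the triage_message function and review_queue are invented names used for demonstration.

```python
# Hypothetical sketch of automated triage using OpenAI's public Moderation API.
# OpenAI has not published its internal pipeline; triage_message and
# review_queue are illustrative inventions, not the company's code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

review_queue = []  # stand-in for a queue feeding trained human reviewers


def triage_message(message: str) -> None:
    """Route a message based on moderation results: threats of violence go to
    human review; self-harm signals get a crisis-resource referral instead."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=message,
    ).results[0]

    if result.categories.violence or result.categories.harassment_threatening:
        # Flagged for human review -- per the blog post, only human reviewers
        # decide whether a case is an imminent threat warranting a referral
        # to law enforcement.
        review_queue.append({"message": message, "scores": result.category_scores})
    elif result.categories.self_harm or result.categories.self_harm_intent:
        # Self-harm cases are not referred to police; the model is trained to
        # point users to crisis resources instead.
        print("If you're struggling, please reach out: 988 (US), "
              "Samaritans (UK), or findahelpline.com")
```

Even in this simplified form, the design mirrors what OpenAI describes: software only flags and routes, while any decision with real-world consequences rests with human reviewers.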
How OpenAI determines a user’s location in order to notify emergency services is another open question. A user could also impersonate someone else when making threats, which could result in police mistakenly targeting an innocent person.
The revelation also follows OpenAI CEO Sam Altman’s earlier caution that conversations with ChatGPT are not confidential or legally protected the way exchanges with a therapist are, and that even deleted chats may be recoverable for legal and security purposes.