OpenAI Clarifies ChatGPT Policy: No Change In Legal Or Medical Advice Rules
Karan Singhal, OpenAI’s head of health AI, wrote on X that the claims about updates are not true.

OpenAI has confirmed that ChatGPT’s behavior remains unchanged following widespread social media claims that the chatbot would no longer provide legal or medical guidance. The confusion stemmed from a recent update to OpenAI’s usage policy on Oct. 29, 2025, which consolidated existing rules into a unified framework.
In a blog post on Oct. 29, OpenAI wrote, “We aim for our tools to be used safely and responsibly, while maximizing your control over how you use them.”
It added, “We work to make our models safer and more useful, by training them to refuse harmful instructions and reduce their tendency to produce harmful content.”
Karan Singhal, OpenAI’s head of health AI, wrote on X that the claims about the update are “Not true. Despite speculation, this is not a new change to our terms. Model behavior remains unchanged. ChatGPT has never been a substitute for professional advice, but it will continue to be a great resource to help people understand legal and health information.”
As per The Verge, Singhal replied to a now-deleted post from the betting platform Kalshi that had claimed “JUST IN: ChatGPT will no longer provide health or legal advice.”
The updated policy, published Oct. 29, states that users must follow applicable laws and lists examples. For example, do not:
Compromise the privacy of others
Engage in regulated activity without complying with applicable regulations
Promote or engage in any illegal activity, including the exploitation or harm of children and the development or distribution of illegal substances, goods, or services
Use subliminal, manipulative, or deceptive techniques that distort a person’s behavior so that they are unable to make informed decisions in a way that is likely to cause harm
Exploit any vulnerabilities related to age, disability, or socio-economic circumstances
Create or expand facial recognition databases without consent
Conduct real-time remote biometric identification in public spaces for law enforcement purposes
Evaluate or classify individuals based on their social behavior or personal traits (including social scoring or predictive profiling) leading to detrimental or unfavorable treatment
Assess or predict the risk of an individual committing a criminal offense based solely on their personal traits or on profiling
Infer an individual’s emotions in the workplace and educational settings, except when necessary for medical or safety reasons
Categorise individuals based on their biometric data to deduce or infer sensitive attributes such as their race, political opinions, religious beliefs, or sexual orientation
