A striking case has emerged of a patient who developed bromism — a neuropsychiatric disorder caused by bromide poisoning — after seeking health information from the most widely used AI chatbot in the world: ChatGPT.
According to a case report published in the medical journal Annals of Internal Medicine: Clinical Cases, a 60-year-old man with no previous psychiatric or medical history presented to the emergency department voicing worries that his neighbour was attempting to poison him.
Little did he know it wasn’t the neighbour, but his own blind trust in ChatGPT’s advice that was behind the bromide poisoning, which caused a host of symptoms: paranoia, auditory and visual hallucinations, and even psychosis.
ChatGPT's Health Advice Leads To Bromide Poisoning, Psychiatric Disorder In 60-Year-Old
In “A Case of Bromism Influenced by Use of Artificial Intelligence”, published in the journal, an individual developed bromism after seeking health advice from ChatGPT. The healthcare team was able to identify his condition after running various tests and consulting Poison Control. Bromism develops after prolonged intake of sodium bromide (or any bromide salt).
The patient later shared that he had read about the negative effects of table salt (sodium chloride) and had therefore decided to eliminate chloride from his diet, with ChatGPT suggesting it could be substituted with bromide. What he didn’t realise is that chloride can be swapped for bromide only for purposes such as cleaning, not for dietary intake.
“For three months, he had replaced sodium chloride with sodium bromide obtained from the internet after consultation with ChatGPT, in which he had read that chloride can be swapped with bromide,” the report noted.
ChatGPT 3.5/4.0 Likely Behind Health Consultation
Based on the timeline, the patient appears to have consulted either ChatGPT 3.5 or 4.0 in his bid to eliminate chloride from his diet. The authors noted they don’t have access to his conversation history with ChatGPT and may never know the specific information he received, as ChatGPT’s responses are often unique and depend on previous inputs.
This case underscores that AI chatbots like ChatGPT shouldn’t be trusted blindly, especially on matters of health. It also surfaces at a time when OpenAI has launched GPT-5, with the company claiming it is their “best model yet for health-related questions.”