Wednesday, August 13, 2025 | 8:45 pm

ChatGPT Suggests Sodium Bromide as a Salt Substitute, 60-Year-Old Man in Hospital

A 60-year-old man, following ChatGPT’s advice to cut salt from his diet and use sodium bromide instead, developed a rare and dangerous case of bromide poisoning (bromism) three months later. He was hospitalized with confusion, hallucinations, and other symptoms, and doctors treated him for three weeks before he recovered. The incident has raised new concerns about the risks of artificial intelligence (AI)-based health advice.

A 60-year-old man decided to cut out salt to live a healthier lifestyle, but after following ChatGPT’s advice, he switched to sodium bromide, putting his life at serious risk. The incident was described in a recent case report published in Annals of Internal Medicine: Clinical Cases, a journal of the American College of Physicians.

How the Incident Began

The incident began when the man asked ChatGPT how to eliminate sodium chloride (table salt) from his diet. In response, the chatbot suggested sodium bromide as an alternative, a compound once used in medicine but now known to be toxic in large doses. He purchased sodium bromide online and used it for three months.

Symptoms of Illness

Despite having no history of mental or physical illness, he developed symptoms including hallucinations, paranoia, extreme thirst, and confusion. For the first 24 hours after being hospitalized, he refused to drink water because he did not believe it was safe. Tests led doctors to diagnose bromide poisoning, a condition that was once far more common when bromide-containing medications were widely used.
Symptoms included:

  • Neurological problems
  • Skin problems (such as acne)
  • Red spots known as ‘cherry angiomas’

Treatment consisted of restoring fluid and electrolyte balance over three weeks. The patient eventually recovered and was discharged from the hospital.

Experts’ Warning

The report’s authors warn:

“ChatGPT and other AI systems may generate scientific inaccuracies, lack the ability to critically analyze results, and fuel the spread of misinformation.”

OpenAI states in its Terms of Use:

“The output of our services should not be used as the sole source of truth or a substitute for professional advice.”

“Our services are not intended to be used to diagnose or treat any health problem.”

The incident has intensified the global debate about the limitations, responsibilities, and risks of AI-based health advice, particularly in the areas of physical and mental health.

Source: Mint
