Google is one of the worst places to search for your health symptoms. There is a saying that if you Google your symptoms, the results will always point to some critical condition. ChatGPT may be a smarter version of that search, but the platform still knows nothing about your actual health condition; it just serves up a pile of general information. This year, ChatGPT medical advice turned out to be genuinely dangerous for one man.
A 60-year-old man ended up in the hospital after following diet suggestions from ChatGPT. He had been consuming toxic sodium bromide instead of regular salt, and his condition worsened until he began having hallucinations and paranoia.
AI health risks become serious when someone places full trust in these supposedly super-intelligent tools. They are fine for educational purposes, but you cannot rely on AI for your health. When it comes to health, always consult a doctor who has spent years studying medicine and who can review your actual medical history.
AI health risks: From health-conscious to hospital bed
The 60-year-old man was worried about the health effects of regular table salt, so he asked ChatGPT how he could cut down on it in his diet. The AI gave him a number of suggestions, and the main one was to try sodium bromide instead. This compound is considered toxic for human consumption, but he trusted ChatGPT’s medical advice and bought the salt online. He used sodium bromide in his diet for the next three months, and after a while he started feeling unwell.
His symptoms included trouble sleeping, excessive thirst, hallucinations, and paranoia, all of which are associated with bromide poisoning. He didn’t connect any of it to his change in diet; he suspected his neighbour was trying to harm him instead, and he ended up being hospitalised.
Doctors ran tests and found a high level of bromide in his body. Bromide can also be mistaken for chloride in lab results, and the doctors were initially puzzled that anyone would knowingly eat such a poisonous salt. In the end, they diagnosed him with bromism. He was treated with fluids, electrolyte balancing, and specific medication for his mental symptoms, and after three weeks he made a full recovery.
A Disease from the Past
Bromism is rarely seen these days. It was common back in the 1800s, when bromide salts were used in medicines, mainly sedatives, but their serious side effects are now well understood and safer ingredients have replaced them. Today, people usually only develop the condition when they unknowingly take old medication or are exposed to harmful chemicals. This man’s case is a clear example of why AI healthcare safety is necessary and why people should be aware of it. ChatGPT medical advice is something every person should avoid.
AI in Healthcare: Helpful but Risky
ChatGPT is a tool for gaining information; it is not a dedicated medical advice service built specifically for people seeking healthcare guidance. People now turn to AI chatbots such as ChatGPT, Gemini, and many others with health-related questions. According to researchers, more than 5% of the answers were unsafe for users, and 43% of the medical advice provided by these AI chatbots had serious problems. AI healthcare safety is not only necessary for physical health but also for emotional health, because AI has also been seen giving questionable emotional advice to people who ask it for help managing their emotions.
What We Can Learn from the ChatGPT Medical Advice Incident
This case is a clear lesson in how dangerous ChatGPT medical advice can be. Even the most capable AI search tool in the world can give the wrong advice when it is not aware of the whole situation. So even if you are thinking about acting on some AI health suggestion, always ask your doctor first to stay on the safe side; otherwise, you could end up in a much worse situation.
That covers what users should do, but the makers of these AI bots also need to add restrictions, such as not letting the bots make suggestions without knowing a person’s history, especially when it comes to medical advice. People need to be more aware of AI health risks, and younger users should explain these risks to their elders and children who are not very familiar with the technology.