Tuesday, April 21, 2026

Should You Trust Health Advice From AI Chatbots? Experts Warn of Risks Amid Rising Use

Manchester – As artificial intelligence tools become increasingly embedded in daily life, more people are turning to chatbots like ChatGPT, Gemini and Grok for health advice. But experts are warning that while these tools can be helpful, they can also be dangerously unreliable.

For the past year, Abi, a Manchester resident, has been using ChatGPT to help manage her health, particularly during moments of anxiety. She says the chatbot often gives more tailored responses than a standard internet search, which she finds tends to surface worst-case scenarios.

“It allows a kind of problem solving together,” she said. “A little bit like chatting with your doctor.”

In one instance, when she suspected a urinary tract infection, the chatbot advised her to visit a pharmacist, which led to appropriate treatment. Another experience, however, was far more alarming. After she fell while hiking and developed severe back pain, ChatGPT warned her that she might have punctured an internal organ and needed emergency care.

“I went to A&E and sat there for three hours,” Abi said. “The pain started easing and I realised I wasn’t critically ill. The AI had clearly got it wrong.”

Health professionals are increasingly concerned about the growing reliance on AI for medical guidance.

Professor Sir Chris Whitty, England’s Chief Medical Officer, has warned that while people are using chatbots more frequently, their answers are often “not good enough” and can be “both confident and wrong.”

Researchers at the University of Oxford tested several AI systems by providing them with realistic medical scenarios. When given complete information, chatbots performed well—achieving up to 95% accuracy.

However, when 1,300 human participants interacted with the same systems, accuracy dropped sharply to around 35%. Researchers say this is because users often provide incomplete or unclear information, leading to incorrect conclusions.

One test scenario involved symptoms of a brain haemorrhage, a life-threatening condition requiring urgent treatment. In some cases, chatbot responses failed to recognise the seriousness of the condition, instead suggesting rest.

Experts say a major issue is the way AI systems communicate.

Dr Nicholas Tiller, a lead researcher in AI health studies, said chatbots are designed to respond in a confident and authoritative tone, which can mislead users.

“They give very confident responses, and that creates a sense of credibility,” he said. “So users assume it must be correct.”

Another study testing systems including ChatGPT, Gemini, Grok and Meta AI found that more than half of responses on topics such as cancer, vaccines, nutrition and alternative medicine were problematic or misleading. In one case, a chatbot suggested naturopathy as a valid cancer treatment, despite a lack of scientific evidence.

Dr Margaret McCartney, a GP in Glasgow, said chatbots can feel more personal than traditional searches, which may affect how users interpret the advice.

“It feels like you’re having a supportive conversation made ‘for you’,” she said. “That changes how people trust the information.”

Researchers also noted that when users put the same medical questions into a traditional search engine, they were more likely to land on official sources such as the NHS website and receive more reliable guidance.

While AI companies argue that models are improving rapidly, experts say a fundamental limitation remains: chatbots are designed to predict language, not diagnose medical conditions.

Dr Tiller warned that users should not rely on AI for medical decisions unless they have the expertise to verify the information.

“If someone on the street gave you a very confident answer, would you just believe them?” he said. “You would at least check.”

OpenAI, the company behind ChatGPT, said the system is being improved with help from clinicians and is intended only for informational and educational use—not to replace professional medical advice.

Abi, who continues to use AI tools, says users should remain cautious.

“I wouldn’t trust that anything it says is absolutely right,” she said. “You have to take everything with a pinch of salt and remember it can get things wrong.”
