Key Takeaways

  • AI-generated health advice can be dangerous and should never replace guidance from a licensed medical professional.
  • AI chatbots may deliver outdated, misleading, or overly generic health information.
  • Experts recommend using AI tools only for general background knowledge and discussing any AI-sourced health advice with a doctor.

A 60-year-old man replaced table salt with sodium bromide after consulting ChatGPT, a switch that led to bromide toxicity and a three-week psychiatric hospitalization.

The case highlights the potential dangers of relying on AI chatbots for health advice. However, most Americans think AI-generated health information is “somewhat reliable,” according to a recent survey. Experts warn that AI tools should never replace professional medical care.

AI Chatbots Don’t Have Your Medical Records

AI chatbots don’t have access to your personal health records, so they can’t give reliable guidance on new symptoms, a condition you already have, or whether you need emergency care.

A chatbot’s health advice is also very generic, said Margaret Lozovatsky, MD, vice president of digital health innovations at the American Medical Association.

The best use of AI for now, she said, is for background information to help you ask your doctor questions, or to explain medical terms you don’t know. 

AI Information Might Be Outdated or Inaccurate

Generative AI relies on the data it was trained on, which may not reflect the most current medical guidance. For example, the Centers for Disease Control and Prevention (CDC) only recently recommended the updated flu shot for everyone 6 months and older, and some chatbots may not have caught up.

Even when an AI chatbot is wrong, it can sound confident and convincing. AI systems may cobble together information to fill gaps, producing false or misleading answers.

A study published in the journal Nutrients found that popular chatbots such as Gemini, Microsoft Copilot, and ChatGPT can generate decent weight-loss meal plans, but the plans often fail to balance macronutrients (carbohydrates, proteins, and fats) and fatty acids.

“I would be extremely reluctant to tell a patient to ever do something based on ChatGPT,” said Ainsley MacLean, MD, a health AI consultant and former chief AI officer for the MidAtlantic Kaiser Permanente Medical Group.

Is There a Safe Way to Use AI Tools for Health?

MacLean noted that generative AI chatbots are not currently covered by health privacy protections such as HIPAA. “Don’t input your personal health information,” she said. “It could end up anywhere.”

When you’re browsing AI summaries on Google, check whether the information comes from a well-known scientific journal or medical organization, and note the date to see when it was last updated.

Lozovatsky said she hopes that people will still visit their doctors if they’re experiencing new symptoms, and be upfront about information found through a chatbot and any actions they’ve taken based on it.

She added that it’s absolutely reasonable to share the information from AI with your physician and ask questions: “Is this accurate? Does it apply to my case? And if not, why not?” You may also ask your doctor if there’s any AI health tool that they trust.

By Fran Kritz

Kritz is a healthcare reporter with a focus on health policy. She is a former staff writer for U.S. News & World Report.
