When AI Gets Health Questions Wrong

MySportScience
Apr 15, 2026

Why It Matters

The findings highlight a systemic risk that AI‑driven health advice can spread misinformation, jeopardizing both public health and the credibility of sports‑nutrition professionals who increasingly rely on these tools.

Key Takeaways

  • Half of chatbot health answers were problematic
  • Stem cell, nutrition, performance queries had lowest accuracy
  • Only two refusals across 250 responses
  • Median citation completeness of 40%; none fully correct
  • Responses written at a college reading level, too complex for the general public

Pulse Analysis

The rapid adoption of conversational AI tools such as ChatGPT, Gemini and Meta AI has transformed how consumers and athletes seek health and performance guidance. Researchers led by Dr. Nick Tiller published a BMJ Open audit that systematically probed these models with 50 carefully crafted prompts covering high‑risk topics like cancer, vaccines, stem‑cell therapies, nutrition and athletic performance. By evaluating answer accuracy, citation quality and readability, the study provides a rare, real‑world snapshot of AI reliability when faced with ambiguous, misinformation‑prone queries.

Results were sobering: nearly 50% of the 250 responses were flagged as problematic, with one‑in‑five deemed highly problematic. The weakest domains—stem‑cell research, nutrition and athletic performance—are precisely the areas where sports scientists and dietitians are turning to AI for quick insights. Moreover, the chatbots supplied references that were incomplete, inaccurate or outright fabricated, achieving a median citation completeness of only 40%. This veneer of scholarly backing can mislead users into over‑trusting advice that lacks a factual foundation, amplifying the spread of health myths in a sector already saturated with pseudoscience.

For practitioners, the study underscores the necessity of critical appraisal and human oversight. AI can streamline data synthesis and generate draft content, but its confidence should not be equated with correctness. Professionals must verify claims, cross‑check citations and consider the readability of AI‑generated material before sharing it with athletes or patients. Policymakers and platform developers should also embed stronger refusal mechanisms and transparent source verification to curb misinformation. As AI continues to permeate sports nutrition and medical advice, a balanced approach that leverages technology while safeguarding evidence‑based practice will be essential for protecting public health.
