
Researchers Warn of Public-Health Risks as Chatbots Generate ‘Problematic’ Advice
Why It Matters
Inaccurate AI health advice can mislead consumers and strain healthcare systems, making oversight essential for protecting public health as AI adoption accelerates.
Key Takeaways
- Half of health chatbot answers were classified as problematic
- Grok produced the most highly problematic responses (29 of 50)
- Google Gemini showed the highest reliability across categories
- Chatbots answered confidently despite inaccuracies and a lack of caveats
- The study calls for public education, training, and regulation
Pulse Analysis
The recent BMJ Open analysis shines a spotlight on the reliability gap in generative AI tools that many consumers turn to for medical guidance. Researchers from universities in Canada, the UK and the US posed 50 carefully crafted questions—ranging from vaccine safety to the efficacy of alternative cancer therapies—to five popular chatbots. While Gemini emerged as the most dependable, the overall finding that 50 percent of answers were problematic, and a fifth were highly troubling, underscores a systemic issue: these models often generate plausible‑sounding yet inaccurate or incomplete information, especially on nuanced topics like stem‑cell research and nutrition.
The implications for public health are profound. As AI assistants become embedded in smartphones, wearables and telehealth platforms, users may accept confident but erroneous advice without seeking professional verification. This can exacerbate misinformation cycles, delay appropriate treatment, and increase the burden on clinicians who must correct false beliefs. Moreover, the study revealed that only two queries were refused, indicating a low threshold for providing answers even when the content is risky. The lack of built‑in caveats or source citations further erodes trust and hampers informed decision‑making.
Policymakers and industry leaders now face pressure to establish clear standards for AI‑generated health content. Recommendations include mandatory disclosure of uncertainty, real‑time data integration, and rigorous third‑party testing before deployment. Educational initiatives aimed at both consumers and healthcare professionals can mitigate misuse, while regulatory frameworks—potentially modeled after medical device oversight—could enforce accountability. As the technology evolves, balancing innovation with safety will be critical to ensuring AI serves as a supportive tool rather than a source of public‑health risk.