People Are Using AI Tools to Self-Diagnose, but Research Shows They Are Very Likely to Be Getting Bad Advice
Why It Matters
The findings expose a double‑edged impact on patient safety and healthcare utilization, prompting urgent calls for stronger digital‑health regulation and clinician oversight.
Key Takeaways
- 59% delayed professional care after AI reassurance.
- 59% sought unnecessary appointments due to AI alarm.
- 25% received incorrect health information from LLMs.
- 93% used AI symptom checkers late at night.
- 68% felt more confident discussing symptoms with clinicians.
Pulse Analysis
Artificial intelligence‑driven symptom checkers have moved from novelty to mainstream, with tools like ChatGPT offering instant, conversational health advice. Recent peer‑reviewed research in JAMA Network Open showed that these large language models misdiagnosed more than 80% of early clinical scenarios, a stark reminder that the underlying models lack the nuanced reasoning of trained clinicians. This technical limitation is compounded by the sheer volume of users; AXA Health’s survey of 2,000 UK adults found that nearly all AI users (93%) turn to bots after hours, often when anxiety peaks.
The AXA poll paints a paradoxical picture. On one hand, 78% of respondents say AI clarifies medical terminology and boosts their confidence ahead of doctor visits, and 68% feel better equipped to discuss symptoms. On the other hand, the same technology drives harmful behaviors: 59% delayed seeking professional care after AI reassurance, and an equal share pursued appointments that later proved unnecessary. A quarter of users encountered misinformation, and 35% reported heightened health anxiety, a phenomenon AXA dubs the "AI Health Anxiety Loop." These dynamics suggest that AI is not merely a supplemental information source but a catalyst reshaping patient decision‑making.
For healthcare systems, the implications are twofold. First, the surge in self‑diagnosis can strain resources through both over‑utilization and missed early interventions, challenging NHS capacity and increasing costs. Second, the prevalence of inaccurate advice underscores the need for regulatory frameworks that enforce transparency, validation, and integration with professional care pathways. While AI can empower patients, its deployment must be paired with clinician guidance and robust oversight to mitigate risks and harness its educational potential responsibly.