Gen AI Shows Promise and Peril in Patient-Centered Care, New Review Finds
Why It Matters
The findings underscore both the transformative potential of generative AI in everyday care and the urgent safety, liability, and regulatory challenges it raises, demanding prompt action from health systems, policymakers, and providers.
Key Takeaways
- AI can personalize care, boost self‑management, and improve access.
- Hallucinations and diagnostic errors pose liability and safety risks.
- Patients with low digital literacy are most vulnerable to AI misinformation.
- The review identifies six governance steps needed for safe, transparent AI integration.
Pulse Analysis
Generative artificial intelligence is rapidly moving from research labs into frontline health settings, driven by advances in large language models and growing consumer demand for digital health tools. Market analysts project that AI‑enabled patient engagement platforms could exceed $10 billion in revenue by 2030, as hospitals seek to reduce administrative burdens and improve outcomes. By framing AI as a patient‑centered decision‑support aid, providers aim to empower individuals to manage chronic conditions, access multilingual education, and participate more actively in shared decision‑making. However, the technology’s speed of adoption outpaces the development of standards, creating a gap between promise and proven safety.
The JMIR review highlights concrete opportunities: AI chatbots that triage symptoms, remote monitoring that flags early warning signs, and automated documentation that frees clinician time. Yet it also details stark risks, including hallucinated responses, biased reasoning, and diagnostic inaccuracies that can mislead both patients and providers. Studies cited in the review found that over half of clinicians missed errors in AI‑generated portal messages, and up to 45% inadvertently sent flawed content to patients. Such findings are especially concerning for vulnerable populations with limited digital health literacy, who may accept confident‑sounding AI output as fact, potentially worsening health disparities.
To bridge this divide, the authors propose six critical actions, from engaging patients in design to establishing independent testing and periodic reassessment for algorithmic drift. Health systems that embed these safeguards—clear consent policies, risk‑based autonomy thresholds, and transparent performance metrics—will be better positioned to harness AI’s benefits while minimizing harm. As regulators begin to draft AI‑specific guidance, early adopters that invest in robust governance can gain a competitive edge, delivering safer, more trustworthy digital care that aligns with evolving industry standards.