These Medical X-Rays Are All Deepfakes — and They Fool Even Radiologists

Nature – Health Policy
Mar 24, 2026

Why It Matters

Undetected AI‑generated radiographs threaten diagnostic accuracy, research integrity, and legal evidence, creating an urgent need for robust detection mechanisms.

Key Takeaways

  • Radiologists identified AI‑generated X‑rays with 75% accuracy once told fakes were present
  • LLM accuracy ranged from 57% to 85% on the same task
  • Detection performance did not vary with years of experience
  • Only 41% initially suspected the dataset contained synthetic images
  • Synthetic scans risk corrupting research datasets and legal evidence

Pulse Analysis

The proliferation of generative AI has extended beyond text and art into the realm of medical imaging, producing X‑ray deepfakes that appear indistinguishable from genuine scans. These synthetic images can infiltrate training datasets for diagnostic algorithms, potentially biasing outcomes and eroding trust in AI‑assisted radiology. As hospitals and research institutions increasingly rely on large‑scale image repositories, the ability to verify image provenance becomes a critical safeguard against data contamination.

In the recent Radiology study, 17 radiologists from 12 centers evaluated a mixed set of real and AI‑generated X‑rays. Without prior warning, fewer than half flagged the presence of synthetic images; once informed, however, they correctly distinguished real from fake scans 75% of the time, a rate that held steady across a 0‑ to 40‑year experience spectrum. By contrast, leading large language models such as ChatGPT and Gemini achieved only 57‑85% accuracy, underscoring that current AI tools are not yet reliable auditors of their own output. The findings highlight a training gap: targeted education can markedly improve human detection, but systematic solutions are needed for scalable verification.

The implications stretch beyond clinical practice. Undetected deepfake radiographs could skew peer‑reviewed studies, inflate insurance claim values, or be weaponized in courtroom evidence, jeopardizing patient outcomes and financial liability. Stakeholders must invest in forensic imaging technologies, develop standardized metadata for authenticity, and embed detection protocols into radiology workflows. Proactive measures will preserve the integrity of medical research, protect patient safety, and ensure that AI remains a trustworthy ally rather than a source of hidden risk.
