Deepfake X-Rays Are so Real Even Doctors Can’t Tell the Difference

ScienceDaily Robotics | Mar 26, 2026

Why It Matters

Undetectable deepfake medical images threaten diagnostic integrity, legal proceedings, and hospital cybersecurity, making reliable detection essential for patient safety and trust.

Key Takeaways

  • Radiologists identified only 41% of deepfakes when given no prior warning
  • Accuracy rose to 75% when radiologists were told some images could be synthetic
  • Multimodal LLMs detected fakes with 57‑85% accuracy
  • Experience level didn't correlate with detection ability
  • Researchers suggest watermarks and cryptographic signatures for protection

Pulse Analysis

The rapid advancement of generative AI has moved beyond text and video, now infiltrating the highly regulated realm of medical imaging. Open‑source diffusion models like Stanford’s RoentGen can produce X‑ray images that mimic the texture, anatomy, and pathology of real scans, blurring the line between authentic and fabricated data. As hospitals increasingly rely on digital picture archiving and communication systems (PACS), the ease of inserting synthetic images raises concerns about data provenance and the potential for malicious actors to manipulate clinical records.

The RSNA‑backed study highlights a stark reality: even seasoned radiologists, when not primed for deception, miss more than half of AI‑generated X‑rays. Multimodal large language models, touted for their diagnostic assistance, exhibit comparable shortcomings, with detection accuracies ranging from 57% to 85%. This performance gap erodes confidence in both human and AI decision‑making, especially in high‑stakes scenarios such as forensic imaging or litigation, where a fabricated fracture could sway an outcome. Moreover, the lack of correlation between years of experience and detection ability suggests that traditional expertise alone cannot safeguard against sophisticated synthetic media.

Mitigation will require a multi‑layered approach. Embedding invisible watermarks and cryptographic signatures at the point of image capture can provide immutable proof of origin, while AI‑driven forensic tools must evolve to spot subtle artifacts like overly smooth bone contours or unnaturally symmetrical structures. Industry bodies are already curating deepfake datasets and interactive training modules to upskill clinicians. Looking ahead, the emergence of AI‑generated 3‑D modalities such as CT and MRI will amplify the challenge, making early investment in detection infrastructure and regulatory standards critical to preserving the integrity of medical imaging ecosystems.
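The provenance idea above can be sketched in a few lines. This is a hypothetical illustration, not the scheme proposed in the study: it uses an HMAC with a per-scanner secret key (`DEVICE_KEY` is an invented placeholder) as a stand-in for a full asymmetric signature, to show how a tag bound to the exact pixel bytes at capture time lets any later consumer detect even a one-byte alteration.

```python
import hashlib
import hmac

# Assumption: each imaging device is provisioned with a secret key at install
# time. A real deployment would use asymmetric signatures (e.g. Ed25519) so
# verifiers never hold the signing key; HMAC keeps this sketch stdlib-only.
DEVICE_KEY = b"example-device-secret"


def sign_image(pixel_bytes: bytes, key: bytes = DEVICE_KEY) -> str:
    """Return a hex provenance tag bound to the exact pixel data."""
    return hmac.new(key, pixel_bytes, hashlib.sha256).hexdigest()


def verify_image(pixel_bytes: bytes, tag: str, key: bytes = DEVICE_KEY) -> bool:
    """True only if the pixels are byte-identical to what was signed."""
    expected = hmac.new(key, pixel_bytes, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking tag bytes via timing.
    return hmac.compare_digest(expected, tag)


scan = b"\x00\x01example-pixel-data\x02"
tag = sign_image(scan)
print(verify_image(scan, tag))             # untouched image verifies
print(verify_image(scan + b"\xff", tag))   # any alteration fails
```

The key design point is that the tag is computed at the point of capture, so a synthetic image inserted later into a PACS archive simply has no valid tag to present.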
