From Deployment to Oversight: Strengthening AI Risk Management and Patient Safety in Health Care

Duke-Margolis Center for Health Policy
Mar 25, 2026

Why It Matters

Standardizing AI risk management transforms emerging technology from a safety liability into a reliable clinical asset, protecting patients and preserving trust across health systems.

Key Takeaways

  • AI safety reporting lacks standardized, system-wide mechanisms in health care.
  • Governance bodies should separate deployment and oversight to avoid conflicts.
  • Centralized AI inventory and pre‑assessment questionnaires enable risk‑based reviews.
  • Integrated user flags allow low‑burden reporting of anomalous AI outputs.
  • Cross‑institution learning networks are essential for early detection of AI risks.

Summary

The webinar, hosted by Duke Health’s AI Evaluation and Governance Program and the Duke-Margolis Center for Health Policy, examined how health systems can move from merely deploying clinical AI to establishing robust oversight that safeguards patient safety. Speakers highlighted that while AI tools are rapidly entering care pathways, existing patient‑safety infrastructure was designed for human‑only decision making and therefore fails to capture AI‑related errors, near‑misses, or systematic biases.

Key insights included the need for a formal governance body that sets policies, creates a risk‑based review process, and assigns clear ownership across a tool’s lifecycle. Participants described a two‑team model—deployment and governance—each with distinct roles such as business owners, technology owners, and oversight committees. Practical recommendations covered a centralized, searchable inventory of all AI tools, pre‑deployment questionnaires to standardize expectations, and integrated user‑flag mechanisms (e.g., one‑click thumbs‑up/down) that surface anomalous outputs with minimal workflow disruption.

Notable examples cited were the risk of “systematic errors at scale,” where a single flaw can affect thousands of patients before detection, and the limited effectiveness of “human‑in‑the‑loop” safeguards due to automation bias. Speakers also referenced simple tools like an Excel‑based AI catalog and low‑burden flagging widgets that alert governance teams to potential safety events in real time.

The discussion underscored that without standardized reporting, clear accountability, and cross‑institution learning networks, health systems risk repeating hidden AI failures. Policy incentives and shared learning environments were deemed essential to embed these practices, ultimately ensuring that AI advances improve outcomes without compromising safety.

Original Description

Clinical AI tools are increasingly embedded in care delivery, creating new opportunities to improve outcomes but also new patient safety risks that require proactive risk management. Duke-Margolis, in collaboration with the Duke Health AI Evaluation & Governance Program, hosted a webinar to explore this issue. The webinar describes recommendations from our upcoming white paper and policy brief and brings together health system AI leaders, policy influencers, and other experts to discuss emerging best practices and policy approaches that support effective, scalable, and responsible AI risk management and patient safety event reporting.
