The Imaginary of Informed Consent: Rethinking Approaches to Data Use for AI in Healthcare

GovLab — Digest — Apr 7, 2026

Key Takeaways

  • Informed consent struggles with AI's secondary data uses
  • India's DPDPA 2023 relies heavily on patient consent
  • Patients often unaware of future AI training purposes
  • Anonymisation alone cannot guarantee privacy in AI models
  • Governance models like data trusts can supplement consent

Pulse Analysis

The surge of artificial intelligence in clinical diagnostics, drug discovery, and patient monitoring has turned health data into a strategic asset. In India, the Digital Personal Data Protection Act of 2023 (DPDPA) designates informed consent as the cornerstone of lawful data processing, including the training of AI algorithms. While the legislation mirrors global privacy trends, it assumes that a single consent transaction can cover the myriad ways patient records will be repurposed, from electronic health‑record storage to future machine‑learning projects. This assumption creates a legal and ethical mismatch, as AI models demand large, diverse datasets that extend far beyond the original clinical intent.

The paper highlights three structural flaws. First, patients must consent separately to medical procedures, digital record creation, and the opaque prospect of AI model training, often without clear explanations of scope or risk. Second, the notion of ‘informed’ consent erodes when data are anonymised but later re‑identified through advanced analytics, undermining the privacy guarantees that anonymisation was meant to provide. Third, consent fatigue leads to blanket approvals that dilute autonomy, leaving individuals exposed to unintended commercial or research uses. Together, these challenges show that consent alone cannot sustain responsible AI development in healthcare.

To bridge the gap, scholars propose complementary governance frameworks such as data trusts, fiduciary stewardship, and sector‑specific oversight bodies that balance patient rights with innovation needs. Such mechanisms can enforce purpose‑limitation, audit data reuse, and provide transparent benefit‑sharing models, while still respecting the spirit of the DPDPA. International examples—from the UK’s NHS data sharing agreements to Canada’s health data custodians—demonstrate that multi‑layered oversight can enhance trust and accelerate AI adoption without sacrificing ethical standards. Policymakers and health providers must therefore move beyond consent‑centric models toward a more resilient, accountable data ecosystem.
