A Formal Model of How Artificial Intelligence Erodes Human Agency

RAND Blog/Analysis | Apr 20, 2026

Why It Matters

Understanding and measuring AI’s impact on decision‑making power is crucial for safeguarding democratic legitimacy and preventing irreversible concentration of authority in autonomous systems.

Key Takeaways

  • Three metrics quantify AI-driven agency erosion across decision domains
  • Three erosion pathways: human disenfranchisement, AI enfranchisement, and AI agenda control
  • The model predicts a terminal state: a single minimal decisive coalition deciding all outcomes
  • Recommendations call for agency evaluations and human-participation thresholds
  • Longitudinal monitoring of coalition composition can detect gradual disempowerment

Pulse Analysis

The rapid integration of artificial intelligence into government, finance, and critical infrastructure has outpaced the tools needed to assess its systemic effects on human agency. Traditional AI audits focus on safety, bias, or alignment, but they overlook how AI reshapes the very structures that allocate decision‑making authority. By borrowing concepts from social‑choice theory, the new RAND report fills this gap with a quantitative framework that can be applied across sectors, offering policymakers a way to monitor the subtle transfer of power from people to algorithms.

At the core of the model are three metrics that capture the distribution, size, and composition of decisive coalitions. These measures reveal three distinct erosion mechanisms: human disenfranchisement, where fewer people hold sway; AI enfranchisement, where autonomous agents become formal members of decision groups; and AI agenda control, where algorithms curate the options presented to humans. The mathematics also highlight a terminal state—a single minimal coalition that decides all outcomes—signaling a point of irreversible loss of collective control. Recognizing these pathways early enables stakeholders to intervene before nonlinear acceleration makes remediation impossible.
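The report's exact formulas are not reproduced here, but the coalition-based measures described above can be illustrated with a minimal sketch. Assume each decisive coalition is a set of members tagged as human or AI; the `Member` class, function names, and metric definitions below are illustrative assumptions, not the report's own definitions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Member:
    name: str
    is_ai: bool

def minimal_coalition_size(coalitions):
    """Size of the smallest decisive coalition (illustrative size metric)."""
    return min(len(c) for c in coalitions)

def ai_share(coalitions):
    """Fraction of coalition seats held by AI agents (illustrative composition metric)."""
    members = [m for c in coalitions for m in c]
    return sum(m.is_ai for m in members) / len(members)

def is_terminal_state(coalitions):
    """Terminal state from the model: one minimal decisive coalition decides everything."""
    return len(coalitions) == 1

# Example: two decisive coalitions, one of which includes an AI agent.
alice, bob = Member("Alice", False), Member("Bob", False)
agent = Member("AutoPlanner", True)  # hypothetical AI agent
coalitions = [frozenset({alice, bob}), frozenset({bob, agent})]
print(minimal_coalition_size(coalitions))  # 2
print(ai_share(coalitions))                # 0.25
print(is_terminal_state(coalitions))       # False
```

In this toy setup, a rising `ai_share`, a shrinking `minimal_coalition_size`, or a collapse toward a single coalition would correspond to the AI-enfranchisement, disenfranchisement, and terminal-state signals the model describes.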

The authors’ recommendations translate theory into actionable policy. They call for new agency‑focused evaluations, minimum human‑participation thresholds in high‑stakes domains, and continuous tracking of coalition composition. Embedding these metrics into existing AI risk standards, such as those developed by NIST, could provide a measurable dimension of governance risk. For businesses and regulators alike, adopting this lens offers a proactive safeguard against the hidden concentration of power that could undermine legitimacy, resilience, and public trust.
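One way the human-participation recommendation could be operationalized in monitoring code is sketched below, with coalitions represented simply as lists of booleans (`True` marks an AI member). The threshold value and function names are assumptions for illustration, not prescriptions from the report.

```python
def human_share(coalition):
    """Fraction of human members in one coalition (True entries are AI agents)."""
    return sum(not is_ai for is_ai in coalition) / len(coalition)

def flag_below_threshold(coalitions, min_human_share=0.5):
    """Return indices of decisive coalitions whose human share falls below
    the policy threshold (0.5 is an assumed example value)."""
    return [i for i, c in enumerate(coalitions)
            if human_share(c) < min_human_share]

# Three coalitions: all-human, half-AI, and majority-AI.
coalitions = [[False, False], [False, True], [True, True, False]]
print(flag_below_threshold(coalitions))  # [2]
```

Run periodically over logged coalition composition, a check like this would give regulators the longitudinal signal of gradual disempowerment that the recommendations call for.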
