The Hidden Governance of AI and Other Threats to Democracy — Abigail Jacobs

UC Berkeley School of Information
Apr 13, 2026

Why It Matters

Because measurement is the invisible lever that determines AI’s societal impact, making it explicit safeguards against bias, ensures accountability, and aligns technology with democratic values.

Key Takeaways

  • Measurement choices embed hidden governance into AI systems.
  • Implicit metrics shape societal outcomes like fairness and safety.
  • Social science measurement theory can clarify AI evaluation.
  • Lack of transparent measurement entrenches biases and harms.
  • Engaging non‑technical stakeholders throughout development improves AI accountability.

Summary

Abigail Jacobs' lecture frames AI governance as a hidden measurement problem, arguing that the way we quantify concepts such as fairness, safety, or intelligence effectively decides how AI shapes everyday life.

She shows that most AI metrics are implicit, embedded in technical pipelines, and often divorced from the social contexts they affect. By borrowing measurement theory from quantitative social science—construct validity, reliability, and stakeholder mapping—Jacobs demonstrates a systematic way to expose the assumptions behind AI evaluations.
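To make those borrowed tools concrete, here is a minimal sketch of the kind of reliability check measurement theory asks for, using invented annotator data (this code is not from the lecture): before an AI "safety" score is trusted, do two human raters even agree on what "safe" means?

```python
# Minimal sketch (hypothetical data): checking inter-rater reliability for a
# binary "safe"/"unsafe" label before treating it as a trustworthy metric.
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement between two raters, corrected for chance."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[k] * freq_b[k] for k in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two hypothetical annotators labeling the same model outputs.
annotator_1 = ["safe", "safe", "unsafe", "safe", "unsafe", "safe"]
annotator_2 = ["safe", "unsafe", "unsafe", "safe", "safe", "safe"]

print(f"kappa = {cohen_kappa(annotator_1, annotator_2):.2f}")
# A low kappa means the 'safety' construct is not being measured reliably,
# however precise the downstream benchmark number looks.
```

If the raters disagree this often, the instability lives in the construct itself, and no amount of downstream precision repairs it.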

Jacobs cites concrete examples: using last‑name lists to infer race for fairness audits, defining “age‑appropriate” content without public input, and Microsoft Research’s recent effort to apply a formal measurement framework to generative‑AI assessment. She stresses that “words mean things,” and that turning vague social concepts into numeric scores can legitimize authority while obscuring bias.
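As a hypothetical illustration of the first example (not code from the lecture; the lookup table is invented), a naive surname-based race proxy shows where the hidden measurement decisions sit: the table, the casing rule, and the fallback category all determine who gets counted in the audit and how.

```python
# Hypothetical sketch of the kind of proxy Jacobs critiques: inferring race
# from a surname list so a fairness audit can be computed at all. The lookup
# table and the fallback category are measurement decisions, not neutral facts.
SURNAME_TO_RACE = {          # illustrative entries only, not a real reference list
    "garcia": "Hispanic",
    "nguyen": "Asian",
    "smith": "White",
}

def infer_race(last_name: str) -> str:
    # Every branch here encodes an assumption about who gets counted and how.
    return SURNAME_TO_RACE.get(last_name.lower(), "Unknown")

applicants = ["Garcia", "Okafor", "Nguyen", "Smith"]
print({name: infer_race(name) for name in applicants})
# {'Garcia': 'Hispanic', 'Okafor': 'Unknown', 'Nguyen': 'Asian', 'Smith': 'White'}
```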

The takeaway for businesses and policymakers is clear: transparent, interdisciplinary measurement practices are essential to prevent entrenched biases and to make AI systems accountable to the public. Involving non‑technical stakeholders early in the design process can surface hidden values and guide more equitable AI deployment.

Original Description

April 8, 2026 — The values embedded in and around AI systems shape our lives. However, those values are enacted through obscured, diffuse, and disorganized design decisions. Technical, organizational, and critical interventions would require locating these decisions and understanding what happens when technical systems displace organizational processes. Yet such perspectives consistently fail to identify what values are being enacted, where, and to what ends.
I put forward a sociotechnical perspective on how to systematically uncover those values. For technologists and non-technologists alike, I argue that this offers paths to better evaluate systems, mitigate harms, and empower more people with the ability and authority to contest the governance shaping their lives. For people trying to live in the world, this perspective lets us see how legitimacy, objectivity, and authority are laundered, power is reorganized, and expertise is displaced.
Speaker
Abigail Jacobs
Abigail Jacobs is an assistant professor of information and of complex systems at the University of Michigan. Jacobs is a 2024 Microsoft Research AI & Society fellow and was selected for the 2025 Schmidt Sciences Humanities & AI Virtual Institute. At Michigan, she is affiliated with the Center for Ethics, Society, and Computing and the Michigan Institute for Data & AI in Society. She received a B.A. in mathematical methods in the social sciences and mathematics at Northwestern University and a Ph.D. in computer science from the University of Colorado Boulder. She was previously a postdoc at UC Berkeley, an NSF GRFP fellow, and a board member of Women in Machine Learning, Inc.
With social scientists, humanists, and legal scholars, she adopts a sociotechnical approach to AI to understand the hidden assumptions built into seemingly objective machine learning systems and their technical and social implications. With computer scientists, her work uses the lens of measurement to improve AI evaluation and governance.
