The Hidden Governance of AI and Other Threats to Democracy — Abigail Jacobs
Why It Matters
Measurement is the invisible lever that determines AI's societal impact: making measurement choices explicit safeguards against bias, ensures accountability, and aligns technology with democratic values.
Key Takeaways
- Measurement choices embed hidden governance into AI systems.
- Implicit metrics shape societal outcomes like fairness and safety.
- Social science measurement theory can clarify AI evaluation.
- Lack of transparent measurement entrenches biases and harms.
- Engaging non‑technical stakeholders across development improves AI accountability.
Summary
Abigail Jacobs' lecture frames AI governance as a hidden measurement problem, arguing that the way we quantify concepts such as fairness, safety, or intelligence effectively decides how AI shapes everyday life.
She shows that most AI metrics are implicit, embedded in technical pipelines, and often divorced from the social contexts they affect. By borrowing measurement theory from quantitative social science—construct validity, reliability, and stakeholder mapping—Jacobs demonstrates a systematic way to expose the assumptions behind AI evaluations.
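To make the reliability idea concrete, here is a minimal Python sketch (not from the lecture; the raters, labels, and data are hypothetical) of one standard check from that social-science toolkit: Cohen's kappa, which asks whether two human raters scoring the same model outputs agree beyond what chance alone would produce.

```python
from collections import Counter

def cohen_kappa(labels_a, labels_b):
    """Inter-rater agreement corrected for chance: a basic
    reliability check for a human-scored AI evaluation."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items both raters labeled the same.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement if each rater labeled independently at random
    # according to their own label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(
        (freq_a[k] / n) * (freq_b[k] / n)
        for k in set(freq_a) | set(freq_b)
    )
    if expected == 1:
        return 1.0  # both raters used a single identical label
    return (observed - expected) / (1 - expected)

# Two hypothetical annotators scoring model outputs as "safe" (1)
# or "unsafe" (0). A low kappa would signal that the construct
# "safety" is not being measured consistently.
rater_1 = [1, 1, 0, 1, 0, 0, 1, 1]
rater_2 = [1, 0, 0, 1, 0, 1, 1, 1]
print(f"Cohen's kappa: {cohen_kappa(rater_1, rater_2):.2f}")  # ~0.47
```

A kappa this far below 1 would suggest the evaluation's operational definition of "safety" is unstable, exactly the kind of hidden assumption Jacobs argues should be surfaced before the metric is treated as authoritative.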
Jacobs cites concrete examples: using last‑name lists to infer race for fairness audits, defining “age‑appropriate” content without public input, and Microsoft Research’s recent effort to apply a formal measurement framework to generative‑AI assessment. She stresses that “words mean things,” and that turning vague social concepts into numeric scores can legitimize authority while obscuring bias.
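The surname-proxy example can be made tangible with a short hypothetical sketch. The lookup table, probabilities, and threshold below are invented for illustration and do not come from the lecture or any real audit; the point is that each choice (the prior table, the cutoff, the handling of unmatched names) is a measurement decision with governance consequences.

```python
# Hypothetical P(group | surname) values, for illustration only.
SURNAME_PRIORS = {
    "garcia": {"hispanic": 0.9, "white": 0.1},
    "smith":  {"white": 0.7, "black": 0.2, "hispanic": 0.1},
}

def infer_group(surname, threshold=0.8):
    """Assign a demographic group from a surname prior: the kind of
    implicit measurement choice that silently decides who counts as
    a member of each group in the downstream fairness metric."""
    priors = SURNAME_PRIORS.get(surname.lower())
    if priors is None:
        return "unknown"  # coverage gaps become missing data
    group, p = max(priors.items(), key=lambda kv: kv[1])
    return group if p >= threshold else "ambiguous"

print(infer_group("Garcia"))  # 'hispanic'
print(infer_group("Smith"))   # 'ambiguous' at threshold 0.8
print(infer_group("Nguyen"))  # 'unknown': absent from the table
```

Whoever sets the threshold or decides how "unknown" names are counted is quietly governing the audit's conclusions, which is precisely why Jacobs insists these choices be made explicit and open to scrutiny.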
The takeaway for businesses and policymakers is clear: transparent, interdisciplinary measurement practices are essential to prevent entrenched biases and to make AI systems accountable to the public. Involving non‑technical stakeholders early in the design process can surface hidden values and guide more equitable AI deployment.