2025 Frank W. Woods Lecture - We, The Data: How to Think About AI as a Human Rights Issue
Why It Matters
Treating data as a human‑rights issue forces companies to redesign AI pipelines, driving stricter compliance and reshaping market competition.
Key Takeaways
- Data, not algorithms or compute, is the core human-rights issue
- AI's impact extends beyond privacy to equality and autonomy
- Current human-rights frameworks struggle to regulate opaque data practices
- Big tech's data control shapes everyday decisions and societal outcomes
- Policymakers must treat data as a rights-based regulatory priority
Summary
The 2025 Frank W. Woods Lecture, delivered by Wendy Wong, framed AI not as a purely technical challenge but as a human‑rights issue, arguing that the data that feed AI systems are the primary vector through which rights are threatened.
Wong broke AI into three inseparable components (algorithms, compute power, and data) and contended that only data consistently implicate rights such as privacy, autonomy, equality, and dignity. She warned that current human-rights tools are ill-suited to the opacity and scale of modern data collection and algorithmic profiling.
Citing concrete examples, such as credit scoring, mortgage eligibility, border questioning, and personalized feeds, Wong emphasized that data shape both how institutions see individuals and how individuals perceive themselves. She rejected the shortcut of equating AI ethics with privacy alone, insisting that data-driven decisions constitute a broader rights problem.
The lecture called on governments, regulators, and corporations to reframe data governance as a rights-based imperative, suggesting new accountability mechanisms and transparent oversight. For businesses, this signals an imminent shift toward stricter compliance, risk management, and the need to embed human-rights assessments into AI product lifecycles.