Genevieve Smith - What Gets Encoded: AI, Inequity, and Alternative Technological Futures

Berkeley EECS
Apr 6, 2026

Why It Matters

Left unchecked, biased AI credit systems will widen gender gaps in financial access, undermining development goals and exposing firms to reputational and regulatory risk; responsible design offers a pathway to inclusive growth.

Key Takeaways

  • AI credit tools disproportionately exclude women, especially in rural areas.
  • Gender bias persists despite “gender‑blind” model designs in fintech.
  • Alternative data can boost financial inclusion but embeds existing hierarchies.
  • Co‑creation and dataset redesign can mitigate encoded inequities in AI.
  • The Responsible AI Initiative bridges social science and core AI for equitable outcomes.

Summary

Genevieve Smith, founder of the Responsible AI Initiative at the Berkeley Artificial Intelligence Research (BAIR) Lab, delivered a talk titled “What Gets Encoded: AI, Inequity, and Alternative Technological Futures.” She argued that AI systems are not neutral; they embed existing social hierarchies and can amplify gender inequities, especially in rapidly expanding “AI‑for‑good” domains such as fintech credit scoring.

Smith presented three studies. The first examined machine‑learning‑based credit assessment tools used in Kenya and India. Survey data and 200,000 app‑store reviews revealed that women constitute only 40% of users, require twice as much assistance, and receive smaller loans, and that rural women are five times more likely to be denied credit; these gaps persist after controlling for income, education, and repayment behavior. The second study documented linguistic bias in large language models, which systematically disadvantage speakers of non‑“standard” English varieties, and gender bias in text‑to‑image generators, which reliably reproduce traditional gender roles. The third outlined design interventions, including co‑creation with affected communities, alternative data pipelines, and fine‑tuning strategies, to produce more equitable AI outcomes.
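To make the methodological point concrete, here is a minimal sketch of the kind of controlled comparison the first study describes: a logistic regression of loan denial on gender with the reported controls. The variable names and synthetic data are invented for illustration; they are not Smith’s actual data or model.

```python
# Hypothetical sketch: does a gender gap in loan denial persist after
# controlling for income, education, and repayment? Variable names and
# synthetic data are illustrative only, not from Smith's study.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5_000
df = pd.DataFrame({
    "female": rng.integers(0, 2, n),
    "rural": rng.integers(0, 2, n),
    "income": rng.lognormal(mean=8.0, sigma=0.5, size=n),
    "education_yrs": rng.integers(4, 17, n),
    "repayment_rate": rng.uniform(0.5, 1.0, n),
})

# Simulate a denial process that penalizes women (rural women most)
# even at identical income, education, and repayment -- the pattern
# the talk reports.
linpred = (
    1.5
    + 0.8 * df["female"]
    + 0.6 * df["female"] * df["rural"]
    - 0.35 * np.log(df["income"])
    - 0.05 * df["education_yrs"]
    - 1.0 * df["repayment_rate"]
)
df["denied"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-linpred))).astype(int)

# If the coefficients on `female` and `female:rural` stay positive and
# significant with the controls included, observable differences do not
# explain the gap.
model = smf.logit(
    "denied ~ female * rural + np.log(income) + education_yrs + repayment_rate",
    data=df,
).fit(disp=False)
print(model.summary())
```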

A Kenyan entrepreneur summed up the tool’s impact: “It gave me a sense of security and hope when I almost lost hope to live.” In contrast, fintech engineers defended a “gender‑blind” approach, claiming “data is the truth” and that ignoring demographics ensures fairness, while others admitted that proxy variables inadvertently learn patriarchal patterns. Smith cited Ruha Benjamin’s concept of “default discrimination” to explain how well‑intentioned AI can mask power imbalances.
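The engineers’ claim can be stress‑tested directly. The sketch below drops the gender column entirely, then shows that a simple auditing probe recovers gender from the remaining behavioral features, which is exactly how a “gender‑blind” model can learn gendered patterns through proxies. All feature names and data are hypothetical, invented for illustration.

```python
# Hypothetical sketch: a "gender-blind" credit model can still encode
# gender through proxy features. Feature names and data are invented,
# not drawn from any fintech's pipeline.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 10_000
female = rng.integers(0, 2, n)

# Behavioral proxies correlated with gender through unequal access
# (e.g., airtime top-up frequency, mobile-money volume) -- the
# "male-coded" financial and digital behaviors the talk describes.
topup_freq = rng.normal(10.0 - 3.0 * female, 2.0, n)
mm_volume = rng.lognormal(mean=7.0 - 0.8 * female, sigma=0.6, size=n)
X = pd.DataFrame({
    "topup_freq": topup_freq,
    "log_mm_volume": np.log(mm_volume),
})

# The credit model never sees `female`, yet a probe trained on the
# "blind" features recovers gender far better than chance, so any model
# trained on them can reproduce gendered outcomes.
X_tr, X_te, y_tr, y_te = train_test_split(X, female, random_state=0)
probe = LogisticRegression().fit(X_tr, y_tr)
auc = roc_auc_score(y_te, probe.predict_proba(X_te)[:, 1])
print(f"Gender recoverable from 'gender-blind' features: AUC = {auc:.2f}")
```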

The findings underscore that financial‑inclusion promises can be hollow if AI perpetuates bias. By exposing hidden inequities and proposing participatory model development, Smith’s work calls on regulators, investors, and product teams to embed social‑science insights into AI pipelines, lest billions of dollars in credit be funneled unevenly and systemic discrimination be codified.

Original Description

Biography:
Dr. Genevieve Smith founded the Responsible AI Initiative at the Berkeley Artificial Intelligence Research Lab, which conducts multidisciplinary research on responsible and equitable AI. She is a postdoctoral research fellow at Stanford University and also serves as professional faculty on responsible AI at the UC Berkeley Haas School of Business. Dr. Smith completed her doctoral degree at the University of Oxford in the Department of International Development, co-supervised through the Oxford Internet Institute, and is a research affiliate at the Minderoo Centre for Technology & Democracy at the University of Cambridge and the Technology & Management Centre for Development at the University of Oxford. Smith was recently the Responsible AI Fellow at the United States Agency for International Development. Prior to her doctoral work, Smith spent over a decade researching economic empowerment and inclusive technology, including with UN Women and the International Center for Research on Women. Her research has been published in journals such as Big Data & Society and covered in Nature, The Wall Street Journal, Forbes, the Stanford Social Innovation Review, The Economist, and more. Her research has also been presented at leading conferences such as the International Conference on Machine Learning (ICML), the ACM Conference on Fairness, Accountability & Transparency (FAccT), and the Society for Social Studies of Science.
Abstract:
As AI is increasingly integrated across social and economic life, some of the most challenging problems sit at the boundary between computation and society. Chief among them are questions of fairness, equity, and bias: how these terms are defined and by whom, how they are encoded, and how we might build AI systems differently. This talk uses a sociotechnical lens grounded in Science and Technology Studies (STS) to investigate how inequity is built into AI as a “neutral” default and explores pathways toward alternative technological futures. I examine gender bias in ML-based credit assessment tools in low- and middle-income countries. Using mixed methods, I trace how “gender blind” algorithms and profit priorities privilege male-coded financial and digital behaviors, and reveal how utilitarian perceptions of fairness legitimize gender inequities in financial access. I then show that this same mechanism, the encoding of dominant norms and power hierarchies as neutral defaults, operates across modalities. Drawing on collaborative, multidisciplinary work, I present large-scale studies of linguistic bias in large language models, where models systematically disadvantage speakers of non-“standard” English varieties; and gender bias in text-to-image models, where generated images reliably reproduce traditional gender roles, culminating in current work developing a global counter-stereotypical dataset for open-weight model finetuning. Across this work, I illustrate that inequity is not a glitch but a standard output of current AI paradigms; yet, this outcome is not inevitable. Responsible AI research is strengthened by bringing social science and computational rigor together, requiring attention not only to model outputs, but also the human choices, organizational logics, and structural inequities that shape them. The talk ends by exploring alternative paradigms for AI that ask not just what we can build, but how, why, and with whom.
