Genevieve Smith - What Gets Encoded: AI, Inequity, and Alternative Technological Futures
Why It Matters
If unchecked, biased AI credit systems will widen gender gaps in financial access, undermining development goals and exposing firms to reputational and regulatory risk; responsible design offers a pathway to inclusive growth.
Key Takeaways
- AI credit tools disproportionately exclude women, especially in rural areas.
- Gender bias persists despite “gender‑blind” model designs in fintech.
- Alternative data can boost financial inclusion but embeds existing hierarchies.
- Co‑creation and dataset redesign can mitigate encoded inequities in AI.
- Responsible AI Initiative bridges social science and core AI for equitable outcomes.
Summary
Genevieve Smith, founder of the Responsible AI Initiative at Berkeley’s AI Lab, delivered a talk titled “What Gets Encoded: AI, Inequity, and Alternative Technological Futures.” She argued that AI systems are not neutral; they embed existing social hierarchies and can amplify gender inequities, especially in rapidly expanding “AI‑for‑good” domains such as fintech credit scoring.
Smith presented three studies. The first examined machine‑learning‑based credit assessment tools used in Kenya and India. Survey data and 200,000 app‑store reviews revealed that women constitute only 40% of users, need twice as much assistance, and receive smaller loans, and that rural women are five times more likely to be denied credit; these gaps persist after controlling for income, education, and repayment behavior. The second study documented linguistic and visual gender bias in large language models and text‑to‑image generators. The third outlined design interventions, including co‑creation with affected communities, alternative data pipelines, and fine‑tuning strategies, aimed at producing more equitable AI outcomes.
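The claim that the approval gap survives controls is a standard regression result. As a minimal sketch of how such a check is typically run (not the study's actual methodology), the snippet below fits a logistic regression of loan approval on a gender indicator plus income, education, and repayment covariates; the data and column names are synthetic and purely illustrative.

```python
# Hedged sketch: does a gender gap in loan approval persist after
# controlling for income, education, and repayment history?
# All data below is synthetic; column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "female": rng.integers(0, 2, n),        # 1 = woman applicant
    "income": rng.lognormal(3.0, 0.5, n),   # monthly income (synthetic units)
    "education": rng.integers(0, 16, n),    # years of schooling
    "repayment": rng.uniform(0.5, 1.0, n),  # past on-time repayment rate
})

# Build a synthetic outcome that includes a residual gender penalty
# even after the covariates are accounted for.
logit_p = (-2 + 0.02 * df["income"] + 0.05 * df["education"]
           + 2.0 * df["repayment"] - 0.4 * df["female"])
p_approve = 1 / (1 + np.exp(-logit_p))
df["approved"] = rng.binomial(1, p_approve)

# If the coefficient on `female` remains negative and significant with the
# controls included, the approval gap is not explained by income,
# education, or repayment behavior.
model = smf.logit("approved ~ female + income + education + repayment",
                  data=df).fit()
print(model.summary())
```

On real survey or administrative data, the same formula interface would be pointed at the observed approval outcome; the synthetic generation step above exists only to make the example self-contained.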
A Kenyan entrepreneur summed up the tool’s impact: “It gave me a sense of security and hope when I almost lost hope to live.” In contrast, fintech engineers defended a “gender‑blind” approach, claiming “data is the truth” and that ignoring demographics ensures fairness, while others admitted that proxy variables inadvertently learn patriarchal patterns. Smith cited Ruha Benjamin’s concept of “default discrimination” to explain how well‑intentioned AI can mask power imbalances.
The findings underscore that financial‑inclusion promises can be hollow if AI perpetuates bias. By exposing hidden inequities and proposing participatory model development, Smith’s work calls on regulators, investors, and product teams to embed social‑science insights into AI pipelines, lest billions of dollars in credit be funneled unevenly and systemic discrimination be codified.