
Everyday AI
The episode highlights how large language models and generative AI often erase the experiences of historically marginalized communities. Bridget Todd points out that people of color, women, queer, trans, older, younger, and working‑class voices are routinely missing from the data that powers tools like ChatGPT, Midjourney, and Canva's image generator. Real‑world examples—such as Canva refusing to render Black women with natural hairstyles or Midjourney consistently depicting CEOs as white men—illustrate how bias in training data translates into erasure and stereotyping in the output. This exclusion not only skews what the technology produces but also reinforces a digital landscape that mirrors existing societal inequities.
Responsibility for these blind spots is distributed across AI developers, platform owners, and end‑users. The hosts argue that companies must audit training sets, diversify engineering teams, and design moderation tools that understand cultural nuance. At the same time, users wield influence by amplifying under‑represented creators and demanding transparency from vendors. Human‑in‑the‑loop processes become crucial when models generate content at scale, ensuring that automated outputs are checked for harmful stereotypes. By treating AI as a collaborative tool rather than an autonomous authority, businesses can mitigate the risk of perpetuating systemic bias.
For business leaders, the conversation translates into actionable strategies. First, embed diverse perspectives in AI procurement criteria and require vendors to disclose data provenance. Second, implement regular bias audits and maintain a human review layer before publishing AI‑generated material. Third, encourage employees to challenge their own echo chambers by customizing model instructions and seeking counter‑arguments from the system. Ultimately, keeping humanity at the center of AI deployment preserves trust, creativity, and competitive advantage. As the hosts conclude, AI should be “for us, by us”—a technology that amplifies, not erases, the full spectrum of human experience.
One of the scariest parts of AI? 😰
Who (or what) gets left out.
As a result, LLM outputs are heavily skewed toward the perspectives and content most common in their training data and the people who supervise them.
Which is almost always a terrible thing.
So, who gets written out of the AI future? And how do we fix it?
Join us to find out.
Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion on LinkedIn: Thoughts on this episode? Join the convo and connect with other AI leaders.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn
Topics Covered in This Episode:
Over-Reliance on AI in Daily Life
Marginalized Groups Excluded from AI Future
AI Reflecting Societal Biases and Blind Spots
Responsibility for AI Training Data and Bias
Dangers of "AI Slop" and Unedited Content
Biased AI Moderation and Platform Challenges
Importance of Human Oversight in AI Outputs
Avoiding AI Echo Chambers and Algorithmic Divide
Trust and Quality Concerns with AI Content
Amplifying Diverse Voices in AI Leadership
Timestamps:
00:00 AI Reliance and Ethical Risks
03:37 Inclusion in AI Conversations
06:32 Shared Responsibility for AI Change
11:03 AI Bias Against Black Hairstyles
15:02 Growing Businesses with Generative AI
16:21 "For Us, By Us"
20:07 Preventing AI Echo Chambers
25:14 Rethinking Leadership and AI Use
26:46 Everyday AI Wrap-Up
Keywords:
AI bias, large language models, marginalized voices in AI, representation in AI, diversity in AI, AI and identity, technology and power, algorithmic bias, training data bias, cultural competence in AI, AI exclusion, social media moderation algorithms, biased AI moderation, racial bias in AI, gender bias in AI, queer representation in AI, trans representation in technology, working class and AI, age bias in AI, responsible AI use, AI content creation, AI slop, human in the loop, human-centered AI, ethical AI, trust in AI, AI and creativity, AI echo chambers, personalization in AI models, AI-generated content, voice amplification in technology, AI-powered surveillance, inverse surveillance, AI leadership, tech activism, AI for social good, AI media trust, challenge in AI adoption, AI community guidelines, inclusion in technology, future of AI representation, multi-agent orchestration, responsible AI auditing, training data selection, human feedback in AI, algorithmic transparency.
Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)
Ready for ROI on GenAI? Go to youreverydayai.com/partner