Who Gets Written Out of the AI Future?

Everyday AI • Dec 30, 2025

AI Summary

The episode examines how AI systems systematically exclude marginalized groups, highlighting biases that arise from skewed training data and the perspectives of those who supervise models. It discusses concrete examples such as AI misrepresenting Black hairstyles and the dangers of unedited "AI slop," while emphasizing the need for shared responsibility, human oversight, and diverse leadership to prevent echo chambers and ensure inclusive AI. Listeners are urged to amplify underrepresented voices and rethink AI governance to create a more equitable future.

Episode Description

One of the scariest parts of AI? 😰

Who (or what) gets left out. 

Here's why: LLM outputs skew heavily toward the perspectives and content most common in their training data, and toward the people who supervise and fine-tune those models.

Which is almost always a terrible thing. 

So, who gets written out of the AI future? And how do we fix it? 

Join us to find out. 
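For a concrete feel of that skew, here's a minimal toy sketch (our own illustration, not code from the episode; all names and numbers are invented). A "model" that simply samples in proportion to training frequency reproduces whatever imbalance its data contains:

```python
# Toy illustration with invented data: a sampler that reproduces the
# frequency skew of its "training corpus." Underrepresented in,
# underrepresented out.
import random
from collections import Counter

# Hypothetical corpus: 95% of documents reflect one perspective.
corpus = ["perspective_A"] * 95 + ["perspective_B"] * 5
counts = Counter(corpus)

def generate() -> str:
    """Sample one output in proportion to training frequency."""
    return random.choices(list(counts), weights=list(counts.values()), k=1)[0]

# Across 1,000 generations, perspective_B appears ~5% of the time,
# mirroring its share of the training data, not its share of the world.
outputs = Counter(generate() for _ in range(1000))
print(outputs)
```

Real LLMs are vastly more sophisticated, but the underlying dynamic holds: perspectives that are rare in the training data stay rare in the outputs unless humans deliberately intervene.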

Newsletter: Sign up for our free daily newsletter

More on this Episode: Episode Page

Join the discussion on LinkedIn: Thoughts on this? Join the convo on LinkedIn and connect with other AI leaders.

Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup

Website: YourEverydayAI.com

Email The Show: info@youreverydayai.com

Connect with Jordan on LinkedIn

Topics Covered in This Episode:

1. Over-Reliance on AI in Daily Life

2. Marginalized Groups Excluded from AI Future

3. AI Reflecting Societal Biases and Blind Spots

4. Responsibility for AI Training Data and Bias

5. Dangers of "AI Slop" and Unedited Content

6. Biased AI Moderation and Platform Challenges

7. Importance of Human Oversight in AI Outputs

8. Avoiding AI Echo Chambers and Algorithmic Divide

9. Trust and Quality Concerns with AI Content

10. Amplifying Diverse Voices in AI Leadership

Timestamps:

00:00 "AI Reliance and Ethical Risks"

03:37 "Inclusion in AI Conversations"

06:32 "Shared Responsibility for AI Change"

11:03 "AI Bias Against Black Hairstyles"

15:02 "Growing Businesses with Generative AI"

16:21 "For Us, By Us"

20:07 "Preventing AI Echo Chambers"

25:14 "Rethinking Leadership and AI Use"

26:46 "Everyday AI Wrap-Up"

Keywords:

AI bias, large language models, marginalized voices in AI, representation in AI, diversity in AI, AI and identity, technology and power, algorithmic bias, training data bias, cultural competence in AI, AI exclusion, social media moderation algorithms, biased AI moderation, racial bias in AI, gender bias in AI, queer representation in AI, trans representation in technology, working class and AI, age bias in AI, responsible AI use, AI content creation, AI slop, human in the loop, human-centered AI, ethical AI, trust in AI, AI and creativity, AI echo chambers, personalization in AI models, AI-generated content, voice amplification in technology, AI-powered surveillance, inverse surveillance, AI leadership, tech activism, AI for social good, AI media trust, challenge in AI adoption, AI community guidelines, inclusion in technology, future of AI representation, multi-agent orchestration, responsible AI auditing, training data selection, human feedback in AI, algorithmic transparency.

Send Everyday AI and Jordan a text message. (We can't reply unless you leave contact info.)

Ready for ROI on GenAI? Go to youreverydayai.com/partner

