Why Safer Futures Are Still Possible & What You Can Do to Help with Tristan Harris | TGS 214

The Great Simplification (Nate Hagens)
Mar 25, 2026

Why It Matters

Understanding and mitigating AI’s systemic risks is crucial for protecting market stability, democratic institutions, and societal well‑being, making proactive governance a business imperative.

Key Takeaways

  • AI risks span misuse, surveillance, economic disruption, and power concentration.
  • Clear-eyed understanding prevents paralysis and guides proactive safety measures.
  • Harris urges establishing red lines to curb uncontrolled AI development.
  • Collective action from institutions, public, and tech firms is essential.
  • Individual steps can steer toward humane, accountable AI ecosystems.

Summary

In this TGS episode, host Nate Hagens sits down with Tristan Harris, co‑founder of the Center for Humane Technology, to discuss why a safer AI future remains achievable and what concrete actions individuals and institutions can take. Harris, known for his work on the social‑media crisis and the Netflix documentary “The Social Dilemma,” is promoting a new film, “The AI Doc: Or How I Became an Apocaloptimist,” that frames the next wave of technology risk.

Harris outlines a spectrum of AI hazards—misuse for disinformation or child‑exploitation, pervasive surveillance enabled by real‑time image and audio analysis, massive economic disruption as a handful of firms capture the bulk of AI‑generated labor, and the emergence of autonomous agents that could act beyond human control. He stresses that feeling overwhelmed leads to shutdown, and the antidote is a clear‑eyed assessment that lets society choose the direction it wants to steer.

He likens the recommender systems behind social media feeds to a “baby AI” that has already reshaped public discourse, noting, “If a baby AI could wreck democracy, imagine a fully‑scaled system.” Harris recounts receiving an early warning call from insiders at an AI lab before GPT‑4’s launch, which prompted the Center’s “AI Dilemma” briefings to policymakers in Washington, New York, and San Francisco. The conversation also references a tongue‑in‑cheek remark that it would take a trillion‑dollar lawsuit to resolve social‑media harms, underscoring the urgency of real regulatory action.

The discussion signals that businesses, regulators, and citizens must collaborate on red‑line policies, transparency standards, and funding for humane‑technology research. By translating abstract risks into actionable steps—such as demanding auditability, limiting concentration of AI compute, and supporting public‑interest AI projects—stakeholders can help prevent an unchecked arms race and preserve economic and democratic stability.

Original Description

(Conversation recorded on March 5th, 2026)
The conversation around artificial intelligence has been captured by two competing narratives – techno-abundance or civilizational collapse – both of which sidestep the question of who this technology is actually being built for. But if we consider that we are setting the initial conditions for everything that follows, we might realize that we are in a pivotal moment for AI development which demands a deeper cultural conversation about the type of future we actually want. What would it look like to design AI for the benefit of the 99%, and what are the necessary steps to make that possible?
In this episode, Nate welcomes back Tristan Harris, co-founder of the Center for Humane Technology, for a wide-ranging conversation on AI futures and safety. Tristan explains how his organization pivoted from social media to AI risks after insiders at AI labs warned him in early 2023 that a dangerous step-change in capabilities was coming – and with it, risks that are orders of magnitude larger. Tristan outlines the economic and psychological consequences already unfolding under AI’s race-to-the-bottom engagement incentives, as well as the major threat categories we face: including massive wealth concentration, government surveillance, and the very real risk that humanity loses meaningful control of AI systems in critical domains. He also shares about his involvement in the new documentary, The AI Doc: Or How I Became an Apocaloptimist, and ultimately highlights the highest-leverage areas in the movement toward safer AI development.
If we start seeing AI risks clearly without surrendering to despair, could we regain the power to steer toward safer technological futures? What would it mean to design AI around human wellbeing rather than engagement, attention, and profit? And can we cultivate the kind of shared cultural reckoning that makes collective action possible – before it’s too late?
About Tristan Harris:
Tristan is the Co-Founder of the Center for Humane Technology (CHT), a nonprofit organization whose mission is to align technology with humanity’s best interests. He is also the co-host of the top-rated technology podcast Your Undivided Attention, where he, Aza Raskin, and Daniel Barcay explore the unprecedented power of emerging technologies and how they fit into both our lives and a humane future. Previously, Tristan was a Design Ethicist at Google, and today he studies how major technology platforms wield dangerous power over our ability to make sense of the world and leads the call for systemic change.
In 2020, Tristan was featured in the two-time Emmy-winning Netflix documentary The Social Dilemma. The film unveiled how social media is dangerously reprogramming our brains and human civilization. It reached over 100 million people in 190 countries across 30 languages. He regularly briefs heads of state, technology CEOs, and US Congress members, in addition to mobilizing millions of people around the world through mainstream media.
Most recently, Tristan was featured in the 2026 documentary, The AI Doc: Or How I Became an Apocaloptimist, which is available in theaters on March 27th. Learn more about Tristan’s work and get involved at the Center for Humane Technology.
Join The Human Movement Now at HUMAN.MOV
Show Notes and More:
Want to learn the broad overview of The Great Simplification in 30 minutes? Watch our Animated Movie:

Support The Institute for the Study of Energy and Our Future:
Join our Substack newsletter:
Join our Hylo channel and connect with other listeners:

00:00 - Introduction
03:06 - Pivot from Social Media to AI
11:04 - AI Harms Breakdown
24:26 - Species Rite of Passage
28:33 - Denial And Agency
33:32 - Which AI Is Safest?
36:50 - AI Winter and Bailouts
41:50 - How to Have Good AI Hygiene
46:24 - AI Attachment Dangers
53:25 - Meaning and Burnout
59:06 - Policy Wins
01:04:44 - Why Companies Keep Racing
01:11:00 - Can We Control AI?
01:14:16 - AI Doc Call To Action
01:21:16 - Global AI Treaties
01:25:08 - The Intelligence Curse
01:30:01 - Technological Adolescence
01:38:48 - Human Movement Blueprint
01:49:31 - Closing Credits
