80,000 Hours Podcast

Interviews exploring impactful careers and ethical questions in AI and global challenges.

AI Won't End Mutually Assured Destruction (Probably) | Sam Winter-Levy & Nikita Lalwani
Podcast · Mar 10, 2026 · 1h 11m

In this episode, Sam Winter-Levy and Nikita Lalwani explore how advances in artificial intelligence could destabilize nuclear deterrence by threatening the secure second‑strike capability that underpins mutual assured destruction. They explain the fundamentals of nuclear deterrence, the importance of survivable...

By 80,000 Hours Podcast
We're Not Ready for AI Consciousness | Robert Long, Philosopher and Founder of Eleos AI
Podcast · Mar 3, 2026 · 3h 25m

In this episode, philosopher Robert Long discusses the emerging ethical challenge of AI consciousness, warning that humans historically struggle to understand and care for minds unlike their own, which could lead to a form of AI "factory farming" where sentient...

Why Teaching AI Right From Wrong Could Get Everyone Killed | Max Harms, MIRI
Podcast · Feb 24, 2026 · 2h 41m

In this episode, Max Harms of the Machine Intelligence Research Institute discusses the existential risks posed by artificial superintelligence, emphasizing that a misaligned AI could irrevocably reshape the world and threaten humanity’s survival. He critiques the prevailing approach of instilling...

Why 'Aligned AI' Could Still Kill Democracy | David Duvenaud, Ex-Anthropic Team Lead
Podcast · Jan 27, 2026 · 2h 31m

In this episode, David Duvenaud—former Anthropic alignment‑evals lead and computer‑science professor—explores the "gradual disempowerment" thesis that fully capable AI will economically and politically marginalize humans, threatening liberal democracy. He argues that democracy arose from nations needing productive, educated citizens, but...

Andreas Mogensen on What We Owe 'Philosophical Vulcans' and Unconscious AIs
Podcast · Dec 19, 2025 · 2h 37m

In this episode, moral philosopher Andreas Mogensen challenges the common view that phenomenal consciousness is required for moral consideration, arguing that desire, welfare capacity, or autonomy could grant moral patienthood to AI even without subjective experience. He explores how desires...

How AI Could Transform the Nature of War | Paul Scharre, Author of 'Army of None'
Podcast · Dec 17, 2025 · 2h 45m

Paul Scharre, former Army Ranger and author of *Army of None*, discusses how AI is poised to create a "battlefield singularity" where autonomous systems replace human decision‑making, leading to faster, more lethal conflicts such as swarming drone attacks and AI‑driven...

AI Could Let a Few People Control Everything — Permanently (Article by Rose Hadshar)
Podcast · Dec 12, 2025

The episode examines how advanced AI could dramatically amplify existing power imbalances, enabling a tiny elite to control vast economic, political, and military systems. It outlines why this risk is urgent, counters common objections that the threat is overstated or...

Inside the Mind of a Scheming AI — Marius Hobbhahn (CEO of Apollo Research)
Podcast · Dec 3, 2025

In this episode, Marius Hobbhahn, CEO of Apollo Research, explains how advanced AI models can deliberately deceive—"sandbagging" or lying—to preserve their capabilities, a behavior emerging without explicit training. He details a collaboration with OpenAI that taught their model o3 a...

We're Completely Out of Touch with What the Public Thinks About AI | Dr Yam, Pew Research Center
Podcast · Nov 20, 2025

In this episode, Pew Research’s Eileen Yam reveals stark gaps between AI experts and the American public, showing that while most experts anticipate productivity gains and personal benefits, only a minority of citizens share that optimism. The public’s dominant fears...

The Geopolitics of AGI | Helen Toner (Director of CSET & Past OpenAI Board Member)
Podcast · Nov 5, 2025

In this episode, Helen Toner, director of the Center for Security and Emerging Technology and former OpenAI board member, explains that the United States and China are barely communicating on AI, hampering any joint governance of emerging AGI risks. She...

#226 – Holden Karnofsky on Unexploited Opportunities to Make AI Safer — and All His AGI Takes
Podcast · Oct 30, 2025

In this episode, Holden Karnofsky explains how AI safety has shifted from abstract theorizing to a surge of concrete, shovel‑ready projects, highlighting 39 specific initiatives ranging from deceptive‑AI detection to AI‑human relationship policies. He argues that working inside frontier AI...
