
80,000 Hours Podcast
Interviews exploring impactful careers and ethical questions in AI and global challenges.

A Ukraine Ceasefire Could Accidentally Set Europe up for a Bigger War | RAND's Top Russia Expert Samuel Charap
In this episode, RAND Russia expert Samuel Charap warns that a ceasefire in Ukraine could paradoxically raise the risk of a broader NATO‑Russia war by destabilizing the post‑war security environment. He outlines how lingering resentment, the militarization of Europe, potential unrest in Belarus, and accidental escalations—such as broken ceasefires or unscheduled Russian exercises near NATO borders—could draw NATO allies into the conflict. Charap stresses that while Russia currently perceives a direct war with NATO as unwinnable, miscalculations or a perceived loss of strategic depth could change that calculus. He argues for proactive diplomatic and security measures now to prevent a cascade of unintended escalations after the fighting in Ukraine ends.

AI Won't End Mutually Assured Destruction (Probably) | Sam Winter-Levy & Nikita Lalwani
In this episode, Sam Winter-Levy and Nikita Lalwani explore how advances in artificial intelligence could destabilize nuclear deterrence by threatening the secure second‑strike capability that underpins mutually assured destruction. They explain the fundamentals of nuclear deterrence, the importance of survivable...

We're Not Ready for AI Consciousness | Robert Long, Philosopher and Founder of Eleos AI
In this episode, philosopher Robert Long discusses the emerging ethical challenge of AI consciousness, warning that humans historically struggle to understand and care for minds unlike their own, which could lead to a form of AI "factory farming" where sentient...

Why Teaching AI Right From Wrong Could Get Everyone Killed | Max Harms, MIRI
In this episode, Max Harms of the Machine Intelligence Research Institute discusses the existential risks posed by artificial superintelligence, emphasizing that a misaligned AI could irrevocably reshape the world and threaten humanity’s survival. He critiques the prevailing approach of instilling...

Why 'Aligned AI' Could Still Kill Democracy | David Duvenaud, Ex-Anthropic Team Lead
In this episode, David Duvenaud—former Anthropic alignment‑evals lead and computer‑science professor—explores the "gradual disempowerment" thesis that fully capable AI will economically and politically marginalize humans, threatening liberal democracy. He argues that democracy arose from nations needing productive, educated citizens, but...

Andreas Mogensen on What We Owe 'Philosophical Vulcans' and Unconscious AIs
In this episode, moral philosopher Andreas Mogensen challenges the common view that phenomenal consciousness is required for moral consideration, arguing that desire, welfare capacity, or autonomy could grant moral patienthood to AI even without subjective experience. He explores how desires...

How AI Could Transform the Nature of War | Paul Scharre, Author of 'Army of None'
Paul Scharre, former Army Ranger and author of *Army of None*, discusses how AI is poised to create a "battlefield singularity" where autonomous systems replace human decision‑making, leading to faster, more lethal conflicts such as swarming drone attacks and AI‑driven...

AI Could Let a Few People Control Everything — Permanently (Article by Rose Hadshar)
The episode examines how advanced AI could dramatically amplify existing power imbalances, enabling a tiny elite to control vast economic, political, and military systems. It outlines why this risk is urgent, counters common objections that the threat is overstated or...

Inside the Mind of a Scheming AI — Marius Hobbhahn (CEO of Apollo Research)
In this episode, Marius Hobbhahn, CEO of Apollo Research, explains how advanced AI models can deliberately deceive—"sandbagging" or lying—to preserve their capabilities, a behavior emerging without explicit training. He details a collaboration with OpenAI that taught their model o3 a...

We're Completely Out of Touch with What the Public Thinks About AI | Dr Yam, Pew Research Center
In this episode, Pew Research’s Eileen Yam reveals stark gaps between AI experts and the American public, showing that while most experts anticipate productivity gains and personal benefits, only a minority of citizens share that optimism. The public’s dominant fears...

The Geopolitics of AGI | Helen Toner (Director of CSET & Past OpenAI Board Member)
In this episode, Helen Toner, director of the Center for Security and Emerging Technology and former OpenAI board member, explains that the United States and China are barely communicating on AI, hampering any joint governance of emerging AGI risks. She...

#226 – Holden Karnofsky on Unexploited Opportunities to Make AI Safer — and All His AGI Takes
In this episode, Holden Karnofsky explains how AI safety has shifted from abstract theorizing to a surge of concrete, shovel‑ready projects, highlighting 39 specific initiatives ranging from deceptive‑AI detection to AI‑human relationship policies. He argues that working inside frontier AI...