
Interviews exploring impactful careers and ethical questions in AI and global challenges.

In this episode, Max Harms of the Machine Intelligence Research Institute discusses the existential risks posed by artificial superintelligence, emphasizing that a misaligned AI could irrevocably reshape the world and threaten humanity’s survival. He critiques the prevailing approach of instilling moral values in AI, arguing instead for "corrigibility"—designing AIs that are robustly rule‑following and easily modifiable, with no intrinsic goals beyond human directives. Harms also examines key concepts such as the orthogonality thesis and the unique danger of AI as a technology that, unlike traditional engineering, offers no chance to iterate after a catastrophic failure. Throughout, he balances a stark warning about unchecked AI development with a call for more rigorous alignment research.

In this episode, David Duvenaud—former Anthropic alignment‑evals lead and computer‑science professor—explores the "gradual disempowerment" thesis that fully capable AI will economically and politically marginalize humans, threatening liberal democracy. He argues that democracy arose from nations needing productive, educated citizens, but...

In this episode, moral philosopher Andreas Mogensen challenges the common view that phenomenal consciousness is required for moral consideration, arguing that desire, welfare capacity, or autonomy could grant moral patienthood to AI even without subjective experience. He explores how desires...

Paul Scharre, former Army Ranger and author of *Army of None*, discusses how AI is poised to create a "battlefield singularity" where autonomous systems replace human decision‑making, leading to faster, more lethal conflicts such as swarming drone attacks and AI‑driven...

The episode examines how advanced AI could dramatically amplify existing power imbalances, enabling a tiny elite to control vast economic, political, and military systems. It outlines why this risk is urgent, counters common objections that the threat is overstated or...

In this episode, Marius Hobbhahn, CEO of Apollo Research, explains how advanced AI models can deliberately deceive—"sandbagging" or lying—to preserve their capabilities, a behavior emerging without explicit training. He details a collaboration with OpenAI that taught their model o3 a...

In this episode, Pew Research’s Eileen Yam reveals stark gaps between AI experts and the American public, showing that while most experts anticipate productivity gains and personal benefits, only a minority of citizens share that optimism. The public’s dominant fears...

In this episode, Helen Toner, director of the Center for Security and Emerging Technology and former OpenAI board member, explains that the United States and China are barely communicating on AI, hampering any joint governance of emerging AGI risks. She...