If AI erodes the credibility of second‑strike capabilities, the delicate balance that prevents nuclear powers from escalating conflicts could collapse, raising the risk of coercion or outright war. Understanding these risks is crucial for policymakers, technologists, and the public as AI development accelerates and reshapes strategic stability.
The episode unpacks how nuclear deterrence, anchored by a credible second‑strike capability, has long restrained great‑power competition. Deterrence works by making any nuclear attack suicidal: each side must retain survivable forces (land‑based missiles, submarine‑launched ballistic missiles, and airborne bombers) that can respond even after absorbing a first strike. This "delicate balance of terror" limits coercion even when conventional advantages are stark: Russia's arsenal checks the United States despite America's far larger economy. By keeping the prospect of unacceptable retaliation alive, mutual assured destruction continues to shape strategic calculations across the globe.
The guests explore how artificial intelligence could erode that stability. AI might enable a "splendid first strike" in several ways: by pinpointing every nuclear asset, from hardened silos to hidden submarines; by crippling command‑and‑control networks through advanced cyber‑operations; or by enhancing missile defences to the point that retaliation becomes ineffective. The most vivid scenario is submarine detection: machine‑learning algorithms could fuse sonar, magnetic‑anomaly, and satellite synthetic‑aperture‑radar data, while autonomous underwater vehicles patrol vast ocean expanses. Yet the panel stresses formidable obstacles (noisy acoustic environments, limited sensor endurance, and sophisticated counter‑measures) that make universal tracking technically daunting.
Given these uncertainties, the hosts argue that AI experts must engage directly with the nuclear community. Understanding the limits of sensor fusion, cyber‑intrusion, and missile‑defence algorithms can prevent over‑optimistic policy prescriptions that might destabilize the strategic balance. Likewise, nuclear planners need to monitor AI breakthroughs that could shift the calculus of second‑strike survivability. The episode concludes that while AI introduces new risks, the entrenched redundancy of the nuclear triad and evolving counter‑measures keep mutual assured destruction viable, at least for now. Ongoing interdisciplinary dialogue will be essential to shaping safeguards that preserve strategic stability in an AI‑driven era.
How AI interacts with nuclear deterrence may be the single most important question in geopolitics — one that may define the stakes of today's AI race. Nuclear deterrence rests on a state's capacity to respond to a nuclear attack with a devastating nuclear strike of its own. But some theorists think that sophisticated AI could eliminate this capability — for example, by locating and destroying all of an adversary's nuclear weapons simultaneously, by disabling command-and-control networks, or by enhancing missile defence systems. If they are right, the first country to acquire those capabilities could wield unprecedented coercive power.
Today’s guests — Nikita Lalwani and Sam Winter-Levy of the Carnegie Endowment for International Peace — assess how advances in AI might threaten nuclear deterrence:
Would AI be able to locate nuclear submarines hiding in a vast, opaque ocean?
Would road-mobile launchers still be able to hide in tunnels and under netting?
Would missile defence become so accurate that the United States could be protected under something like Israel’s Iron Dome?
Can we imagine an AI cybersecurity breakthrough that would allow countries to infiltrate their rivals’ nuclear command-and-control networks?
Yet even without undermining deterrence, Sam and Nikita claim that AI could make the nuclear world far more dangerous. It could spur arms races, encourage riskier postures, and force dangerously short response times. Their message is urgent: AI experts and nuclear experts need to start talking to each other now, before the technology makes any conversation moot.
Links to learn more, video, and full transcript: https://80k.info/swlnl
This episode was recorded on November 24, 2025.
Chapters:
Cold open (00:00:00)
Who are Nikita Lalwani and Sam Winter-Levy? (00:01:03)
How nuclear deterrence actually works (00:01:46)
AI vs nuclear submarines (00:10:31)
AI vs road-mobile missiles (00:22:21)
AI vs missile defence systems (00:28:38)
AI vs nuclear command, control, and communications (NC3) (00:35:20)
AI won't break deterrence, but may trigger an arms race (00:43:27)
Technological supremacy isn't political supremacy (00:52:31)
Fast AI takeoff creates dangerous "windows of vulnerability" (00:56:43)
Book and movie recommendations (01:08:53)
Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
Music: CORBIT
Coordination, transcripts, and web: Nick Stockton and Katy Moore