
How AI Could Transform the Nature of War | Paul Scharre, Author of 'Army of None'

80,000 Hours Podcast • December 17, 2025 • 2h 45m

Key Takeaways

  • AI may cause a "battlefield singularity" that outpaces human control
  • Swarming autonomous drones could coordinate strikes on thousands of targets simultaneously
  • A speed arms race pressures militaries to remove humans from the loop
  • Autonomous weapons risk escalation akin to financial market flash crashes
  • AI in nuclear command must ensure reliability while preventing unauthorized launches

Pulse Analysis

In this episode, former Army Ranger and CNAS director Paul Scharre explains how artificial intelligence is already reshaping modern battlefields and why a "battlefield singularity" could soon outpace human decision‑making. He traces the evolution from early automated missile defenses to today’s incremental autonomy in targeting, emphasizing that AI’s cognitive role—processing data, selecting targets, and executing strikes—will grow over decades. Scharre’s insights frame AI not just as a new weapon, but as a systemic shift that could redefine the tempo and scale of war, demanding fresh strategic thinking.

Scharre highlights swarming drones as a concrete illustration of future combat. Thousands of autonomous aerial, sea, and land platforms could network, self‑heal their communications, and adapt in real time, turning a chaotic swarm into a coordinated strike force. He contrasts the current Ukrainian conflict, where drones are largely remote‑controlled, with a potential future where swarms act cooperatively without human pilots. The discussion draws a parallel to high‑frequency trading: rapid, algorithm‑driven actions can trigger flash crashes, and a similar "flash war" could escalate beyond human oversight. This speed arms race creates pressure for militaries to remove humans from the loop, even as policymakers voice caution.

The conversation turns to nuclear command and control, where AI promises both heightened reliability and new safety challenges. Scharre argues that AI could sharpen the "always‑never" dilemma: ensuring that authorized launches occur while preventing accidental or rogue use. Yet he warns that delegating life‑or‑death decisions to machines risks misinterpretation of ambiguous orders and could make crises more brittle. The episode underscores the urgent need for governance frameworks that preserve human judgment while harnessing AI's performance, suggesting that the balance among speed, autonomy, and control will shape the next era of warfare.

Episode Description

In 1983, Stanislav Petrov, a Soviet lieutenant colonel, sat in a bunker watching a red screen flash “MISSILE LAUNCH.” The system told him the United States had fired five nuclear weapons at the Soviet Union. Protocol demanded he report it to superiors, which would almost certainly trigger a retaliatory strike.

Petrov didn’t do it. He had a “funny feeling” in his gut. He reasoned that if the US were actually attacking, they wouldn’t just fire five missiles — they’d empty the silos. He bet the fate of the world on a hunch that the machine was broken. He was right.

Paul Scharre, the former Army Ranger who led the Pentagon team that wrote the US military’s first policy on autonomous weapons, asks a terrifying question: What would an AI have done in Petrov’s shoes? Would an AI system have been flexible and wise enough to make the same judgement? Or would it have launched a counterattack?

Paul joins host Luisa Rodriguez to explain why we are hurtling toward a “battlefield singularity” — a tipping point where AI increasingly replaces humans in much of the military, changing the way war is fought with speed and complexity that outpaces humans’ ability to keep up.

Links to learn more, video, and full transcript: https://80k.info/ps

Militaries don’t necessarily want to take humans out of the loop. But Paul argues that the competitive pressure of warfare creates a “use it or lose it” dynamic. As former Deputy Secretary of Defense Bob Work put it: “If our competitors go to Terminators, and their decisions are bad, but they’re faster, how would we respond?”

Once that line is crossed, Paul warns we might enter an era of “flash wars” — conflicts that spiral out of control as quickly and inexplicably as a flash crash in the stock market, with no way for humans to call a timeout.

In this episode, Paul and Luisa dissect what this future looks like:

  • Swarming warfare: Why the future isn't just better drones, but thousands of cheap, autonomous agents coordinating like a hive mind to overwhelm defences.

  • The Gatling gun cautionary tale: The inventor of the Gatling gun thought automating fire would reduce the number of soldiers needed, saving lives. Instead, it made war significantly deadlier. Paul argues AI automation could do the same, increasing lethality rather than creating "bloodless" robot wars.

  • The cyber frontier: While robots have physical limits, Paul argues cyberwarfare is already at the point where AI can act faster than human defenders, leading to intelligent malware that evolves and adapts like a biological virus.

  • The US-China "adoption race": Paul rejects the idea that the US and China are in a spending arms race (AI is barely 1% of the DoD budget). Instead, it's a race of organisational adoption — one where the US has massive advantages in talent and chips, but struggles with bureaucratic inertia that might not be a problem for an autocratic country.

Paul also shares a personal story from his time as a sniper in Afghanistan — watching a potential target through his scope — that fundamentally shaped his view on why human judgement, with all its flaws, is the only thing keeping war from losing its humanity entirely.

This episode was recorded on October 23-24, 2025.

Chapters:

Cold open (00:00:00)

Who’s Paul Scharre? (00:00:46)

How will AI and automation transform the nature of war? (00:01:17)

Why would militaries take humans out of the loop? (00:12:22)

AI in nuclear command, control, and communications (00:18:50)

Nuclear stability and deterrence (00:36:10)

What to expect over the next few decades (00:46:21)

Financial and human costs of future “hyperwar” scenarios (00:50:42)

AI warfare and the balance of power (01:06:37)

Barriers to getting to automated war (01:11:08)

Failure modes of autonomous weapons systems (01:16:28)

Could autonomous weapons systems actually make us safer? (01:29:36)

Is Paul overall optimistic or pessimistic about increasing automation in the military? (01:35:23)

Paul’s takes on AGI’s transformative potential and whether natsec people buy it (01:37:42)

Cyberwarfare (01:46:55)

US-China balance of power and surveillance with AI (02:02:49)

Policy and governance that could make us safer (02:29:11)

How Paul’s experience in the Army informed his feelings on military automation (02:41:09)

Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour

Music: CORBIT

Coordination, transcripts, and web: Katy Moore
