Why It Matters
Understanding how AI is weaponized reveals a hidden layer of modern warfare that threatens to erode international humanitarian law and civilian protection. For policymakers, activists, and the public, recognizing the role of tech firms and governments in this "technification of the kill chain" is crucial to demand oversight, regulation, and accountability before these opaque systems become standard in conflict zones.
Key Takeaways
- AI chatbots generate target lists for US war operations.
- Tech firms profit from militarizing large language models.
- Automated kill chain obscures accountability and legal oversight.
- Racialized data biases amplify civilian casualties in conflicts.
- Human operators become passive approvers of algorithmic decisions.
Pulse Analysis
In this opening episode of the "Computer Says Kill" series, host Alex Dunn sits down with Matt Mahmoudi, an assistant professor at Cambridge and Amnesty International advisor, to expose how U.S. defense agencies are already deploying large language models—such as Anthropic's Claude—to compile target lists for operations in Iran, Venezuela, and Gaza. The conversation frames AI‑enabled warfare as a watershed moment where algorithmic decision‑making bypasses traditional human judgment, turning probabilistic outputs into lethal directives. By highlighting concrete examples of AI‑driven kill‑chain integration, the episode underscores the urgency of understanding the technology’s role in modern conflict.
Mahmoudi explains that tech firms are not neutral providers; they monetize the militarization of generative AI, offering contracts that promise efficiency while simultaneously distancing themselves from accountability. The episode details how these models ingest biased, often synthetic data—social‑media posts, iris scans, refugee aid records—producing skewed probability scores that disproportionately target racialized populations. Companies like Anthropic navigate a dual narrative, presenting AI as a defensive tool yet enabling autonomous weapon systems without transparent oversight. This convergence of profit motives, data bias, and normalized rhetoric compresses the traditional kill‑chain, eroding legal safeguards and making it harder to trace responsibility.
The discussion concludes with a call for robust policy interventions. Mahmoudi stresses that human operators must retain meaningful control, and that international humanitarian law requires clear attribution of decisions throughout the AI‑augmented targeting process. Transparency, independent audits, and strict export controls on AI weaponization are presented as essential to prevent a future where algorithmic outputs become the default justification for lethal force. For business leaders and policymakers, the episode offers a stark reminder: without proactive regulation, the integration of large language models into warfare threatens both ethical standards and global security.
Episode Description
How does a country wage war using LLMs? Oh and WHY?
More like this: AI in Gaza: Live from Mexico City
In Computer Says Kill Ep #1 we are joined by Matt Mahmoudi. The US Department of War is leaning heavily on AI technologies to attack Iran. Matt explains how the use of LLMs to identify ‘legitimate targets’ is collapsing the chain of decisions that lead to lethal force. We discuss what this means at a time when fascist governments are eager to demonstrate their strength on the global stage. From Israel field-testing AI weapons in Gaza, to the US using AI tools in horrifying new ways to perpetuate ever worse war crimes, we start to connect the dots between the technology, the people powering it, and the human costs.
Further reading & resources:
Automated Apartheid — Amnesty International 2023
How Israel uses facial-recognition systems in Gaza and beyond — Matt’s interview in The Guardian about the report
Crimes of Dispassion: Autonomous Weapons and the Moral Challenge of Systematic Killing — Elke Schwartz, 2023
Sam Altman May Control Our Future—Can He Be Trusted? — By Ronan Farrow and Andrew Marantz, The New York Times, April 2026
“Big Brother” in Jerusalem’s Old City — Who Profits Research Centre
What is Israel's secretive cyber warfare unit 8200? — Reuters 2024
Genocide as Colonial Erasure — Francesca Albanese, October 2024
Buy Resisting Borders and Technologies of Violence, edited by Mizue Aizeki, Matt Mahmoudi, and Coline Schupfer
Buy The Palestine Laboratory: How Israel Exports the Technology of Occupation Around the World by Antony Loewenstein
Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!
Computer Says Maybe is produced by Georgia Iacovou, Kushal Dev, Marion Wellington, Sarah Myles, Van Newman, and Zoe Trout
