A Feasible Precaution Ignored: AI Targeting Algorithms and the Failure to Recognize Protected Emblems

Just Security
Apr 1, 2026

Key Takeaways

  • AI misidentifies water jugs as explosives, causing civilian deaths
  • Current TEVV tests omit protected emblem recognition
  • Human‑in‑the‑loop needed to validate algorithmic nominations
  • Congress urged to mandate disclosure of AI‑targeting incidents
  • ICRC and U.N. urged to define standardized training data for humanitarian emblems

Summary

Recent civilian deaths in Afghanistan, Lebanon, Gaza, and Iran highlight how AI‑driven targeting algorithms can misclassify harmless objects as threats. In each case, water‑filled containers were mistaken for explosives, leading to lethal strikes by U.S. drones, Israeli missiles, and Tomahawk cruise missiles. Current Testing, Evaluation, Verification, and Validation (TEVV) procedures do not require algorithms to recognize protected humanitarian emblems, leaving a legal and strategic blind spot. Experts call for immediate policy fixes, transparent reporting, and humanitarian‑focused training data to align AI targeting with international law.

Pulse Analysis

The rapid integration of artificial intelligence into battlefield targeting promises faster decision cycles, but recent tragedies reveal a dangerous trade‑off. In August 2021, a U.S. drone strike in Kabul killed an aid worker and nine members of his family after the system flagged a car being loaded with water jugs as a potential bomb. Similar errors occurred when an Israeli missile struck a family in Lebanon and when a Tomahawk missile hit an Iranian school. These incidents underscore that algorithmic models trained on legacy conflict data can mistake benign civilian activity for hostile intent, especially when visual cues resemble historic threat signatures.

A critical weakness lies in the Testing, Evaluation, Verification, and Validation (TEVV) framework, which currently imposes no explicit requirement to recognize protected humanitarian emblems such as the Red Cross, Red Crescent, or NGO logos. Training datasets are dominated by combat‑focused imagery, so models learn to treat humanitarian markings as background noise. Without qualitative criteria and human oversight, quantitative confidence scores can mislead operators into approving unlawful attacks that violate the precautionary obligations of Article 57 of Additional Protocol I. Incorporating emblem detection into TEVV, as sketched below, would force developers to confront data bias and ensure that AI outputs are interpretable and consistent with international humanitarian law.
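To make the proposal concrete, here is a minimal sketch of what a pass/fail emblem‑recognition gate in a TEVV pipeline might look like. Everything in it is an assumption for illustration: the emblem class list, the 0.99 recall threshold, and the function and type names do not come from any existing DoD or ICRC standard.

```python
# Illustrative sketch only: a hypothetical pass/fail emblem-recognition
# gate for a TEVV pipeline. The class list, threshold, and all names
# are assumptions, not an existing DoD or ICRC standard.

from dataclasses import dataclass
from typing import Callable, Iterable

# Protected-emblem classes a targeting model would need to flag.
EMBLEM_CLASSES = {"red_cross", "red_crescent", "red_crystal"}

@dataclass
class LabeledImage:
    pixels: bytes      # raw image data (placeholder for a real tensor)
    true_label: str    # ground-truth annotation from the test set

def emblem_recall(model: Callable[[bytes], str],
                  test_set: Iterable[LabeledImage]) -> float:
    """Fraction of protected-emblem images the model labels correctly."""
    hits = total = 0
    for img in test_set:
        if img.true_label in EMBLEM_CLASSES:
            total += 1
            if model(img.pixels) == img.true_label:
                hits += 1
    return hits / total if total else 0.0

def tevv_emblem_gate(model: Callable[[bytes], str],
                     test_set: Iterable[LabeledImage],
                     threshold: float = 0.99) -> bool:
    """Hard pass/fail: certification is blocked below the threshold."""
    return emblem_recall(model, test_set) >= threshold
```

The point of a hard gate, rather than a reported confidence score, is that a model unable to meet the recall bar on protected emblems never reaches a combatant commander's certification decision at all. A real evaluation would also need occluded, adversarial, and degraded‑imagery test sets.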

Policymakers have a clear path forward. The Defense Department should amend TEVV policy to mandate pass/fail emblem‑recognition tests, and combatant commanders should certify compliance before deployment. The International Committee of the Red Cross, working with the U.N., can define standardized training data for humanitarian symbols. Congress can strengthen oversight by requiring annual disclosure of AI‑targeting incidents and by earmarking funds for protective‑AI research. These steps would preserve the strategic advantage of AI‑enhanced targeting while safeguarding civilians and upholding the rule of law.
