
Will the Next U.N. Counterterrorism Strategy Hold States Accountable For Their Use of AI?
Key Takeaways
- AI used in U.S. and Israeli military targeting operations
- U.N. travel‑surveillance program may add AI capabilities soon
- Current AI exemptions risk undermining human‑rights protections
- 9th Strategy could mandate due‑diligence for AI use
- Experts call for transparent, accountable AI governance
Summary
The U.N. Secretary‑General has warned that terrorist groups are increasingly exploiting artificial intelligence, even as states deploy AI in counter‑terrorism operations without robust human‑rights safeguards. Recent examples include the U.S. and Israeli militaries using AI to select bombing targets, and the U.N.'s own travel‑surveillance program planning AI integration. The upcoming 9th Global Counter‑Terrorism Strategy, due in June, offers a chance to embed accountability and due‑diligence requirements for AI use. Experts argue that without clear safeguards, AI‑driven counter‑terrorism could exacerbate rights violations worldwide.
Pulse Analysis
Artificial intelligence is rapidly moving from research labs into counter‑terrorism operations. The United States and Israel have already integrated AI algorithms to identify high‑value targets, a practice that promises operational efficiency but raises alarms about civilian harm and opaque decision‑making. Parallel to these state‑driven initiatives, the United Nations is grappling with how to regulate AI's role in security, especially after the Secretary‑General's warning that terrorist groups are co‑opting the same technologies for propaganda and recruitment. The convergence of these trends exposes a critical policy gap: the absence of clear, enforceable standards that prevent AI‑enabled abuses while preserving legitimate security objectives.
The forthcoming 9th Global Counter‑Terrorism Strategy presents a pivotal moment for the U.N. to codify accountability mechanisms. Draft language could require member states to conduct human‑rights impact assessments before deploying AI tools, mandate explainability for automated decisions, and establish independent audit pathways. Such provisions would align the strategy with existing U.N. resolutions on digital rights and the EU’s AI Act, while addressing loopholes that currently exempt national‑security applications. By embedding due‑diligence obligations, the strategy would signal that AI‑driven counter‑terrorism cannot sidestep international humanitarian and human‑rights law.
For policymakers, technology firms, and civil‑society advocates, the stakes are clear. Robust safeguards will not only protect vulnerable populations from arbitrary surveillance and lethal targeting but also preserve the legitimacy of counter‑terrorism efforts in the eyes of the international community. Failure to act could entrench a precedent where security imperatives override fundamental freedoms, eroding trust in both state institutions and emerging AI governance frameworks. The next U.N. strategy, therefore, must move beyond generic calls for AI adoption and instead enforce concrete, rights‑based constraints that ensure transparency, accountability, and proportionality in every AI‑enabled operation.