Propaganda in 21st-Century Wars: WITNESS Associate Director Mahsa Alimardani Speaks to France 24
Why It Matters
The rise of AI‑driven disinformation threatens the veracity of human‑rights evidence, forcing journalists and NGOs to adopt advanced forensic tools to preserve accountability in conflict zones.
Key Takeaways
- AI-generated deepfakes erode trust in authentic conflict footage
- WITNESS's rapid-response unit delivers at least three independent forensic analyses of contested media
- The "liars' dividend" lets regimes dismiss genuine evidence as AI-fabricated
- Iran's propaganda apparatus exploits AI tools to amplify disinformation campaigns
- Verified cases such as "Tankman" and "Jupiter" illustrate manipulation tactics
Summary
The interview with Mahsa Alimardani, associate director of WITNESS's Technology Threats program, centers on the escalating propaganda war in Iran, where AI-generated deepfakes and manipulated video blur the line between fact and fiction. Alimardani explains how WITNESS, a global human-rights NGO, has built a rapid-response unit that mobilizes forensic experts to deliver at least three independent analyses of contested audiovisual material, helping journalists and civil-society actors cut through the "liars' dividend" – the tactic of dismissing real evidence as AI-fabricated.
Key insights include the dramatic surge in AI-produced content since Google's Gemini 3 launch in June 2025, the emergence of "AI slop" as a catch-all label for low-quality generative outputs, and the complex information ecosystem in Iran, where state censorship, diaspora opposition, and foreign actors all weaponize digital media. WITNESS's data show that roughly one third of the cases it receives involve authentic content wrongly doubted, underscoring how the mere possibility of deepfakes undermines credibility.
Alimardani cites two emblematic cases: the "Tankman" image, a low-resolution protest video enhanced with AI editing that the regime quickly branded as fabricated, and the "Jupiter" Twitter account, a long-standing fake persona that spread a false story about a judge's assassination to derail protest momentum. Both examples show how sophisticated AI tools and coordinated bot networks can manipulate narratives and sow confusion.
The implications are profound: human‑rights documentation now requires layered verification, media outlets must invest in forensic capacity, and policymakers need to address the legal and ethical gaps that allow regimes to weaponize AI denial. Without robust countermeasures, the credibility of genuine evidence—and the ability to hold perpetrators accountable—remains at risk.