Propaganda in 21st-Century Wars: WITNESS Associate Director Mahsa Alimardani Speaks to France 24

FRANCE 24 English
Mar 14, 2026

Why It Matters

The rise of AI‑driven disinformation threatens the veracity of human‑rights evidence, forcing journalists and NGOs to adopt advanced forensic tools to preserve accountability in conflict zones.

Key Takeaways

  • AI-generated deepfakes erode trust in authentic conflict footage
  • WITNESS’s rapid‑response unit delivers at least three independent forensic analyses of contested media
  • “Liars’ dividend” lets regimes dismiss genuine evidence as AI
  • Iran’s propaganda exploits AI tools to amplify disinformation campaigns
  • Verified cases like “Tankman” and “Jupiter” illustrate manipulation tactics

Summary

The interview with Mahsa Alimardani, associate director of WITNESS’s Technology Threats program, centers on the escalating propaganda war in Iran, where AI‑generated deepfakes and manipulated video are blurring the line between fact and fiction. Alimardani explains how WITNESS, a global human‑rights NGO, has built a rapid‑response force that mobilizes forensic experts to deliver at least three independent analyses of contested audiovisual material, helping journalists and civil‑society actors cut through the “liars’ dividend” – the tactic of dismissing real evidence as AI‑fabricated.

Key insights include the dramatic surge in AI‑produced content since Google’s Gemini 3 launch in June 2025, the emergence of “AI slop” as a catch‑all label for low‑quality generative outputs, and the complex information ecosystem in Iran, where state censorship, diaspora opposition, and foreign actors all weaponize digital media. WITNESS’s data show that roughly one‑third of the cases it receives are authentic content wrongly doubted, underscoring how the mere possibility of deepfakes undermines credibility.

Alimardani cites two emblematic cases: the “Tankman” image, a low‑resolution protest video enhanced with AI editing that the regime quickly branded as fabricated, and the “Jupiter” Twitter account, a long‑standing fake persona that spread a false story about a judge’s assassination to derail protest momentum. Both examples illustrate how sophisticated AI tools and coordinated bot networks can manipulate narratives and sow confusion.

The implications are profound: human‑rights documentation now requires layered verification, media outlets must invest in forensic capacity, and policymakers need to address the legal and ethical gaps that allow regimes to weaponize AI denial. Without robust countermeasures, the credibility of genuine evidence—and the ability to hold perpetrators accountable—remains at risk.

Original Description

France 24’s Gavin Lee speaks with Mahsa Alimardani, associate director at WITNESS, about the challenges that militaries, and societies as a whole, face in an era of AI-generated content and misinformation spreading through social media.
Read more about this story in our article: https://f24.my/Bnl9.y
🔔 Subscribe to France 24 now: https://f24.my/YTen
🔴 LIVE - Watch FRANCE 24 English 24/7 here: https://f24.my/YTliveEN
🌍 Read the latest International News and Top Stories: https://www.france24.com/en/
Like us on Facebook: https://f24.my/FBen
Follow us on X: https://f24.my/Xen
Browse the news in pictures on Instagram: https://f24.my/IGen
Discover our TikTok videos: https://f24.my/TKen
Get the latest top stories on Telegram: https://f24.my/TGen
