Poisoning, Bias, or Randomness? How and Why AI (Mis)represents Russia’s War in Ukraine

Oxford Internet Institute (OII)
Mar 12, 2026

Why It Matters

Because AI-generated misinformation can reshape perceptions of an ongoing war, it threatens democratic discourse, policy formulation, and the long‑term historical memory of the conflict.

Key Takeaways

  • Generative AI can fabricate misleading narratives about Russia's war in Ukraine.
  • Model lifecycle, stochasticity, and interpretation amplify misrepresentation risks.
  • Front‑end AI interfaces obscure data provenance while encouraging user trust.
  • Existing audit tools are insufficient for tracking AI‑driven disinformation.
  • Misrepresentation threatens historical memory and policy responses worldwide.

Summary

The OII seminar titled “Poisoning, bias, or randomness? How and why AI (mis)represents Russia’s war in Ukraine” brought together Ukrainian scholar Dr Mykola Makhortykh and data engineer Maryna Sydorova to examine how generative AI systems shape public narratives of the conflict.

The speakers outlined a three‑act framework—model lifecycle, stochasticity, and interpretation—showing how training data, random sampling, and user‑driven prompts can introduce systematic bias or outright hallucinations. They emphasized that front‑end applications (ChatGPT, image generators) expose users to outputs whose provenance is opaque, allowing both inadvertent bias and deliberate poisoning to spread.

Citing the EU AI Act definition, they described AI as an autonomous, adaptive machine‑based system. A striking example was the rapid creation of fabricated images and text echoing Kremlin disinformation, illustrating how AI’s persuasive, anthropomorphic style can amplify false narratives about battlefield events.

The discussion concluded that current audit frameworks are inadequate, urging the development of transparent, cross‑platform tools to detect and mitigate AI‑driven misrepresentation. Without such safeguards, public opinion, diplomatic decision‑making, and the historical record of the Ukraine war risk being distorted.

Original Description

"Poisoning, bias, or randomness? How and why AI (mis)represents Russia’s war in Ukraine" with Dr Mykola Makhortykh and Maryna Sydorova.
How do advances in AI technologies change the representation of modern wars, particularly Russia's war in Ukraine? To what degree are such representations prone to poisoning, bias, or stochasticity, and what methods can we use to study these risks? And what can be the implications of the different forms of AI (mis)representation for how wars are perceived today and will be perceived in the future? To address these questions, we will discuss results from a series of studies in which we explored how text- and image-generative AI applications represent different aspects of Russia's war in Ukraine.
Speaker biographies
Mykola Makhortykh is an Alfred Landecker lecturer at the Institute of Communication and Media Science, where he studies politics- and history-centred information behaviour in online environments and how it is affected by algorithm- and AI-driven systems. His other research interests include trauma and memory studies, armed conflict reporting, disinformation and computational propaganda research, cybersecurity and critical security studies, and bias in information retrieval systems.
Maryna Sydorova is a data engineer and scientific programmer specializing in the study of AI systems and the development of large-scale research infrastructures. At the University of Bern and the University of Fribourg, she leads the development of cross-platform AI audit frameworks designed to study how complex systems shape information exposure. Her background combines AI, deep learning, and cloud computing, and her current work focuses on developing research techniques for investigating the performance of search engines, generative AI models, and applications. Her research interests include the impact of AI on disinformation production and dissemination.
