
Poisoning, Bias, or Randomness? How and Why AI (Mis)represents Russia’s War in Ukraine
The OII seminar "Poisoning, bias, or randomness? How and why AI (mis)represents Russia's war in Ukraine" brought together Ukrainian scholar Dr. Molort and data engineer Marina Seda to examine how generative AI systems shape public narratives of the conflict. The speakers outlined a three-act framework (model lifecycle, stochasticity, and interpretation) showing how training data, random sampling, and user-driven prompts can introduce systematic bias or outright hallucinations. They emphasized that front-end applications such as ChatGPT and image generators expose users to outputs whose provenance is opaque, allowing both inadvertent bias and deliberate poisoning to spread. Citing the EU AI Act's definition, they characterized AI as an autonomous, adaptive machine-based system. A striking example was the rapid creation of fabricated images and text echoing Kremlin disinformation, illustrating how AI's persuasive, anthropomorphic style can amplify false narratives about battlefield events. The discussion concluded that current audit frameworks are inadequate and urged the development of transparent, cross-platform tools to detect and mitigate AI-driven misrepresentation. Without such safeguards, public opinion, diplomatic decision-making, and the historical record of the Ukraine war risk being distorted.

Safer Internet Day 2026 with Dr Vicki Nash
On Safer Internet Day 2026, Dr Vicki Nash highlighted the UK's Online Safety Act, which obliges online pornography providers to verify that users are at least 18. The law makes it illegal to supply adult content to minors, positioning age verification as...