When the Information Environment Becomes the Attack Surface

6G Flagship (University of Oulu) blog
Mar 26, 2026

Key Takeaways

  • Information operations now share cyberattack tools and channels.
  • AI-generated, targeted disinformation can cost as little as a few cents to €15.
  • Small-language AI models lack moderation, creating exploitable gaps.
  • Finland identified 1,300 hijacked domains feeding data harvesting.
  • Resilience requires combined technical, media-literacy, and regulatory responses.

Summary

The Oulu City Library hosted Faktabaari’s Fact Tour, bringing together fact‑checkers, cybersecurity experts and officials to discuss the merging of information operations and cyber threats. Speakers highlighted how the same digital techniques—bot networks, AI‑generated deepfakes, and phishing—are used by both hostile actors and erstwhile allies, eroding the line between disinformation and cyberattack. Professor Kimmo Halunen emphasized that AI vulnerabilities are both security and ethical issues, especially as AI tools now produce targeted Finnish‑language disinformation for as little as a few cents to €15. The event underscored the "small‑language problem" and the need for coordinated technical, educational, and regulatory responses before Finland’s 2027 municipal elections.

Pulse Analysis

The Oulu Fact Tour illustrated a pivotal shift: information integrity is no longer a peripheral concern for cybersecurity teams but a core attack surface. By exposing how phishing tactics mirror deepfake dissemination, panelists showed that adversaries exploit the same human trust vectors across domains. This convergence forces security architects to broaden threat models, integrating misinformation detection into traditional cyber defenses, especially as nations like Finland confront sophisticated Russian influence campaigns.

Artificial intelligence accelerates the problem, slashing production costs for tailored disinformation. A basic influence package now costs between a few cents and roughly €15, making mass‑scale manipulation financially trivial. Moreover, AI models trained on publicly available data inadvertently ingest Russian propaganda, leading assistants to repeat false narratives. The "small‑language problem" compounds these risks: Finnish‑language moderation tools lag behind, leaving regional dialects and niche platforms exposed to automated attacks and data‑harvesting networks, such as the 1,300‑site NETRACK operation uncovered by Harto Pönkä.

Policymakers and civic leaders must adopt a multi‑layered response. Technical solutions—AI‑hardening, real‑time content verification—must be paired with widespread media‑literacy programs, like Faktabaari’s student workshops, to inoculate citizens against deceptive content. Regulatory frameworks should mandate rapid response mechanisms for election‑related misinformation, narrowing the speed gap between institutional fact‑checking and AI‑driven disinformation. Only a coordinated effort across technology, education, and governance can safeguard democratic processes ahead of Finland’s 2027 municipal elections.
