Tech Dependencies Undermine UK National Security
Defense • AI • Cybersecurity

RUSI • February 2, 2026

Companies Mentioned

X (formerly Twitter) • Cloudflare (NET) • Meta (META)

Why It Matters

Reliance on US tech firms threatens the UK’s ability to enforce security laws and undermines sovereign policy goals, creating diplomatic friction and operational risk.

Key Takeaways

  • US platform control hampers UK security enforcement
  • X's geoblock faced US political backlash
  • Cloudflare fines highlight compliance tensions
  • Takedowns too slow for automated influence ops
  • Infrastructure-level disruption offers alternative mitigation

Pulse Analysis

The growing rift between Washington’s tech giants and European regulators is reshaping national‑security strategy. In the UK, pressure on X to block illicit deep‑fake images sparked a public showdown, while Cloudflare’s clash with Italy underscored how American providers can push back against local compliance demands. These disputes reveal a structural dependency: critical moderation tools, DNS services, and content‑delivery networks remain under US jurisdiction, limiting the UK’s capacity to act swiftly when foreign influence operations target its democratic institutions.

Compounding the problem, the speed and scale of AI‑generated disinformation outpace traditional takedown mechanisms. Evidence shows that during the 2024 general election, removal requests took roughly a day—long enough for malicious narratives to gain traction. As generative models automate the creation of persuasive content, platform‑level interventions become reactive fire‑fighting rather than a deterrent. Experts therefore advocate a shift toward upstream disruption, targeting the infrastructure and financial lifelines of influence campaigns, mirroring tactics used in cyber‑crime and counter‑terrorism.

Policymakers must translate these insights into concrete capability building. Investing in domestic moderation platforms, forging data‑sharing agreements with hosting providers, and expanding sanction regimes against illicit financial channels can reduce reliance on reluctant US firms. Cross‑sector collaboration—linking intelligence, cybersecurity, and financial regulators—will enable early‑stage identification of inauthentic actors before they mobilise. By diversifying its technical toolkit, the UK can safeguard its information environment while navigating the geopolitical realities of a US‑dominated internet ecosystem.

Tech Dependencies Undermine UK National Security

While the UK focuses on hybrid threats, is it being undermined by dependencies on US providers? Can the UK have a national security agenda in isolation?

In January, public outcry developed in the UK over X’s chatbot – Grok – and its ability to generate explicit and non‑consensual images of people. The UK government ultimately succeeded in pressuring X into implementing a localised geoblock on the generation of deep‑fake sexual images via its platform, which are illegal to share in the UK. But this small victory was accompanied by a concerning wave of allegations and hostile rhetoric from the US, with Elon Musk accusing the UK regulator, Ofcom, of suppressing free speech and a Republican congresswoman threatening sanctions and tariffs should the UK block access to the platform.

This is not an isolated incident. Also in January, Italy announced a €14.2 million fine against the US‑based internet infrastructure provider Cloudflare for non‑compliance with the country’s anti‑piracy laws. In response, Cloudflare’s CEO defended the company’s position not only by highlighting the need to prevent latency and poor resolution on Cloudflare’s domain name service (DNS), but also by casting the dispute as resistance to attempts to ‘censor’ online content. This is not the first time Cloudflare’s role in promoting online safety has been questioned, as debates over its response to extremist content hosted on 8chan in 2019 show.

Nevertheless, these incidents appear both more salient and geopolitically charged in light of the growing rift between US companies and foreign governments on the legality of state intervention in online moderation. While in recent years the UK and Europe have introduced new laws intended to enhance state powers to mitigate online threats to safety and security – including through increased moderation of social‑media platforms – the Trump administration has presented this agenda as ‘censorship’, while increasing diplomatic pressure against its key architects.

In this heavily politicised climate, the UK and Europe can no longer bet on the consistent cooperation of US‑based platform and internet‑services companies to implement national laws promoting online safety and national security. Among other threats, this poses serious problems for the UK’s ability to disrupt foreign influence campaigns online, an area which has typically relied on platform cooperation and social‑media ‘takedowns’.

Social Media Takedowns Are No Longer A Suitably Scalable Solution

Events over the past two years have highlighted the threat that foreign influence campaigns pose to security and democracy. Foreign influence operators attempt to impact UK politics during acute crisis moments – such as during the Southport Riots – and in a more endemic manner – for instance on longstanding political issues like Scottish independence. In contrast to the United States – where Trump has presided over a dismantling of the operational architecture for combatting foreign interference online – the UK has been increasing its capability to counter these operations. An amendment to the National Security Act, passed in 2023, criminalises foreign‑directed influence activity which uses illegitimate (for example, misleading or coercive) means to achieve its objectives.

“Heightened political tensions aside, external takedown requests have often proven slow and difficult to scale.”

Critical mechanisms for enforcing this law depend on cooperation with US platform providers. Disrupting foreign influence networks online has traditionally relied on public‑ and private‑sector investigators – including teams within the social‑media platforms themselves – collating evidence of illegitimate foreign influence campaigns and requesting the content and/or users be removed by the platforms. These ‘takedowns’ can be couched in the language of breaking platform policy (such as Meta’s coordinated inauthentic behaviour policy) or implemented in response to other legal actions, such as sanctions against information threat actors.

Nominally, much of this is still going on despite shifts in American politics. While Elon Musk abolished many of X’s anti‑misinformation policies after taking over the platform in 2022 and Meta replaced fact‑checkers with community notes in 2025, policies on coordinated and inauthentic foreign interference remain largely intact.

The operating environment has clearly altered, with practitioners attesting to the difficulties of accessing the necessary data for research or getting through to the right contacts for takedowns over the past year or two. According to the former Chief Executive of the UK’s National Cyber Security Centre, it is now ‘much much harder’ to defend against foreign inauthentic activity online, as many of the major platforms are no longer ‘playing ball’ in the way they were five years ago, due to the political climate surrounding free speech in the US.

Platform counter‑threat teams face competing pressures. They need to maintain a minimum capability for complying with requests from international governments on issues including foreign influence campaigns, and they tend to do so via coordinated inauthentic behaviour (CIB) and similar policies. Yet those same campaigns instrumentalise disinformation, an issue which, domestically in the US, is now being presented as a ‘guise’ for suppressing free speech.

During the UK general election in 2024, takedowns were taking around a day to be implemented, by which time audience impact had often already occurred.

Meanwhile, advances in automation and generative AI are increasing the scale and the sophistication of the threat. In this context, many argue that ‘reactive’ social‑media takedowns are an ineffective and ‘insufficient’ response. With automated and industrialised foreign influence operations being generated and disseminated faster than they can be detected and removed, relying on social‑media takedowns as the principal response is no longer viable.
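
To make the scale mismatch concrete, the following minimal Python sketch models cumulative audience exposure before a takedown lands. Every parameter (initial views, an assumed 30% compounding hourly spread rate) is an illustrative assumption rather than a figure from the article; only the roughly one‑day takedown latency echoes the 2024 observation above.

```python
# Illustrative toy model: cumulative audience exposure accrued before a
# takedown is implemented. All parameters are assumptions for illustration,
# not data from the article.

def exposure_before_takedown(initial_views: int,
                             hourly_growth: float,
                             takedown_hour: int) -> int:
    """Sum hourly views until the content is removed at `takedown_hour`."""
    views, total = initial_views, 0
    for _ in range(takedown_hour):
        total += views
        views = int(views * (1 + hourly_growth))  # compounding spread
    return total

if __name__ == "__main__":
    for hours in (1, 6, 24):  # 24h mirrors the latency observed in 2024
        reach = exposure_before_takedown(initial_views=500,
                                         hourly_growth=0.3,
                                         takedown_hour=hours)
        print(f"takedown at hour {hours:>2}: ~{reach:,} views already accrued")
```

Under these assumed parameters, a takedown at hour 24 arrives after roughly three orders of magnitude more exposure than one at hour 1, which is the core of the argument for moving disruption upstream.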

Off‑Platform and Covert Approaches to Disrupting Foreign Influence Campaigns

There is a need for wider and more sophisticated thinking about the art of the possible for disrupting foreign influence campaigns online.

Some insights can be drawn from adjacent fields, such as cyber‑security and counter‑financial crime, as well as nation‑state approaches to disrupting other threat actors online.

  1. Infrastructure‑level disruption – Foreign influence operations rely on activities conducted across an influence ‘stack’: cognitive, application, infrastructure, and network levels. Over the last decade, disruption strategies expanded from cognitive interventions (e.g., debunking) toward platform‑level actions, including content and account takedowns. Drawing on practices from cyber‑security and counter‑cybercrime, there are opportunities to disrupt influence operations at the infrastructure or network level, for instance through cooperation with web‑hosting services (see the sketch after this list). This could provide opportunities to disrupt website‑based influence operations, but leaves unresolved the issue of ‘bulletproof hosting’ and infrastructure hosted in adversary states. Recent and historic examples involving Cloudflare illustrate how public‑private cooperation may prove unreliable.

  2. Upstream disruption via the cyber kill chain – Conceiving foreign influence activity as staged operations involving infrastructure, funding, and personnel highlights opportunities to intervene during research, asset acquisition, and planning phases. This might include identifying and exposing inauthentic digital identities acquired by influence operators before they are mobilised.

  3. Off‑platform actions against operators – Sanctions or transaction restrictions, drawn from the toolkit of counter‑financial crime, can target the financial lifelines of influence campaigns. Countries such as Moldova – which witnessed widespread financial interference in its 2024 presidential election – are investing in new teams dedicated to securing elections through financial monitoring, including of cryptocurrency. These measures introduce grit into adversary influence operations, complicating their ability to pay proxies or procure professional services.

  4. Learning from counter‑terrorism and cyber‑crime disruption – Foreign‑interference architects often recruit proxies via social‑media groups or channels. Targeting these recruitment vectors at a covert level—e.g., undermining trust in financial offers—can draw on successful counter‑ransomware operations that combine cyber and information effects to enact technical disruptions while maximising cognitive impact on operators.
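
As a way of operationalising points 1 and 2, the sketch below expresses the influence ‘stack’ and kill‑chain phases as plain data, so candidate disruption levers can be enumerated for whichever layers a given operation relies on. The layer names come from the list above; the specific levers, phases, and the Operation class are hypothetical illustrations, not an established operational taxonomy.

```python
# Hypothetical sketch: the influence 'stack' and kill-chain phases from the
# list above, expressed as data. Lever names and the Operation class are
# illustrative assumptions, not an established taxonomy.

from dataclasses import dataclass, field
from typing import Dict, List

# Candidate levers per stack layer (layers named in point 1 above).
DISRUPTION_LEVERS: Dict[str, List[str]] = {
    "cognitive": ["debunking", "prebunking"],
    "application": ["content takedown", "account takedown (e.g. CIB policies)"],
    "infrastructure": ["web-hosting cooperation", "domain seizure"],
    "network": ["DNS-level filtering", "traffic blocking"],
}

# Kill-chain phases suggested by point 2 above.
KILL_CHAIN_PHASES = ("research", "asset acquisition", "planning", "execution")

@dataclass
class Operation:
    name: str
    phase: str                                   # current kill-chain phase
    layers_in_use: List[str] = field(default_factory=list)

    def candidate_disruptions(self) -> Dict[str, List[str]]:
        """Levers applicable to the layers this operation currently relies on."""
        return {layer: DISRUPTION_LEVERS[layer] for layer in self.layers_in_use}

if __name__ == "__main__":
    op = Operation(name="hypothetical-campaign", phase="asset acquisition",
                   layers_in_use=["infrastructure", "application"])
    print(f"{op.name} ({op.phase} phase):")
    for layer, levers in op.candidate_disruptions().items():
        print(f"  {layer}: {', '.join(levers)}")
```

The data‑first framing makes the list’s point explicit: an operation caught in an upstream phase such as asset acquisition still depends on infrastructure and application layers that can be disrupted before any content reaches the cognitive level.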

Countering foreign influence campaigns therefore requires a range of approaches that span resilience measures, proactive disruptive tactics, and offensive options. The UK must remain cognisant of technical and geopolitical realities when prioritising investment. Two trends look unlikely to be reversed: the increasing pace, scale, and scope of adversary influence activities online; and political polarisation and technical dependencies reducing the reliability of on‑platform takedowns as the primary response.

In this context, we need to think more deeply about how to meaningfully counter this threat, drawing on adjacent industry or policy areas, and being clear‑sighted about what levers remain available to government.


Written by: Sophie Williams‑Dunning, Research Analyst, Cyber and Tech
