Half of Security Leaders Unready for AI Attacks; 59% of UK Firms Can't Shut Down AI Quickly

Pulse · Mar 24, 2026

Why It Matters

The findings highlight a systemic weakness at the intersection of cybersecurity and AI governance. As AI tools become integral to business processes, the attack surface expands, giving threat actors new vectors such as deep‑fake phishing and automated vulnerability scanning. Simultaneously, regulators such as the EU are tightening rules around AI transparency and accountability, meaning that unprepared firms face both operational risk and legal exposure. For CIOs, the surveys signal that under‑investment in AI‑specific security and a lack of clear incident‑response protocols could translate into costly breaches, regulatory fines, and loss of stakeholder trust.

Addressing these gaps will require a shift from ad‑hoc pilots to enterprise‑wide AI security programs, backed by dedicated budgets and cross‑functional governance. The data also suggests a talent shortage: without professionals who understand both AI and security, organizations will struggle to design controls that keep pace with rapidly evolving threats. The pressure is now on senior IT leaders to prioritize AI risk management as a core component of their cyber‑defense strategy.

Key Takeaways

  • 96% of surveyed security leaders view AI‑enabled attacks as a significant threat.
  • Only 46% feel "strongly confident" in their AI security controls.
  • 67% of organizations are still in "pilot mode" for AI defense strategies.
  • 85% say current cybersecurity budgets are insufficient for AI threats.
  • 59% of UK firms cannot quickly shut down an AI system in a crisis.

Pulse Analysis

The convergence of AI and cyber‑risk is reshaping the CIO agenda in ways that traditional security frameworks cannot fully address. Historically, cyber defenses have focused on perimeter protection and known malware signatures. AI, however, introduces a dynamic adversary capable of generating novel attack vectors at scale, forcing security teams to adopt predictive analytics and automated response mechanisms. The EY data shows that while awareness is near‑universal, execution lags, largely due to budget constraints and a reliance on pilot projects that never mature into production‑grade defenses.

Regulatory pressure compounds the technical challenge. The EU AI Act mandates explainability and accountability, turning AI governance into a compliance imperative. ISACA’s UK survey reveals that many firms lack the basic operational controls—such as rapid shutdown capabilities and clear accountability—to meet these standards. This creates a two‑front battle: defending against sophisticated AI‑driven threats while simultaneously proving to regulators that the organization can contain and explain any AI failure.

For CIOs, the path forward involves integrating AI risk into the broader cyber‑risk management lifecycle. This means allocating dedicated funding for AI‑specific tools, hiring or upskilling talent with hybrid AI‑security expertise, and embedding AI incident‑response drills into regular tabletop exercises. Companies that can operationalize these measures will not only reduce their exposure to emerging threats but also position themselves as compliant, trustworthy players in an increasingly regulated AI market.
