How Safe Is Agentic AI? #Cybersecurity

GovTech Singapore
Mar 27, 2026

Why It Matters

Securing the interaction layer of agentic AI prevents malicious exploitation and ensures uninterrupted operations, making it a strategic priority for any organization deploying autonomous agents.

Key Takeaways

  • The connection layer between agents and their tools is the primary attack surface.
  • A lack of defined security boundaries amplifies vulnerability in agentic AI.
  • Sandbox the tools and resources, not just the model, for protection.
  • Implement kill switches with clear trigger criteria and recovery plans.
  • Use containers and mode controls to isolate autonomous multi-step workflows.

Summary

The discussion centers on the security challenges of agentic AI systems, focusing on how these autonomous agents interact with tools, data sources, and other agents. Gal Zror, Research Director at CyberArk, outlines the emerging threat landscape as organizations integrate AI-driven workflows into core operations.

He identifies the connection layer between agents and their tools as the largest attack surface, noting the absence of clear security boundaries. Demonstrations of AI coding agents being manipulated to execute shell commands illustrate how vulnerable these interfaces can be. Consequently, traditional isolation mechanisms—such as sandboxing, containers, and mode controls—are recommended to protect the resources the agents consume.
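The tool-side isolation described above can be sketched as a wrapper around the agent's shell-execution interface. The function name, allowlist, and limits below are hypothetical illustrations, not part of any specific product discussed in the talk:

```python
import shlex
import subprocess

# Hypothetical allowlist: the only executables this coding agent may invoke.
ALLOWED_BINARIES = {"git", "pytest", "ls"}

def run_tool(command: str, timeout: int = 30) -> str:
    """Execute an agent-requested shell command only if its binary is allowlisted."""
    argv = shlex.split(command)
    if not argv or argv[0] not in ALLOWED_BINARIES:
        raise PermissionError(f"Blocked tool call: {command!r}")
    # No shell=True, so injected metacharacters stay inert; the timeout bounds runaway tools.
    result = subprocess.run(argv, capture_output=True, text=True, timeout=timeout)
    return result.stdout
```

In practice such a wrapper would run inside the container or sandbox boundary itself, so the control holds even if the agent's prompt is manipulated.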

Zror emphasizes that the most effective controls reside on the tools rather than the language model itself, stating, “the most effective security control are the ones we placed on the tools and resources that the agent uses.” He also stresses the need for a well‑defined kill‑switch protocol that can halt rogue behavior while allowing seamless recovery or replacement of the compromised agent.
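A kill‑switch protocol of this kind pairs a concrete trigger criterion with a recovery path. The sketch below is a minimal illustration under assumed criteria (repeated blocked or failed tool calls); class and threshold names are hypothetical:

```python
import threading

class KillSwitch:
    """Halt an agent when a trigger criterion is met; support operator-driven recovery."""

    def __init__(self, max_failed_calls: int = 3):
        self.max_failed_calls = max_failed_calls  # assumed trigger criterion
        self.failed_calls = 0
        self._tripped = threading.Event()

    def record_failure(self) -> None:
        """Called whenever a tool call is blocked or fails; trips the switch at the threshold."""
        self.failed_calls += 1
        if self.failed_calls >= self.max_failed_calls:
            self._tripped.set()

    def tripped(self) -> bool:
        """Agent loop checks this before every step and halts if True."""
        return self._tripped.is_set()

    def reset(self) -> None:
        """Recovery path: after review, restart or replace the agent and clear the switch."""
        self.failed_calls = 0
        self._tripped.clear()
```

The explicit `reset()` step models the "seamless recovery or replacement" requirement: the switch halts the rogue agent, but operations resume once a clean agent is swapped in.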

For enterprises, these safeguards are critical to prevent AI‑induced disruptions and to maintain business continuity. Implementing layered isolation and emergency shutdown mechanisms will become a baseline requirement as autonomous AI agents assume greater roles in multi‑step business processes.

Original Description

We chat with Gal Zror, Research Director at CyberArk, a Palo Alto Networks company, as he shares insights from over 15 years in the field, including his work leading adversary AI initiatives.
Catch him live at STACKx Cybersecurity 2026, Track 2: Tempering the Flames: Securing AI, where he will deep dive into how organisations can secure agentic AI systems, from building robust development pipelines to implementing proactive detection and operational controls to prevent exploitation.
🗓️ 17 April | Sands Expo & Convention Centre
🎟️ Secure your spot now at go.gov.sg/stackxcyber2026-yt10
#TechEvent #STACKxCybersecurity2026 #EngineeringDigitalGovernment #10YearsOfGovTech #GovTechSG
