When AI Starts Acting on Its Own...

Simply Cyber
Mar 25, 2026

Why It Matters

As AI agents proliferate across enterprise environments, extending zero‑trust controls to them is essential to prevent rapid, automated attacks that outpace traditional defenses.

Key Takeaways

  • Zero trust must extend to autonomous AI agents, not just humans
  • Agents operate at machine speed, requiring real-time authentication and authorization
  • Scaling human oversight demands policy automation and autonomous decision frameworks
  • Continuous monitoring is essential to detect agents deviating from intended behavior
  • Human-in-the-loop remains critical for policy creation and exception handling

Summary

At RSA, Cisco senior vice president Peter Bailey explained that zero‑trust security must evolve when the "identity" is an autonomous AI agent rather than a human user.

He noted that traditional zero trust assumes breach and relies on static credentials, whereas agents act at machine speed and can masquerade as other identities. Agents therefore need dynamic identity, authentication, authorization, and continuous behavior monitoring, and scaling these controls to thousands of agents requires rule‑based policies and automated decision‑making.

"Zero trust assumes breach… now we think about agents," Bailey said, highlighting the difficulty of granting identity to non‑human actors and the necessity of autonomous capabilities to authenticate and authorize them while keeping humans in the policy‑creation loop.

The discussion signals that enterprises must redesign security architectures to incorporate AI‑aware trust models, invest in real‑time monitoring tools, and retain human oversight for exception handling, lest rogue agents undermine network integrity.

Original Description

See how Cisco Reimagines Security for the Agentic Workforce
