F5 Targets AI Runtime Risk with New Guardrails and Adversarial Testing Tools
CybersecurityAI

F5 Targets AI Runtime Risk with New Guardrails and Adversarial Testing Tools

Help Net Security
Help Net SecurityJan 15, 2026

Why It Matters

Enterprises can now secure AI deployments at scale, meeting rising regulatory demands and protecting against sophisticated adversarial threats that legacy tools miss.

F5 targets AI runtime risk with new guardrails and adversarial testing tools

Published by Help Net Security

F5 has unveiled general availability of F5 AI Guardrails and F5 AI Red Team, two solutions that secure mission‑critical enterprise AI systems. With these releases, F5 is providing a comprehensive end‑to‑end lifecycle approach to AI runtime security, including enhanced ability to connect and protect AI agents with both out‑of‑the‑box and custom guardrails.

These security offerings align with customer needs for flexible deployment, model‑agnostic protection, and the ability to tailor and adapt AI security policies in real time, drawing on F5’s deep expertise at the application layer, where AI interactions occur. F5 AI Guardrails and F5 AI Red Team are already deployed at leading Fortune 500 enterprises across multiple industries globally, including in highly regulated financial services and healthcare organizations.

“Traditional enterprise governance cannot keep up with the velocity of AI,” said Kunal Anand, Chief Product Officer at F5. “When policy lags adoption, you get data leaks and unpredictable model behavior. Organizations need defenses that are as dynamic as the models themselves. F5 AI Guardrails secures the traffic in real time, turning a black box into a transparent system, while F5 AI Red Team proactively finds vulnerabilities before they reach production. This allows organizations to stop fearing risk and start shipping apps and features with confidence.”

As enterprises accelerate AI adoption across customer experiences, internal workflows, and mission‑critical decision making, the risk landscape is rapidly shifting. Organizations now grapple not only with external attackers, but also adversarial manipulation of models, data leakage, unpredictable user interactions, and growing compliance obligations.

By pairing F5 AI Guardrails and F5 AI Red Team with infrastructure protection—including API security, web application firewalls, and DDoS defenses—enterprises can secure AI systems alongside existing applications, improving visibility and policy consistency without relying on fragmented point solutions.

Transforming risk into confident AI deployment

As organizations race to operationalize AI, most security tools address only fragments of the expanding attack surface. F5 is delivering an AI security solution, combining real‑time runtime defenses with offensive security testing and pre‑built attack patterns to help organizations deploy AI with confidence. Doing so requires addressing the risks inherent in how AI systems operate in practice, where models vary widely in capability and behavior, and also interact with sensitive data, users, APIs, and other systems in ways legacy tools weren’t built to manage.

F5 AI Guardrails provides a model‑agnostic runtime security layer designed to protect every AI model, app, and agent across every cloud and deployment environment with consistent policy enforcement. As the number of models grows into the millions, AI Guardrails delivers consistent protection against adversarial threats such as prompt injection and jailbreak attacks, prevents sensitive data leakage, and enforces corporate and regulatory obligations, including GDPR and the EU AI Act.

AI Guardrails also delivers in‑depth observability and auditability of AI inputs and outputs so teams can see not just what the model did, but why it did it—a core need for governance and compliance in regulated industries.

Continuous assurance for evolving AI systems

Complementing runtime protection, F5 AI Red Team delivers scalable, automated adversarial testing that simulates both common and obscure threat vectors, powered by the industry’s preeminent AI vulnerability database—adding over 10,000 new attack techniques every month as real‑world threats evolve.

AI Red Team reveals where models can provide dangerous or unpredictable outputs, and its insights feed directly back into AI Guardrails policies so defenses evolve as threats and models themselves change. Together, AI Guardrails and AI Red Team establish a continuous AI security feedback loop: proactive assurance, adaptive runtime enforcement, centralized governance, and ongoing improvement.

Comments

Want to join the conversation?

Loading comments...