
AI Pulse

Enterprise • AI • Cybersecurity • Hardware

Black Hat USA 2025 | Breaking Out of The AI Cage: Pwning AI Providers with NVIDIA Vulnerabilities

February 23, 2026
Black Hat

Why It Matters

The NVIDIA Container Toolkit is a single point of failure for the rapidly growing AI cloud market, and a vulnerability in it allows attackers to hijack infrastructure and steal sensitive data across tenants. Mitigating it is essential to preserving trust in AI-as-a-service offerings.

Key Takeaways

  • NVIDIA Container Toolkit vulnerability enables host escape.
  • Affects all major AI cloud platforms using NVIDIA containers.
  • Enables cross‑tenant data theft and Kubernetes cluster compromise.
  • Demonstrated on Replicate and DigitalOcean services.
  • Highlights single point of failure in AI infrastructure security.

Pulse Analysis

NVIDIA’s GPUs and software have become the de‑facto backbone for most commercial AI workloads, with the NVIDIA Container Toolkit handling isolation for thousands of concurrent models. This dominance simplifies deployment but also concentrates risk; a single flaw in the toolkit can cascade across any environment that depends on it, from private data centers to multi‑tenant cloud services. As AI adoption accelerates, the security of the underlying container layer is increasingly critical for protecting intellectual property and compliance.

The Wiz team’s research revealed a container‑escape vulnerability that lets an attacker break out of the sandbox and gain host‑level privileges. By exploiting a flaw in the toolkit’s runtime, the adversary can infiltrate the Kubernetes control plane, pivot between pods, and harvest credentials, model weights, and customer data. Real‑world validation on platforms such as Replicate and DigitalOcean demonstrated that the issue is not theoretical—cross‑tenant data leakage and full cluster takeover are achievable with minimal effort. These findings highlight how a single software component can undermine the entire security model of AI‑as‑a‑service providers.
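The attack described above turns on the boundary between a container and its host. The researchers' actual exploit is not detailed here; as a loosely related illustration of why that boundary matters, the sketch below shows the kind of heuristic a workload can use to check whether it still appears confined to a container. The file paths are standard Linux interfaces, but the marker strings are an assumption, not an exhaustive or authoritative list.

```python
import os


def looks_containerized() -> bool:
    """Heuristic check for common Linux container markers.

    Returns True if the process appears to run inside a container,
    False otherwise (including on non-Linux systems, where the
    probed files do not exist).
    """
    # Docker and compatible runtimes typically create this file
    # at the container filesystem root.
    if os.path.exists("/.dockerenv"):
        return True
    # cgroup paths for PID 1 inside a container usually name the
    # runtime or the Kubernetes pod hierarchy.
    try:
        with open("/proc/1/cgroup") as f:
            content = f.read()
    except OSError:
        return False
    return any(marker in content for marker in ("docker", "containerd", "kubepods"))
```

A heuristic like this is useful for defenders instrumenting workloads; it is not a security control, since a successful escape of the kind presented in the talk would by definition place the attacker outside these markers.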

Industry response will likely focus on rapid patching, hardening of container runtimes, and diversification of isolation mechanisms. Providers must audit their dependency chains, enforce least‑privilege policies, and consider alternative orchestration layers that reduce reliance on a single vendor’s stack. For enterprises, the episode serves as a reminder to implement defense‑in‑depth strategies, including runtime monitoring and segmentation, to mitigate the impact of any future supply‑chain weaknesses. The broader lesson is clear: as AI workloads become mission‑critical, their underlying infrastructure must be secured with the same rigor as the applications themselves.
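One concrete piece of the "audit your dependency chains" advice is comparing an installed toolkit version against a patched floor. A minimal sketch follows; note that `MIN_PATCHED` is a hypothetical placeholder, not NVIDIA's actual fixed version — the real number must come from the vendor advisory.

```python
def version_tuple(version: str) -> tuple:
    """Parse a dotted version string like '1.17.3' into (1, 17, 3) for comparison."""
    return tuple(int(part) for part in version.split("."))


# Hypothetical placeholder — replace with the fixed version from the
# NVIDIA security advisory for the relevant CVE.
MIN_PATCHED = "1.99.0"


def toolkit_needs_update(installed: str, min_patched: str = MIN_PATCHED) -> bool:
    """Return True if the installed toolkit version is below the patched floor."""
    return version_tuple(installed) < version_tuple(min_patched)
```

In practice the installed version string would be collected from each node (for example, from the package manager or the toolkit's own version output) and fed through a check like this as part of fleet-wide dependency auditing.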

Original Description

The overwhelming majority of AI applications run on NVIDIA hardware and software and use NVIDIA tools to containerize and isolate applications running on the same infrastructure. A vulnerability in this single point of failure could allow the breakdown of security mechanisms and takeover of the AI infrastructure.
In this research project, we managed to prove this scenario is indeed possible. We found a critical vulnerability in one of the foundational software components that powers all the world's AI managed infrastructure: the NVIDIA Container Toolkit. This vulnerability allows an attacker to escape from the container to the underlying host and often compromise the entire Kubernetes cluster.
We tested this vulnerability on all major AI platforms, all of which proved to be susceptible to this attack. In some cases, the container escape was sufficient to prove unauthorized cross-tenant data access, including credentials and customer data, breaching the platform's foundational security model. We'll take a deep dive into two case studies with completely different results: Replicate and DigitalOcean.
In this talk, we will dive into our findings, starting from the discovery of the vulnerability itself, through its real-world exploitation on AI cloud services, finishing with the details of industry-wide impact. Attendees will learn about how major cloud services operate their security behind the scenes and the lessons they can apply to their own environment.
By:
Andres Riancho | Security Researcher, Wiz
Hillai Ben-Sasson | Security Researcher, Wiz
Ronen Shustin | Security Researcher, Wiz
Presentation Materials Available at:
https://blackhat.com/us-25/briefings/schedule/?#breaking-out-of-the-ai-cage-pwning-ai-providers-with-nvidia-vulnerabilities-46498
