Black Hat USA 2025 | Advanced Bypass Techniques and a Novel Detection Approach

Black Hat | Mar 10, 2026

Why It Matters

As enterprises increasingly adopt third‑party AI models, undetected malicious code can compromise critical systems; a robust detection approach is essential to secure the AI supply chain.

Key Takeaways

  • Third‑party AI models can execute malicious code at load or inference.
  • Static scanners rely on deny lists, which cannot cover all unsafe functions.
  • Embedded bytecode and pickle opcodes enable bypasses that evade detection.
  • Model architecture serialization often uses unsafe formats like pickle or lambdas.
  • A novel dynamic detection approach can mitigate the shortcomings of static scanners.

Summary

The Black Hat USA 2025 presentation by Itay Ravia of Aim Security highlighted a growing crisis in AI supply-chain security: third-party models can execute malicious code during loading or inference, and backdoors can be silently injected by model authors. Ravia explained that model files contain not only massive weight tensors but also complex architecture definitions, often serialized with unsafe formats such as Python pickle, dill, or embedded lambda bytecode, which can hide arbitrary OS commands.
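The core serialization risk is easy to demonstrate: pickle lets any object nominate a callable to be invoked at load time via `__reduce__`. A minimal sketch with a deliberately benign payload (`len` stands in for the `os.system` call an attacker would use):

```python
import pickle

class Payload:
    """Any picklable object can smuggle a callable via __reduce__."""
    def __reduce__(self):
        # Pickle stores (callable, args); unpickling then *calls* it.
        # Benign stand-in here -- an attacker would instead return
        # (os.system, ("malicious shell command",)).
        return (len, ("pwned",))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)  # the callable runs as a side effect of loading
print(result)                # 5 == len("pwned")
```

Nothing in the byte stream marks this as code rather than data, which is why loading an untrusted model file is equivalent to running it.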

Current defenses rely on static scanners that inspect model byte streams for known dangerous imports such as os.system. The talk demonstrated why this approach is fundamentally flawed: deny-list scanners cannot enumerate the millions of possible Python functions, and they fail to analyze embedded bytecode or the full semantics of pickle opcodes. Using an AI-driven agent, the researchers uncovered dozens of wrapper functions that bypass scanners, and they showed concrete exploits (e.g., leveraging the MLflow Projects backend, or crafting minimal bytecode that imports os and calls system) that went undetected by Hugging Face's own scans.
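The deny-list weakness can be illustrated with a toy scanner (the scanner and its deny list below are hypothetical, not the talk's actual tooling). On Unix, `os.system` is merely re-exported from the `posix` module, so a list that names only `os.system` misses a byte-for-byte equivalent call routed through `posix`:

```python
import pickletools

# Hypothetical toy deny list -- real scanner lists are larger but still finite.
DENY = {("os", "system"), ("subprocess", "Popen")}

def flagged(blob: bytes) -> set:
    """Statically collect (module, name) pairs from GLOBAL opcodes."""
    found = set()
    for op, arg, _ in pickletools.genops(blob):  # parses only, never executes
        if op.name == "GLOBAL":
            found.add(tuple(arg.split(" ", 1)))
    return found & DENY

# Hand-crafted protocol-0 pickles; neither is ever loaded here.
direct = b"cos\nsystem\n(S'echo hi'\ntR."     # calls os.system('echo hi')
alias  = b"cposix\nsystem\n(S'echo hi'\ntR."  # same effect via posix.system
print(flagged(direct))  # {('os', 'system')} -> caught
print(flagged(alias))   # set()              -> bypass
```

Every such alias or wrapper added to the list invites the next one; enumeration can never close the gap.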

Ravia cited real-world examples where Hugging Face's pickle scanner flagged one model as unsafe for an import but marked another with identical imports as "unknown," leaving data scientists to decide manually. He also detailed how pickle's stack-based opcodes (GLOBAL for imports, PUT/GET for the memo, INST/OBJ for instantiation, REDUCE for calls) can be orchestrated to desynchronize scanner expectations, creating a stealthy execution path that static analysis cannot reliably emulate.
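Those stack semantics are visible in a disassembly. The standard library's `pickletools.dis` parses the opcode stream without executing it, and shows the import/argument/call sequence a static scanner would have to fully emulate:

```python
import io
import pickletools

# Protocol-0 pickle that would call os.system('echo hi') if loaded.
payload = b"cos\nsystem\n(S'echo hi'\ntR."

buf = io.StringIO()
pickletools.dis(payload, out=buf)  # disassemble only; nothing runs
listing = buf.getvalue()
print(listing)
# The listing shows GLOBAL ('os system'), MARK, STRING, TUPLE,
# REDUCE (the call), then STOP. What actually executes depends on
# the full state of this stack machine, not on any single byte.
```

Because opcodes can shuffle values through the memo and stack in arbitrary order, pattern-matching the stream is strictly weaker than running it.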

The implication is clear: organizations must move beyond static signature checks toward dynamic, behavior‑based detection that can safely execute and monitor model loading in a sandbox. Ravia’s novel detection framework promises to simulate the full pickle execution flow, catching hidden malicious logic and restoring confidence in the rapidly expanding model marketplace.
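Ravia's actual framework is not reproduced here, but one well-known building block of the dynamic idea is the "restricting globals" pattern from the Python pickle documentation: drive the real unpickler and intercept every global it attempts to resolve. The allow list below is hypothetical:

```python
import io
import pickle

# Hypothetical allow list of globals a benign model might legitimately need.
ALLOWED = {("builtins", "list"), ("collections", "OrderedDict")}

class MonitoringUnpickler(pickle.Unpickler):
    """Observe and gate every global the pickle stream tries to resolve."""
    def __init__(self, file):
        super().__init__(file)
        self.observed = []  # audit trail of attempted imports

    def find_class(self, module, name):
        self.observed.append((module, name))
        if (module, name) not in ALLOWED:
            raise pickle.UnpicklingError(f"blocked: {module}.{name}")
        return super().find_class(module, name)

malicious = b"cos\nsystem\n(S'echo hi'\ntR."  # would call os.system
up = MonitoringUnpickler(io.BytesIO(malicious))
try:
    up.load()
except pickle.UnpicklingError as exc:
    print("caught:", exc)
print(up.observed)  # [('os', 'system')] -- seen before any call happened
```

Because the interception happens inside the genuine loading machinery, it sees exactly what would run, with no emulation gap; a production system would combine this with OS-level sandboxing, since `find_class` alone does not constrain code the resolved callables might reach.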

Original Description

Many AI frameworks expose a new attack surface by introducing unsafe serialization formats, such as Pickle and lambda functions, into their model formats. To mitigate these attacks, several model scanners have emerged. These model scanners crawl public AI repositories, such as HuggingFace, hoping to find supply-chain attacks. Such scanners typically rely on static analysis of model files. However, this approach has inherent limitations, as static analysis alone lacks the algorithmic capability to accurately emulate the actual loading process. Consequently, relying solely on static analysis may create a false sense of security when using models from unknown third-party sources.
In this talk, we will discuss the shortcomings of the static analysis approach. We start by discussing common model formats (such as Pickle and Keras) and why they can never be replaced in some popular frameworks, despite being unsafe. This means that the problem of model scanning is unfortunately here to stay and needs to be properly dealt with. We then talk about how we managed to create and identify dozens of examples that go completely undetected by current model scanners and provide several examples, including non-detected malicious models found in the wild.
Based on those examples, we deep-dive into the inherent shortcomings of static scanners and why they cannot hope to provide a comprehensive solution. From these insights, we derive a dynamic approach that mitigates the static scanners' shortcomings, and we show how it avoids the inherent problems static scanners have. Throughout this talk, we will discuss models' lifecycle in data-science work, and how to make sure both homegrown models and external models do not pose risks to organizations.
By:
Itay Ravia | Head of Aim Labs, Aim Security
Presentation Materials Available at:
