Black Hat USA 2025 | Evaluating Autonomous Vehicle Resilience

Black Hat | Mar 27, 2026

Why It Matters

Ensuring safe AI‑human collaboration in autonomous vehicles prevents accidents and mitigates malicious exploitation, making large‑scale deployment viable.

Key Takeaways

  • Teleoperation assists AI in complex driving scenarios via human guidance
  • Continuous validation loops test AI‑human waypoint interactions for safety
  • Malicious or erroneous teleoperation commands can cause off‑road incidents
  • Fuzzing generates thousands of waypoint variants to uncover hidden failures
  • Scalable simulation‑based testing is essential for autonomous vehicle resilience

Summary

The Black Hat USA 2025 presentation from Zoox engineers focused on the resilience of autonomous vehicles under a human‑in‑the‑loop teleoperation model. Zhisheng Hu and Shanit Gupta explained how a remote operator can intervene when the AI’s confidence drops, sending waypoint suggestions while the vehicle itself retains final control over execution.

The speakers highlighted a rigorous validation pipeline: simulated test cases replicate complex scenarios, operators provide guidance, and the system’s response is monitored for safety. They warned that both accidental operator errors and deliberately malicious waypoint commands can push a vehicle off‑road or create near‑misses, underscoring the need for continuous, repeatable testing.
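The core safety property described above, that the vehicle must vet every operator suggestion before acting on it, can be illustrated with a minimal sketch. The `Waypoint` type, the `validate_suggestion` check, and the grid-based drivable area are all hypothetical stand-ins for the vehicle's real planner constraints, not Zoox's actual implementation:

```python
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class Waypoint:
    x: float  # metres in a map frame
    y: float

def dist(a: Waypoint, b: Waypoint) -> float:
    return math.hypot(a.x - b.x, a.y - b.y)

def validate_suggestion(current: Waypoint,
                        suggestion: Waypoint,
                        drivable: set,
                        max_step_m: float = 5.0) -> bool:
    """Reject operator waypoints that jump implausibly far or leave the drivable area."""
    if dist(current, suggestion) > max_step_m:
        return False  # implausible jump: likely operator error or tampering
    cell = (int(suggestion.x), int(suggestion.y))
    return cell in drivable

# Usage: a straight lane along y = 0, vehicle at x = 3
drivable = {(x, 0) for x in range(20)}
here = Waypoint(3.0, 0.0)
print(validate_suggestion(here, Waypoint(5.0, 0.0), drivable))  # True: stays in lane
print(validate_suggestion(here, Waypoint(5.0, 7.0), drivable))  # False: off-road jump
```

Any real check would involve full trajectory planning and collision prediction, but the principle is the same: suggestions are advisory inputs to be validated, never commands to be executed blindly.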

A key demonstration showed a vehicle navigating a construction zone with remote assistance, followed by a controlled failure in which an operator’s erroneous waypoint nearly sent the vehicle into a pedestrian. To scale testing, Zoox adapted software‑security fuzzing techniques, mutating real‑world driving data into 50,000+ simulated “mutants” that exposed hidden failure modes, including subtle command variations that led to collisions.
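The mutation step borrowed from software fuzzing can be sketched as below. This is an illustrative assumption of how recorded waypoint paths might be perturbed into test variants; the function name, jitter bound, and one-waypoint-per-mutant strategy are mine, not details from the talk:

```python
import random

def mutate_waypoints(path, n_mutants=1000, max_jitter_m=3.0, seed=0):
    """Generate mutated copies of a recorded waypoint path.

    Each mutant perturbs a single randomly chosen waypoint by up to
    max_jitter_m metres in x and y, mimicking a subtly corrupted or
    malicious teleoperation command.
    """
    rng = random.Random(seed)
    mutants = []
    for _ in range(n_mutants):
        i = rng.randrange(len(path))
        dx = rng.uniform(-max_jitter_m, max_jitter_m)
        dy = rng.uniform(-max_jitter_m, max_jitter_m)
        mutant = list(path)
        x, y = mutant[i]
        mutant[i] = (x + dx, y + dy)
        mutants.append(mutant)
    return mutants

# Usage: a recorded straight-line path, fuzzed into 50 variants
base = [(float(x), 0.0) for x in range(10)]
variants = mutate_waypoints(base, n_mutants=50)
print(len(variants))  # 50
```

Each variant would then be replayed in simulation to see whether the vehicle correctly rejects the corrupted command or drifts toward an unsafe state.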

The implication is clear: traditional manual testing cannot cover the combinatorial explosion of AI‑human interactions. Industry players must adopt automated, simulation‑driven fuzzing frameworks to certify teleoperation safety, a move likely to influence future regulatory standards and consumer trust.

Original Description

The Adversarial Scenario Fuzzer is an automated testing framework that evaluates autonomous vehicle resilience against potentially harmful teleoperation commands. While teleoperation can help resolve complex driving situations, incorrect or malicious commands pose safety risks.
The fuzzer systematically generates challenging scenarios through simulation, including:
- Malicious trajectory suggestions
- Conflicting guidance signals
- Environmental perturbations
Using iterative optimization, the fuzzer creates increasingly impactful test cases while evaluating the vehicle's ability to reject unsafe commands. This approach helps validate the robustness of autonomous decision-making systems and ensures safety mechanisms can effectively handle adversarial inputs.
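The iterative optimization described above, generating progressively more impactful test cases, can be sketched as a simple greedy search loop. The `score` callback stands in for a full simulation run that measures how close the vehicle came to an unsafe state; both it and the pool-based search are assumptions for illustration, not the fuzzer's actual algorithm:

```python
import random

def fuzz_loop(seed_case, score, mutate, rounds=20, pool_size=8, rng=None):
    """Greedy fuzzing loop: each round, mutate the current pool of test
    cases and keep only the highest-impact candidates.

    score(case) -> float: higher means closer to an unsafe outcome
                          (a stand-in for running the case in simulation).
    mutate(case, rng) -> case: produce a perturbed variant.
    """
    rng = rng or random.Random(0)
    pool = [seed_case]
    for _ in range(rounds):
        candidates = pool + [mutate(c, rng) for c in pool for _ in range(3)]
        candidates.sort(key=score, reverse=True)
        pool = candidates[:pool_size]  # survival of the most impactful
    return pool[0]

# Toy demo: "impact" peaks when a scalar command parameter reaches 10.0
best = fuzz_loop(
    seed_case=0.0,
    score=lambda x: -abs(x - 10.0),
    mutate=lambda x, rng: x + rng.uniform(-1.0, 1.0),
)
print(round(best, 2))
```

The same skeleton scales to real scenario parameters (waypoint offsets, timing, conflicting guidance signals) once `score` is backed by a simulator that checks whether the vehicle rejected the unsafe command.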
By:
Zhisheng Hu | Product Security Engineer, Zoox, Inc.
Shanit Gupta | Director of Product Security, Zoox, Inc.
Cooper de Nicola | Product Security Engineer, Zoox, Inc.
