Autonomy • AI • Robotics

IROS 2025 Keynotes - Mechanisms and Controls: Fei Miao

IEEE Robotics & Automation Society • February 18, 2026

Why It Matters

By quantifying perception uncertainty and embedding it into robust multi‑agent RL, the approach enables safe, reliable operation of autonomous robots at scale, reducing costly failures and accelerating real‑world adoption.

Key Takeaways

  • Uncertainty quantification improves perception safety in robotics
  • Robust multi‑agent RL handles adversarial state perturbations
  • Hybrid deep‑learning and statistical calibration reduces prediction errors
  • Hierarchical control merges RL decisions with safety‑focused MPC‑CBF
  • Zero‑shot transfer outperforms domain randomization on hardware

Summary

The keynote by Fei Miao focused on advancing uncertainty understanding and safe, robust reinforcement learning for multi‑agent robotic systems, with autonomous driving as a primary example. Miao highlighted the gap between high‑performance perception models and their lack of calibrated uncertainty, which hampers safety guarantees in real‑time decision making.

The research introduces a novel pipeline that predicts both mean and covariance for perception outputs, then applies statistical calibration—such as moving‑block bootstrap and conformal prediction—to produce reliable uncertainty estimates. This approach boosts detection accuracy, improves tracking of occluded objects, and enhances 3D occupancy predictions, especially for rare, safety‑critical classes like pedestrians and bicycles.
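The calibration step described above can be illustrated with a split conformal prediction sketch: conformity scores from a held-out set rescale the model's own uncertainty estimates to hit a target coverage level. The toy data, the miscalibrated standard deviations, and the 90% coverage target below are illustrative assumptions, not values from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for a perception model's predictions on a held-out
# calibration split (hypothetical data, not from the keynote).
n_cal = 500
y_true = rng.normal(0.0, 1.0, n_cal)
y_pred = y_true + rng.normal(0.0, 0.3, n_cal)  # predictions with noise
sigma_pred = np.full(n_cal, 0.25)              # model's (miscalibrated) std estimates

# Split conformal prediction: use normalized residuals as conformity
# scores, then take the finite-sample-corrected (1 - alpha) quantile
# as a multiplier that rescales the predicted uncertainty.
alpha = 0.1                                    # target 90% coverage
scores = np.abs(y_true - y_pred) / sigma_pred
q_level = np.ceil((n_cal + 1) * (1 - alpha)) / n_cal
q_hat = np.quantile(scores, q_level)

# Calibrated interval for a new input: y_pred_new ± q_hat * sigma_new.
covered = np.abs(y_true - y_pred) <= q_hat * sigma_pred
print(f"q_hat = {q_hat:.3f}, empirical coverage = {covered.mean():.3f}")
```

Because the model's raw standard deviations underestimate the true residual spread here, the learned multiplier `q_hat` comes out well above 1, widening the intervals until the coverage guarantee holds.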

Building on calibrated perception, the team formulates a state‑adversarial Markov game to model multi‑agent reinforcement learning under uncertain observations. They prove that robust Nash equilibria rarely exist, so they adopt worst‑case optimization and integrate adversarial training into policy learning. The resulting algorithm couples discrete RL actions with a continuous Model Predictive Control layer constrained by Control Barrier Functions that incorporate perception uncertainty, achieving 100% safety in simulated intersection and highway scenarios.
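The hierarchical idea of a continuous safety layer filtering discrete RL actions can be sketched with a minimal control-barrier-function (CBF) filter on a single-integrator robot (`x_dot = u`) that must keep `x >= X_MIN`. The dynamics, the uncertainty bound `SIGMA`, and the aggressive "learned" policy are all illustrative assumptions; the actual framework couples a full MPC with CBF constraints.

```python
# Minimal CBF safety-filter sketch, assuming single-integrator dynamics
# x_dot = u and a hard position limit X_MIN. All constants are made up
# for illustration, not taken from the talk.
X_MIN, GAMMA, DT = 1.0, 2.0, 0.05
SIGMA = 0.1  # assumed bound on perception error in the position estimate

def barrier(x_est):
    # Inflate the safety margin by the perception-uncertainty bound
    # SIGMA, so the constraint holds even if the true position is
    # SIGMA below the estimate.
    return (x_est - SIGMA) - X_MIN

def safety_filter(x_est, u_nominal):
    # Enforce the CBF condition h_dot >= -GAMMA * h; for x_dot = u
    # this reduces to the box constraint u >= -GAMMA * h(x).
    return max(u_nominal, -GAMMA * barrier(x_est))

x = 3.0
trajectory = []
for _ in range(200):
    x_est = x + 0.05                # bounded estimation error (|err| <= SIGMA)
    u = safety_filter(x_est, -1.0)  # nominal "policy" drives toward the limit
    x += DT * u
    trajectory.append(x)

print(f"min position reached: {min(trajectory):.3f} (limit {X_MIN})")
```

Even though the nominal action pushes toward the boundary at every step, the filter brakes as the barrier value shrinks, so the state settles just above the inflated limit and never crosses `X_MIN`.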

Hardware experiments demonstrate that the robust RL‑MPC‑CBF framework transfers zero‑shot to real robots, outperforming traditional domain‑randomization methods and delivering consistent safety margins even under aggressive perturbations. This work signals a shift toward provably safe, uncertainty‑aware AI for large‑scale robotic fleets, paving the way for reliable deployment in warehouses, manufacturing, and autonomous vehicles.

Original Description

Keynote Title: "From Uncertainty to Action: Robust and Safe Multi-Agent Reinforcement Learning for Embodied AI"
Speaker Biography
Dr. Fei Miao is the Pratt & Whitney Endowed Associate Professor in the School of Computing at the University of Connecticut, which she joined in 2017. She received her Ph.D. degree, along with the Best Doctoral Dissertation Award, in Electrical and Systems Engineering, with a dual M.S. degree in Statistics, from the University of Pennsylvania in 2016, where she also completed her postdoctoral training through 2017. Her research focuses on multi-agent reinforcement learning, robust and safe RL, uncertainty quantification, and foundation models, addressing the safety, efficiency, robustness, and security challenges of Embodied AI. Dr. Miao is a recipient of the NSF CAREER award and several other NSF awards, and a nominee for the NSF Alan T. Waterman Award. Dr. Miao's work has been recognized with multiple best paper awards at top-tier conferences, and she serves as an associate editor for several IEEE journals, including IEEE RA-L and OJ-CSYS, and for multiple conferences.
Abstract
Deploying Embodied AI and multi-agent systems is critically hampered by challenges in perception uncertainty and robust decision-making across diverse scenarios. This talk presents novel uncertainty quantification and robust multi-agent reinforcement learning (MARL) frameworks that directly confront this challenge. First, we introduce an uncertainty quantification method for deep learning-based perception and prediction models. Building upon this, we provide a theoretical analysis of MARL under state uncertainties, leading to a provably robust algorithm that can withstand worst-case uncertainties. Furthermore, we combine control theory with the robust MARL framework to achieve a critical balance between provable safety and high operational efficiency. We demonstrate the power of this research in the context of connected autonomous vehicles, validating our framework in both high-fidelity simulators and on real-world hardware testbeds. The talk concludes by outlining a forward-looking research agenda for creating the next generation of trustworthy, cooperative AI systems.