IROS 2025 Keynotes - Field Robotics: Brendan Englot

IEEE Robotics & Automation Society
Feb 18, 2026

Why It Matters

These technologies enable autonomous underwater robots to inspect, map, and navigate complex, turbid environments with high precision, reducing human risk and operational costs for offshore industries.

Key Takeaways

  • Stereo imaging sonars enable real-time 3D underwater perception.
  • Fusion of sonar and monocular camera improves close-range inspection.
  • Virtual landmark framework reduces map uncertainty and navigation drift.
  • Distributional reinforcement learning adds risk-aware decision making for USVs.
  • Multi-robot rendezvous using shared virtual maps enhances collaborative mapping.

Summary

Brendan Englot’s IROS 2025 keynote highlighted the latest advances in situational awareness and decision‑making for marine robots, spanning perception, exploration, and risk‑aware control. His Robust Field Autonomy Lab at Stevens focuses on equipping underwater platforms with sensors and algorithms that can operate under turbidity, lighting constraints, and dynamic disturbances.

The presentation detailed three core innovations. First, a stereo pair of imaging sonars mounted on a customized Blue ROV generates dense 3D point clouds in real time, using CFAR‑based feature detection and cross‑sensor clustering to reconstruct geometry that cameras alone cannot capture. Second, fusing these sonar outputs with a monocular camera enables close‑range inspection, demonstrated by reconstructing a steel pipe passing near the robot despite turbid water. Third, a virtual‑landmark exploration framework drives down covariance across a discretized map, yielding accurate SLAM without explicit landmark tracking and supporting multi‑robot rendezvous for shared uncertainty reduction.
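The CFAR-based feature detection mentioned above can be illustrated with a minimal sketch of a cell-averaging CFAR detector applied to a single sonar beam's intensity returns; the window sizes and threshold scale here are illustrative defaults, not the lab's actual parameters or pipeline.

```python
import numpy as np

def ca_cfar(intensities, guard=2, train=8, scale=3.0):
    """Cell-averaging CFAR: flag range cells whose intensity exceeds
    scale * a local noise estimate taken from surrounding training
    cells, with guard cells excluded around the cell under test."""
    n = len(intensities)
    detections = np.zeros(n, dtype=bool)
    for i in range(train + guard, n - train - guard):
        # Training cells on both sides of the cell under test.
        left = intensities[i - guard - train : i - guard]
        right = intensities[i + guard + 1 : i + guard + 1 + train]
        noise = np.mean(np.concatenate([left, right]))
        detections[i] = intensities[i] > scale * noise
    return detections
```

Detections from each sonar in the stereo pair would then be clustered across sensors to triangulate 3D points; that fusion step is beyond this sketch.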

Englot showcased experimental results: a pier‑inspection run in which the robot continuously updated a high‑resolution occupancy grid while maintaining low pose uncertainty, contrasted with a next‑best‑view approach that accumulated drift. He also introduced an adaptive implicit quantile network (IQN) that distorts value distributions based on proximity to obstacles, producing conservative trajectories in place of risky greedy paths. Multi‑USV simulations demonstrated non‑cooperative congestion handling and collaborative mapping via virtual landmark exchanges.
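The risk‑aware behavior described above can be sketched minimally: given return quantiles sampled from a distributional critic (as an IQN produces), a conservative action value averages only the lowest quantiles (a CVaR‑style distortion), with the risk level tightened as the vehicle nears an obstacle. The function names and the distance‑based schedule below are assumptions for illustration, not the talk's exact formulation.

```python
import numpy as np

def cvar_from_quantiles(quantile_values, alpha):
    """Risk-sensitive value: mean of the lowest alpha-fraction of
    sampled return quantiles (Conditional Value-at-Risk).
    alpha=1 recovers the risk-neutral mean."""
    q = np.sort(np.asarray(quantile_values))
    k = max(1, int(np.ceil(alpha * len(q))))
    return q[:k].mean()

def adaptive_alpha(distance_to_obstacle, d_safe=5.0):
    """Hypothetical schedule: shrink the risk level (become more
    conservative) as distance to the nearest obstacle decreases."""
    return min(1.0, distance_to_obstacle / d_safe)
```

At decision time, the USV would evaluate each candidate action's quantile samples and pick the one maximizing this distorted value, so the same policy behaves greedily in open water and cautiously near obstacles.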

These advances promise reliable autonomous underwater maintenance for offshore infrastructure, lower operational costs for fish‑farm inspections, and safer navigation of USVs in stochastic riverine environments. By integrating robust 3D perception, uncertainty‑aware exploration, and risk‑sensitive reinforcement learning, marine robotics moves closer to fully autonomous, resilient field deployments.

Original Description

"Keynote Title: ""Situational Awareness and Decision-Making Under Uncertainty for Marine Robots""
Speaker Biography
Brendan Englot is the Anson Wood Burchard Endowed Professor at Stevens Institute of Technology in New Jersey, USA, where he is also the Director of the Stevens Institute for Artificial Intelligence. Brendan and his students develop perception, navigation and decision-making algorithms that enable mobile robots to achieve robust autonomy in complex physical environments. Brendan is a Senior Member of the IEEE, and a co-author of eight U.S. patents and more than 75 refereed journal and conference papers. Abstract
This talk will discuss recent work aimed at advancing the autonomy of marine robots operating in complex environments. First, to achieve the situational awareness needed for autonomous inspection and precise physical intervention, I will discuss research that aims to produce accurate, high-definition 3D maps of underwater structures using wide-aperture multi-beam imaging sonar. Second, I will discuss research intended to help marine robots make safe and efficient navigation decisions under both epistemic and aleatoric uncertainty. To address the former, sonar-equipped underwater robots use ""virtual maps"" as a tool to support accurate map-building under localization uncertainty. To address the latter, we employ distributional reinforcement learning to help lidar-equipped unmanned surface vehicles navigate congested and disturbance-filled environments. Our results include several open-source algorithm implementations and benchmarking tools.
"
