
IROS 2025 Keynotes - Learning and Embodied Control: Abhinav Valada

IEEE Robotics & Automation Society • February 19, 2026

Why It Matters

By enabling label‑efficient perception, continual adaptation, and offline policy refinement, these methods bring truly autonomous, everyday robots closer to commercial viability, reducing deployment costs and expanding their functional scope.

Key Takeaways

  • Open‑world robot autonomy demands continual learning from diverse data.
  • Foundation models enable label‑efficient perception with minimal annotations.
  • Continual SLAM balances adaptation and memory across environments.
  • Diffusion policy adaptation uses world models for offline skill improvement.
  • Neural navigation integrates base motion and manipulation intent for mobile robots.

Summary

Abhinav Valada’s IROS 2025 keynote outlines a roadmap toward open‑world autonomy for everyday robots, emphasizing that true utility requires systems that can learn continuously across heterogeneous environments. He frames the challenge with a data pyramid—ranging from scarce, high‑quality tele‑operated robot data to abundant video streams—and asks how to fuse these tiers into robust policies. The talk then surveys concrete advances: foundation‑model‑driven perception that matches fully supervised performance with only ten labeled images, open‑set segmentation of unseen objects, and 3‑D scene‑graph representations for planning.

He introduces continual SLAM, a dual‑network architecture that retains knowledge from prior scenes while adapting online, and demonstrates superior odometry across multiple city datasets. Further, Valada presents Artipoint for training‑free articulation reasoning from human demonstrations, and D.VA, which fine‑tunes diffusion policies entirely offline inside a learned world model, achieving drawer‑opening performance without any real‑world interactions.

Finally, he showcases neural navigation for mobile manipulation, where a reinforcement‑learning base controller is conditioned on end‑effector intent, enabling zero‑shot task generalization across robots and dynamic environments. The overarching message is that perception must be label‑efficient, domain‑aware, and continuously improving, while policies should adapt offline using simulated rollouts and integrate navigation with manipulation intent. These innovations collectively lower the data‑collection burden, close the sim‑to‑real gap, and make robots more reliable in human‑centric settings, paving the way for service and industrial agents that can operate safely and autonomously throughout an entire day.
The significance lies in demonstrating that robots can acquire and refine complex skills without extensive real‑world trial‑and‑error, dramatically accelerating deployment of adaptable, autonomous agents in homes, factories, and public spaces.
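The offline-refinement idea in the summary can be caricatured in a few lines: evaluate a policy on imagined rollouts inside a learned dynamics model and keep parameter changes that raise the imagined return. Everything in the sketch below is an invented stand‑in—a linear "world model," a quadratic reward, and zeroth‑order search—not D.VA's actual architecture or training losses; it only illustrates the general pattern of improving a skill without real‑world interactions.

```python
import numpy as np

# Illustrative sketch only: every function and constant here is a stand-in.
# The pattern shown is the one the talk describes -- improve a policy using
# imagined rollouts in a learned model instead of real robot trials.

rng = np.random.default_rng(0)

def world_model_step(state, action):
    """Hypothetical learned dynamics: predict the next state and a reward."""
    next_state = state + 0.1 * action            # stand-in transition model
    reward = -float(np.sum(next_state ** 2))     # stand-in task reward
    return next_state, reward

def imagined_return(policy, horizon=20):
    """Roll a linear policy inside the world model; no real-world steps."""
    state = np.ones(2)
    total = 0.0
    for _ in range(horizon):
        action = policy @ state
        state, reward = world_model_step(state, action)
        total += reward
    return total

# Zeroth-order refinement: keep perturbations that raise the imagined return.
policy = rng.normal(0.0, 0.1, size=(2, 2))
initial_return = imagined_return(policy)
best_return = initial_return
for _ in range(200):
    candidate = policy + rng.normal(0.0, 0.02, size=(2, 2))
    ret = imagined_return(candidate)
    if ret > best_return:
        policy, best_return = candidate, ret
```

Because only improving candidates are accepted, the imagined return is monotonically non‑decreasing; the real methods in the talk replace this crude search with gradient‑based fine‑tuning of a diffusion policy.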

Original Description

Keynote Title: "Open World Embodied Intelligence: Learning from Perception to Action in the Wild"
Speaker Biography
Abhinav Valada is a Full Professor at the University of Freiburg, where he directs the Robot Learning Lab. He is affiliated with the Department of Computer Science, the BrainLinks-BrainTools center, and a founding faculty of the ELLIS Unit Freiburg. He received his Ph.D. from the University of Freiburg and his M.S. in Robotics from Carnegie Mellon University. Abhinav’s research lies at the intersection of robotics, machine learning, and computer vision, addressing fundamental problems in perception, state estimation, and decision making to enable robots to operate reliably in complex and diverse open-world settings. For his research, he received the IEEE RAS Early Career Award in Robotics and Automation, IROS Toshio Fukuda Young Professional Award, NVIDIA Research Award, IROS Best Paper on Cognitive Robotics, among others. Abhinav is a DFG Emmy Noether AI Fellow, Scholar of the ELLIS Society, IEEE Senior Member, and Co-Chair of the IEEE RAS Technical Committee on Robot Learning. He is a Senior Editor for IEEE Robotics and Automation Letters as well as an Associate Editor and Area Chair for multiple conferences and journals. Many aspects of his research have been prominently featured in wider media such as the Discovery Channel, NBC News, Business Times, and The Economic Times.
Abstract
A longstanding goal in robotics is to build agents that learn from the world and assist people in everyday tasks across homes, factories, and streets. This talk outlines a path to open world autonomy that learns continuously, reasons with language and vision, and closes the loop from perception to action. I will present representations that capture objects, relations, and articulation, online learning that adapts during deployment without forgetting, and uncertainty-aware decision making that knows when to ask for clarification, seek information, or recover. I will also discuss data and model efficiency in policy learning for long-horizon tasks, including from demonstrations, teleoperation, and world models for rapid offline adaptation. I will conclude with a discussion of safety, fairness, and responsible deployment, so that learning-enabled autonomy earns trust and delivers value to society.