
NASA’s Perseverance rover has gained the ability to determine its exact position on Mars without Earth‑based assistance, thanks to a new system called Mars Global Localization. The rover stitches together panoramic shots into a bird’s‑eye map, then an onboard algorithm rapidly matches this view against high‑resolution terrain maps supplied by orbiters. A commercial‑grade processor, originally used for communicating with the Ingenuity helicopter, performs these calculations more than 100 times faster than Perseverance’s primary computer, delivering a pinpoint position fix in seconds. The same team is now testing generative‑AI tools to create optimal driving waypoints, extending the rover’s autonomous range. Engineers note that the technology, proven on Mars, will be critical for upcoming lunar missions where harsh lighting and long nights demand precise navigation. Self‑localization and AI‑driven path planning free Perseverance to explore farther, collect more science, and reduce the latency and workload of ground controllers, setting a template for future planetary rovers and crewed landers.
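
The map-matching step can be illustrated with a toy template matcher: slide the rover-derived bird's-eye patch across the orbital terrain map and keep the offset with the lowest sum-of-squared-differences. This is a minimal sketch under assumed inputs (synthetic elevation grids, brute-force search); the flight algorithm is far more sophisticated.

```python
import numpy as np

def locate_patch(orbital_map: np.ndarray, rover_patch: np.ndarray):
    """Brute-force template match: slide the rover-derived patch over the
    orbital map and return the (row, col) offset with the smallest
    sum-of-squared-differences. A toy stand-in for the real matcher."""
    H, W = orbital_map.shape
    h, w = rover_patch.shape
    best, best_pos = np.inf, (0, 0)
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            ssd = np.sum((orbital_map[r:r + h, c:c + w] - rover_patch) ** 2)
            if ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

# Synthetic example: carve a patch out of a random "orbital" elevation map
rng = np.random.default_rng(0)
terrain = rng.normal(size=(60, 60))
patch = terrain[20:30, 35:45].copy()
print(locate_patch(terrain, patch))  # → (20, 35)
```

In practice a normalized correlation score and a coarse-to-fine search would replace the exhaustive scan, but the localization principle is the same.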

In this keynote, Xifeng Yan from UC Santa Barbara introduced a token‑level adaptive inference framework for transformer models, arguing that the uniform computational cost per token is inefficient for many robotics and language tasks. By inserting a lightweight router before...

Brendan Englot’s IROS 2025 keynote highlighted the latest advances in situational awareness and decision‑making for marine robots, spanning perception, exploration, and risk‑aware control. His Robust Field Autonomy Lab at Stevens focuses on equipping underwater platforms with sensors and algorithms that...

The keynote at IROS 2025 presented a multidisciplinary optimization framework for long‑range autonomous underwater vehicles (AUVs), aiming to overcome traditional design bottlenecks and deliver cost‑effective, high‑performance ocean observation platforms. The speaker, Jiancheng Yu of the Shenyang Institute of the Chinese Academy...

Matteucci’s IROS 2025 keynote frames agriculture’s fourth, digital revolution as a necessity to feed a projected two‑billion‑person increase by 2100. He links declining farm labor, rising food insecurity, and unsustainable fertilizer and water use to the urgent need for robotics,...

Timothy Chung’s IROS 2025 keynote outlined Microsoft’s strategic roadmap for field robotics, emphasizing a shift from isolated robot pilots to large‑scale, interoperable robot federations that operate across air, sea, ground and underwater domains. He framed this evolution on a two‑dimensional...

The keynote by Kei Okada traced the evolution of humanoid robotics from the early HRP2 platform to today’s foundation‑model‑driven systems, emphasizing that robots must coexist with humans in environments built for us. He argued that the defining trait of humanity—tool...

Xingxing Wang’s IROS 2025 keynote highlighted his firm’s rapid evolution from early quadruped platforms to a diversified humanoid portfolio. Since its 2016 founding, the company unveiled the full‑size H1, then the compact 1.3‑meter G1 in 2023, and most recently the...

The IROS 2025 keynote by Eiichi Yoshida examined how contact‑rich human motions can be harvested to advance humanoid robot mechanisms and control. Yoshida traced the evolution from a handful of humanoid platforms in 2022 to a burgeoning ecosystem of commercial...

The keynote by Fei Miao focused on advancing uncertainty understanding and safe, robust reinforcement learning for multi‑agent robotic systems, with autonomous driving as a primary example. Miao highlighted the gap between high‑performance perception models and their lack of calibrated uncertainty,...

Kenjiro Tadakuma’s IROS 2025 keynote centered on a sweeping portfolio of novel mechanisms and control concepts, ranging from omni‑directional locomotion modules to bio‑inspired soft actuators. He framed the discussion around the invention process, showcasing dozens of prototype models that illustrate both...

At Manifest Vegas, Easy Metrics co‑founder Dan Keto explained how distribution centers are moving from simple labor tracking to a holistic, engineering‑driven Warehouse Performance Management model. He emphasized that the proliferation of robotics and siloed software creates a fragmented data...

At the World Defense Show, Shield AI highlighted its AI‑driven unmanned aircraft designed for GPS‑denied battlefields, emphasizing trusted autonomy and sovereign control. The company unveiled its latest vertical‑takeoff‑and‑landing (VTOL) strike platform, the X‑BAT, powered by a GE F‑110 engine that...

Aaron Borger, co‑founder and CEO of Orbital Robotics, presented the company’s vision for AI‑controlled robotic arms that can capture, refuel, repair, or de‑orbit spacecraft in orbit. The firm aims to provide space‑grade hardware and integrated software to any satellite...

CreateMe Technologies, led by Cam Myers, is deploying AI‑driven robotics to automate soft‑material handling, a long‑standing barrier in apparel manufacturing. By pairing machine‑learning perception with a custom robotic stack, the firm replaces traditional stitching with high‑speed adhesive bonding. This enables...

The episode of TechStrong TV featured Brian Dawson, director of product management Linux at CIQ, discussing the company’s launch of a hardened version of Rocky Linux designed to meet the security demands of the AI‑driven compute era. Dawson highlighted that AI...

The video documents the author’s first public ride in a Zoox autonomous vehicle in Las Vegas, part of a limited free‑service pilot that requires only a credit‑card‑linked app download. The eight‑seat pod offers individual climate and music controls at four user...

DiskChunGS introduces a scalable 3D Gaussian splatting SLAM pipeline that overcomes traditional GPU memory constraints by treating scene reconstruction as a spatial streaming problem. The system partitions the environment into discrete chunks, keeping only the currently visible regions in GPU...
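
The chunk-residency idea can be sketched in a few lines (a hypothetical `ChunkManager`, not the paper's code): the world is divided into grid cells, and only cells near the camera are kept "on GPU" while the rest are evicted to host memory.

```python
class ChunkManager:
    """Toy spatial-streaming scheme (illustrative assumption): the scene
    is partitioned into square chunks; only chunks within `radius` grid
    cells of the camera stay 'resident' (on GPU), the rest are evicted."""

    def __init__(self, chunk_size=10.0, radius=1):
        self.chunk_size = chunk_size
        self.radius = radius
        self.resident = set()   # chunks currently on the GPU
        self.evicted = set()    # chunks streamed back to host memory

    def chunk_of(self, x, y):
        return (int(x // self.chunk_size), int(y // self.chunk_size))

    def update(self, cam_x, cam_y):
        cx, cy = self.chunk_of(cam_x, cam_y)
        visible = {(cx + dx, cy + dy)
                   for dx in range(-self.radius, self.radius + 1)
                   for dy in range(-self.radius, self.radius + 1)}
        self.evicted |= self.resident - visible   # stream out what left view
        self.evicted -= visible                   # stream back what returned
        self.resident = visible
        return self.resident

mgr = ChunkManager(chunk_size=10.0, radius=1)
print(len(mgr.update(5.0, 5.0)))    # 3x3 neighborhood around the camera
```

The real system additionally manages per-chunk Gaussian parameters and asynchronous transfers, but the working-set logic follows this pattern.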

The session, led by PX4 maintainer Benjamin and Linux Foundation’s Ramon, introduced a new workflow for achieving precision landing on drones by integrating the PX4 autopilot stack with ROS 2. Key technical points included the use of PX4’s uORB middleware extended to...

The IROS 2025 keynote by Fumin Zhang examined how robots can perform high‑stakes search and rescue tasks by marrying classic search theory with modern generative AI and control techniques. Zhang highlighted that, despite a half‑century of research, the field has...

Fumiya Iida’s IROS 2025 keynote framed embodied intelligence as the reciprocal relationship between a body’s physical dynamics and the brain’s control mechanisms, challenging the longstanding brain‑versus‑body dualism that has split robotics from AI. He highlighted the staggering scale gap—30 trillion cells...

The video highlights Alabama’s acute shortage of obstetrician‑gynecologists in many rural counties and the state’s experimental response: deploying robotic ultrasound systems to scan pregnant patients remotely. Ultrasound imaging is vital for high‑risk pregnancies—cervical‑length measurement, fetal growth monitoring, and stillbirth prevention—but traditional...

The video introduces acoustic robotics, where tiny polymer devices are powered solely by ultrasound‑induced bubble dynamics, eliminating wires, batteries, or magnets and opening the door to fully wireless medical microrobots. A thin polymer sheet is laser‑molded with thousands of sub‑millimetre cavities...

The video examines Vitestro’s robotic blood‑drawing system, which uses ultrasound to locate a vein, positions the arm, inserts the needle, collects the sample, retracts the needle and applies a bandage—all without human hands touching the needle. The device already carries...

Li Zhang’s IROS 2025 keynote highlighted the rapid evolution of miniature biomedical robots, emphasizing magnetic actuation, bio‑hybrid materials, and modular architectures for safe, targeted therapy. He traced the concept back to Richard Feynman’s swallowable‑surgery vision and described how his team fabricates...

The IROS 2025 Human‑Robot Interaction keynote by Javier Alonso‑Mora centered on the challenges and breakthroughs in multi‑agent autonomy for mobile robots. He outlined how robots must not only navigate complex, dynamic environments but also cooperate with other robots and humans...

The keynote highlighted the limits of pure deep‑learning approaches for robot cognition, arguing that true general intelligence requires embodied, tactile experience. Perla Maiolino described how artificial skin (CySkin) and distributed proximity sensors give robots a closed‑loop sense‑act‑perceive cycle, allowing safe,...

Eric Kimberling, CEO of Third Stage Consulting, hosted a solo “Industry 4.0 Reality Check” session after his guest canceled, framing a wide-ranging discussion on manufacturing technology trends, what’s working, and what’s overhyped. He emphasized his firm’s manufacturing focus, invited audience...

Elon Musk uses a recent interview to argue that America’s manufacturing lag, especially in rare‑earth refining, can only be closed with advanced robotics. He points out that China processes roughly twice as much rare‑earth ore as the rest of the world combined, and the United States routinely...

The lecture framed autonomous driving as the ultimate test for artificial intelligence, contrasting it with games like chess that have already been mastered by AI. While chess operates in a closed, rule‑bound environment, driving unfolds in an open system where...

The video introduces a senior capstone course, Design and Testing of Autonomous Vehicles, where students build a complete autonomous landing system that mimics lunar‑lander challenges. The class moves from mission definition through requirements, software architecture, and finally hardware integration, tasking...

The paper presents the first differentiable Model Predictive Control (MPC) framework that can vary its cost‑function weights online for constrained nonlinear systems, leveraging gradient‑based policy learning. A lightweight neural network receives real‑time observations—such as reference trajectory curvature and velocity—and outputs MPC...
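
The core idea, a network that maps real-time observations to MPC cost weights, can be sketched with a toy scalar system. Everything here (the two-feature input, the exponential weight map, the one-step quadratic controller) is an illustrative assumption, not the paper's implementation.

```python
import numpy as np

def weight_net(features, W, b):
    """Tiny stand-in for the paper's weight network: maps real-time
    features (e.g. reference curvature, velocity) to positive MPC weights."""
    return np.exp(W @ features + b)   # exp keeps weights strictly positive

def one_step_mpc(x, x_ref, q, r):
    """Closed-form one-step controller for scalar dynamics x' = x + u:
    minimize q*(x' - x_ref)^2 + r*u^2.
    Setting the derivative to zero gives u = q*(x_ref - x)/(q + r)."""
    return q * (x_ref - x) / (q + r)

W = np.array([[0.5, -0.2]])
b = np.array([0.0])
q_t = weight_net(np.array([1.0, 0.5]), W, b)[0]   # curvature=1.0, speed=0.5
u = one_step_mpc(x=0.0, x_ref=1.0, q=q_t, r=0.1)
print(round(u, 3))  # → 0.937
```

Because both the weight map and the controller are differentiable, the tracking loss can be backpropagated through `one_step_mpc` into `W` and `b`, which is the mechanism that lets the weights adapt online.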

Now that the Atlas enterprise platform is getting to work, the research version gets one last run in the sun. Our engineers made one final push to test the limits of full-body control and mobility, with help from the RAI...

The final project presentation of the Robotics Developer Masterclass showcased Aaron Emer’s "tic‑tac‑toe bot," a robotic arm that plays tic‑tac‑toe against a human opponent using computer vision and motion planning. The system combines the ROS framework, OpenCV for perception, MoveIt...

In February 2026 the company closed a $270M Series B to fund the first stages of commercialization, with production deployments planned for later this year.

Jee Hwan Ryu presented the latest advances in soft‑growing "vine" robots, machines that extend their bodies by everting material rather than moving a rigid chassis. This eversion‑based locomotion lets the robot slip through tight, slippery or even vertical passages,...

Kevin Chen’s presentation spotlights a new generation of insect‑scale aerial robots that combine soft artificial muscles with rigid airframes, challenging the conventional view that soft robots are inherently slow and imprecise. By leveraging dielectric elastomer actuators capable of hundreds of...

Xifeng Yan, a UC Santa Barbara researcher, presented an adaptive inference framework for transformer models, highlighting its relevance to emerging robotics applications that increasingly rely on large‑scale language and vision transformers. He argued that the uniform computational cost per token...
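
A lightweight router of this kind can be sketched as a per-token gate: a logistic score decides whether each token takes the expensive transformer block or a cheap skip path. The gate, the threshold, and both paths below are illustrative assumptions, not Yan's architecture.

```python
import numpy as np

def route_tokens(token_embs, router_w, threshold=0.5):
    """Per-token logistic gate: score each token embedding and flag the
    ones that should take the expensive computation path."""
    scores = 1.0 / (1.0 + np.exp(-(token_embs @ router_w)))
    return scores >= threshold, scores

def adaptive_block(token_embs, router_w, heavy_fn, light_fn):
    """Apply heavy_fn to routed tokens and light_fn to the rest,
    so per-token cost tracks the router's decision."""
    heavy, _ = route_tokens(token_embs, router_w)
    out = np.where(heavy[:, None], heavy_fn(token_embs), light_fn(token_embs))
    return out, int(heavy.sum())

rng = np.random.default_rng(1)
x = rng.normal(size=(8, 4))                      # 8 tokens, 4-dim embeddings
out, n_heavy = adaptive_block(x, np.ones(4),
                              heavy_fn=lambda t: t * 2.0,  # stand-in for full block
                              light_fn=lambda t: t)        # skip path
print(n_heavy, "of", len(x), "tokens took the heavy path")
```

In a real model the router would be trained jointly with the network, trading accuracy against the fraction of tokens sent down the heavy path.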

The presentation focused on making autonomous robots transparent by integrating interpretable and explainable AI methods. Ramirez outlined a five‑layer model—intention, reasoning, capabilities, prediction, and context—designed to let humans understand a robot’s internal decision process. Key technical contributions include a semantic decision‑tree...

Fuchun Sun outlines a knowledge-guided approach to embodied vision-language-action (VLA) agents that integrates tactile sensing and physical awareness with large language models. He argues tactile feedback closes the semantic–physics gap—enabling fine force control, collision detection, and perception of material properties—critical...

Host answers listener questions about exploration of icy moons, outlining a variety of nontraditional rover concepts—large-wheeled vehicles, rocket-assisted hoppers, snake-like robots, under-ice crawlers and rappelling bots—designed to handle pulverized ice, spikes and cliffs. He notes hoppers that leap on ballistic...
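
The appeal of hoppers in low gravity falls out of the flat-ground ballistic range formula d = v² sin(2θ)/g: the same leap covers vastly more ground on a small icy moon. The Enceladus surface gravity used below (≈ 0.113 m/s²) is an assumed round value for illustration.

```python
import math

def hop_range(v, theta_deg, g):
    """Flat-ground ballistic range d = v^2 * sin(2*theta) / g
    (no atmosphere, launch and landing at the same height)."""
    return v ** 2 * math.sin(math.radians(2 * theta_deg)) / g

# The same 5 m/s, 45-degree hop on Earth vs. Enceladus
print(round(hop_range(5, 45, 9.81), 1))    # ~2.5 m on Earth
print(round(hop_range(5, 45, 0.113), 1))   # ~221 m on Enceladus
```

The roughly 87x range gain is just the ratio of the two gravities, which is why ballistic hopping competes with wheels on bodies like Enceladus.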

Fauna Robotics’ new general-purpose platform Sprout is drawing attention from major partners including Disney and Boston Dynamics as a safe, human-friendly robot designed for research labs. Demonstrated by IEEE Spectrum’s editor, Sprout can be teleoperated via VR paddles, controlled through...

Seoul National University researcher Hyoun Jin Kim reviewed advances and remaining hurdles in autonomous aerial manipulation, arguing that drones must move beyond sensing to physically interacting with environments. He highlighted core technical challenges—limited thrust, stability during contact, unknown interaction forces,...

Marco Hutter traced the rapid maturation of legged robotics from his ETH Zurich PhD work on dynamically balancing quadrupeds to commercial deployments today, highlighting advances in actuation, autonomy, sensing and system-level robustness. He described early field trials that exposed reliability...

The paper presents Actor‑Critic Model Predictive Control (ACMPC), a hybrid framework that merges a differentiable MPC module with an actor‑critic reinforcement‑learning architecture to achieve agile flight in highly nonlinear quadrotor systems. By embedding a dynamics model directly into the MPC, the...
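
The hybrid structure can be caricatured with a scalar system: an actor proposes a goal, a closed-form one-step MPC turns it into a control, and the actor parameter is tuned against rollout cost. The scalar dynamics, the one-parameter actor, and the grid search standing in for policy gradient are all illustrative assumptions.

```python
import numpy as np

def mpc_layer(x, x_goal, q=1.0, r=0.1):
    """One-step quadratic MPC used as the policy's final layer:
    u* = q*(x_goal - x)/(q + r) minimizes q*(x+u-x_goal)^2 + r*u^2."""
    return q * (x_goal - x) / (q + r)

def actor(x, theta):
    """Tiny actor: maps the state to the goal it hands the MPC layer."""
    return theta * x

def rollout_cost(theta, x0=1.0, steps=5):
    """Cost of driving x toward 0. In the real (autodiff) framework the
    gradient of this cost w.r.t. theta flows through the MPC layer."""
    x, cost = x0, 0.0
    for _ in range(steps):
        u = mpc_layer(x, actor(x, theta))
        x = x + u
        cost += x ** 2 + 0.1 * u ** 2
    return cost

# Crude parameter search as a stand-in for actor-critic training
best_theta = min(np.linspace(-1, 1, 41), key=rollout_cost)
print(float(best_theta))
```

The point of embedding the MPC as a layer is exactly that `rollout_cost` is differentiable end to end, so the grid search above would be replaced by gradient-based policy updates.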

The video introduces a quadruped robot designed to autonomously monitor volcanic gases on Italy’s Mount Etna, addressing the long‑standing challenge of sampling in unstable, toxic terrain. Equipped with a commercial quadrupole mass spectrometer, the robot combines global localization and terrain‑aware navigation,...

The video introduces a multitask reinforcement‑learning framework that trains a single, generalist controller for quadrotors capable of handling stabilization, high‑speed racing, and velocity‑tracking commands. By partitioning sensor inputs into shared and task‑specific observations, the system feeds each through a common...
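
The shared/task-specific observation split can be sketched as one trunk feeding several task heads. The observation sizes, the tanh trunk, and the linear heads below are illustrative assumptions, not the trained architecture.

```python
import numpy as np

def multitask_policy(shared_obs, task_obs, W_shared, heads):
    """Toy split-observation design: shared state (e.g. attitude and body
    rates) passes through one common trunk; each task head then appends
    its own observation (a velocity command for tracking, a gate direction
    for racing, nothing extra for stabilization)."""
    z = np.tanh(W_shared @ shared_obs)            # shared trunk features
    return {task: head @ np.concatenate([z, task_obs[task]])
            for task, head in heads.items()}

rng = np.random.default_rng(0)
shared = rng.normal(size=6)                        # attitude + body rates
task_obs = {"stabilize": np.zeros(2),              # no extra input
            "race": rng.normal(size=2),            # next-gate direction
            "track": np.array([1.0, 0.0])}         # commanded velocity
W_shared = rng.normal(size=(4, 6))
heads = {t: rng.normal(size=(4, 6)) for t in task_obs}   # 4 motor outputs
actions = multitask_policy(shared, task_obs, W_shared, heads)
print(sorted(actions))
```

Sharing the trunk across tasks is what lets a single controller reuse low-level flight skills while the small heads specialize per task.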

The paper introduces a novel online‑learning framework, Rapid Policy Adaptation via Differentiable Simulation (RA‑L 2026), that lets quadrotor controllers adjust to unknown disturbances within seconds of real‑world deployment. The method starts with a low‑fidelity, fully differentiable dynamics model to train a policy...
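
The adaptation mechanism can be illustrated in miniature: fit a residual disturbance term so a differentiable one-step model matches observed transitions, using the analytic gradient of the prediction error. The scalar dynamics and the residual-force parameterization are illustrative assumptions.

```python
def adapt_disturbance(transitions, d_hat=0.0, lr=25.0, iters=100):
    """Toy take on adaptation through a differentiable model: fit a
    residual force d so the one-step model x1 = x0 + (u + d)*dt matches
    observed transitions, by gradient descent on squared error."""
    for _ in range(iters):
        grad = 0.0
        for x0, u, dt, x1 in transitions:
            pred = x0 + (u + d_hat) * dt
            grad += 2.0 * (pred - x1) * dt   # d(error^2)/d(d_hat)
        d_hat -= lr * grad / len(transitions)
    return d_hat

# Synthetic transitions generated with a true disturbance d = -0.3
true_d = -0.3
data = [(0.0, 1.0, 0.1, 0.0 + (1.0 + true_d) * 0.1),
        (0.5, -0.5, 0.1, 0.5 + (-0.5 + true_d) * 0.1)]
print(round(adapt_disturbance(data), 3))  # → -0.3
```

Because the model is differentiable, the same gradient can continue past the disturbance estimate into the policy itself, which is what makes second-scale in-flight adaptation possible.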

The video walks viewers through Arduino App Lab, a nascent IDE for the Arduino Uno Q that blends Python on the board’s micro‑computer with C++ on its microcontroller. It explains the board’s dual‑processor architecture—QRB2210 running Debian Linux and STM32U585 running...

At CES 2026 NVIDIA introduced Alpamayo, an open‑source ecosystem designed to push autonomous‑driving technology toward Level 4 capability by embedding reasoning into AI models. The announcement highlighted a suite of components—including a large‑scale data set, a closed‑loop simulation framework, and a...