
Jee Hwan Ryu presented the latest advances in soft‑growing “vine” robots, machines that extend their bodies by everting material rather than moving a rigid chassis. This eversion‑based locomotion lets the robot slip through tight, slippery, or even vertical passages, making it a promising tool for both post‑disaster search‑and‑rescue and minimally invasive medical procedures. The research tackles three technical hurdles: steering, tip‑mount stability, and reliable retraction. Ryu’s team favors whole‑body steering, using artificial‑muscle actuators wrapped around the robot to induce curvature, while avoiding complex tip‑joint mechanisms. To keep a camera or sensor at the tip during rapid growth, they introduced an origami‑folded material‑feeding system that separates the cable channel from the eversion stream, preventing the cable from being engulfed. A simple internal retraction channel, activated by pressurizing a sealed ring, enables the robot to pull itself back without external pulling forces. Demonstrations included a portable disaster‑response prototype that could navigate a collapsed‑building mockup, deliver a water bottle, and transmit live video—all from a compact control box. In collaboration with clinicians, the same platform was adapted for colonoscopy, showing safe, low‑force navigation through animal intestines and rapid self‑retraction. A separate self‑wearing garment project illustrated the robot’s potential for assistive clothing, leveraging its unfolding motion to dress users with limited mobility. If the challenges of sharp intestinal bends and reliable tip‑tool integration are resolved, vine robots could revolutionize emergency response by reaching victims in confined rubble and transform endoscopic procedures by reducing patient discomfort and infection risk. Their modular, low‑pressure design also opens avenues in wearable robotics and other soft‑automation fields.
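Whole‑body steering of this kind is commonly modeled with constant‑curvature kinematics: contracting the artificial muscle on one side shortens that side relative to the backbone, bending the body into a circular arc. The sketch below illustrates the idea in the plane; the function name and parameters are illustrative assumptions, not details of Ryu's implementation.

```python
import math

def tip_pose(length, contraction, width):
    """Constant-curvature model of a planar soft-growing segment.

    length      -- everted backbone length (m)
    contraction -- fractional shortening of the actuated side (0..1)
    width       -- offset between actuated side and backbone (m)
    Returns (x, y, heading) of the tip in the base frame.
    """
    # Shortening one side by contraction*length over a body of the given
    # width bends the segment with curvature kappa = contraction / width.
    kappa = contraction / width
    if abs(kappa) < 1e-9:           # no asymmetry: straight growth
        return (length, 0.0, 0.0)
    theta = kappa * length          # total bend angle (rad)
    r = 1.0 / kappa                 # arc radius
    x = r * math.sin(theta)
    y = r * (1.0 - math.cos(theta))
    return (x, y, theta)
```

Under this model, a 5 % contraction on a 5 cm‑wide body curls a 1 m segment through a full radian, which is why modest muscle strains suffice to steer the everting tip.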

Kevin Chen’s presentation spotlights a new generation of insect‑scale aerial robots that combine soft artificial muscles with rigid airframes, challenging the conventional view that soft robots are inherently slow and imprecise. By leveraging dielectric elastomer actuators capable of hundreds of...

Xifeng Yan, a UC Santa Barbara researcher, presented an adaptive inference framework for transformer models, highlighting its relevance to emerging robotics applications that increasingly rely on large‑scale language and vision transformers. He argued that the uniform computational cost per token...
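A standard way to make per‑token cost adaptive rather than uniform is early exit: a token leaves the network at a shallow layer as soon as an intermediate prediction head is confident enough. The toy sketch below shows the control flow only; the layers, head, and threshold are invented for illustration and are not Yan's framework.

```python
def sharpen(h):
    """Toy 'layer': square and renormalize, sharpening the distribution."""
    squared = [p * p for p in h]
    total = sum(squared)
    return [p / total for p in squared]

def adaptive_forward(token, layers, classify, threshold=0.9):
    """Run `token` through `layers`, exiting as soon as the intermediate
    head `classify` is confident. Returns (predicted_class, layers_used)."""
    h = token
    probs = classify(h)
    for depth, layer in enumerate(layers, start=1):
        h = layer(h)
        probs = classify(h)
        if max(probs) >= threshold:     # confident: skip remaining layers
            return probs.index(max(probs)), depth
    return probs.index(max(probs)), len(layers)

# An "easy" token exits early; the identity head reads the state directly.
pred, used = adaptive_forward([0.6, 0.4], [sharpen] * 6, classify=lambda h: h)
```

The easy token above exits after three of six layers, while a maximally ambiguous input (`[0.5, 0.5]`) runs the full stack, which is the sense in which per‑token cost becomes non‑uniform.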

The presentation focused on making autonomous robots transparent by integrating interpretable and explainable AI methods. Ramirez outlined a five‑layer model—intention, reasoning, capabilities, prediction, and context—designed to let humans understand a robot’s internal decision process. Key technical contributions include a semantic decision‑tree...
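A decision tree becomes "semantic" when each internal node carries a human‑readable predicate, so the path from root to leaf doubles as an explanation of the chosen action. The minimal sketch below illustrates that idea; the node names, state fields, and example scenario are hypothetical, not taken from the presented system.

```python
class DecisionNode:
    """Internal node: a human-readable test on the robot's state."""
    def __init__(self, description, test, yes, no):
        self.description, self.test, self.yes, self.no = description, test, yes, no

class Leaf:
    """Terminal node: the action the robot commits to."""
    def __init__(self, action):
        self.action = action

def decide(node, state):
    """Walk the tree; return (action, explanation), where the explanation
    is the list of predicates evaluated along the path taken."""
    trace = []
    while isinstance(node, DecisionNode):
        outcome = node.test(state)
        trace.append(f"{node.description}: {'yes' if outcome else 'no'}")
        node = node.yes if outcome else node.no
    return node.action, trace

# Hypothetical scenario: a mobile robot deciding whether to proceed.
tree = DecisionNode(
    "path ahead is clear", lambda s: s["obstacle_dist"] > 1.0,
    yes=Leaf("continue"),
    no=DecisionNode(
        "obstacle is a person", lambda s: s["obstacle_type"] == "person",
        yes=Leaf("stop and announce"),
        no=Leaf("replan route")))

action, why = decide(tree, {"obstacle_dist": 0.4, "obstacle_type": "person"})
```

Because every branch taken is recorded in plain language, the robot can answer "why did you stop?" by replaying `why` verbatim, which is the transparency property such models aim for.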

Fuchun Sun outlines a knowledge-guided approach to embodied vision-language-action (VLA) agents that integrates tactile sensing and physical awareness with large language models. He argues that tactile feedback closes the semantic–physics gap—enabling fine force control, collision detection, and perception of material properties—critical...

Seoul National University researcher Hyoun Jin Kim reviewed advances and remaining hurdles in autonomous aerial manipulation, arguing that drones must move beyond merely sensing environments to physically interacting with them. He highlighted core technical challenges—limited thrust, stability during contact, unknown interaction forces,...
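Stability during contact is commonly addressed with impedance control: rather than tracking position stiffly, the controller makes the end‑effector respond to unknown interaction forces like a mass‑spring‑damper anchored at a reference point. A one‑dimensional sketch, with all gains and names chosen for illustration rather than drawn from Kim's work:

```python
def impedance_step(x, v, f_ext, x_ref, dt, m=1.0, b=8.0, k=20.0):
    """One semi-implicit Euler step of a 1-D impedance law.

    The commanded dynamics are  m*a = f_ext - b*v - k*(x - x_ref):
    the tool behaves like a mass m on a spring k and damper b
    anchored at x_ref, yielding compliantly to the external force f_ext.
    Returns the updated (position, velocity).
    """
    a = (f_ext - b * v - k * (x - x_ref)) / m
    v_next = v + a * dt
    x_next = x + v_next * dt
    return x_next, v_next

# Pressing with a constant 2 N: the tool settles f/k = 2/20 = 0.1 m
# past the reference instead of fighting the surface at full stiffness.
x, v = 0.0, 0.0
for _ in range(3000):
    x, v = impedance_step(x, v, f_ext=2.0, x_ref=0.0, dt=0.01)
```

Lowering `k` makes the drone softer on contact at the cost of position accuracy, which is exactly the trade‑off a flying manipulator with limited thrust must tune.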

Marco Hutter traced the rapid maturation of legged robotics from his ETH Zurich PhD work on dynamically balancing quadrupeds to commercial deployments today, highlighting advances in actuation, autonomy, sensing and system-level robustness. He described early field trials that exposed reliability...