
Karinne Ramirez-Amaro: "Transparent Robot Decision-Making with Interpretable & Explainable Methods"

January 29, 2026

IEEE Robotics and Automation Society

Why It Matters

Transparent, interpretable robot decision‑making builds user trust and reduces costly failures, accelerating real‑world deployment of autonomous systems.

Key Takeaways

  • Define robot transparency layers: intention, reasoning, capabilities, prediction, context.
  • Use decision trees with description logic for interpretable intent recognition.
  • Combine ontology and LLMs to classify unseen objects on the fly.
  • Contrastive search predicts failures, adjusts parameters, improves sim-to-real transfer.
  • Hybrid symbolic planning and RL enables adaptive low‑level execution.

Summary

The presentation focused on making autonomous robots transparent by integrating interpretable and explainable AI methods. Ramirez-Amaro outlined a five‑layer model—intention, reasoning, capabilities, prediction, and context—designed to let humans understand a robot’s internal decision process. Key technical contributions include a semantic decision‑tree framework enriched with description‑logic ontologies for intent recognition, and the use of large language models as contextual back‑ends to classify objects absent from the ontology in real time. For reasoning, a contrastive search algorithm learns causal graphs from simulated runs, enabling the robot to anticipate failures and adjust parameters before they occur, achieving 80‑85% fidelity when transferred to physical hardware. Illustrative examples ranged from a virtual‑reality pasta‑making scenario, where a bottle was instantly classified via the LLM‑augmented ontology, to a cube‑tower task that exposed causality gaps and was resolved through the contrastive search. The system also merges high‑level symbolic planning with low‑level reinforcement‑learning policies, allowing on‑the‑fly adaptation to dynamic environments and demonstrating robustness when objects move or disappear. The work promises greater trust and reliability in human‑robot interaction, offering open‑source code and datasets that accelerate research on transferable, transparent robotic skills—critical for deploying robots in manufacturing, healthcare, and collaborative settings.
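To make the ontology-plus-LLM idea concrete, below is a minimal Python sketch of an interpretable object classifier that consults a known ontology first and falls back to a language model for labels it has never seen, returning a human-readable explanation alongside each decision. The category names, the `query_llm` helper, and the caching behaviour are illustrative assumptions for this sketch, not the speaker's actual implementation.

```python
# Illustrative sketch only: object classification that first consults an
# ontology of known labels and falls back to an LLM for unseen objects.
# The ontology contents and the query_llm() helper are hypothetical.

from dataclasses import dataclass, field


@dataclass
class Ontology:
    """Maps known object labels to the semantic categories used by the planner."""
    categories: dict[str, str] = field(default_factory=lambda: {
        "pot": "Container",
        "spoon": "Tool",
        "pasta_box": "Ingredient",
    })

    def classify(self, label: str) -> str | None:
        return self.categories.get(label)


def query_llm(label: str, known_categories: list[str]) -> str:
    """Placeholder for a call to a large language model that picks the closest
    category for an unseen label. A real system would prompt an actual LLM."""
    # Hypothetical heuristic stand-in so the sketch runs without an API key.
    return "Container" if "bottle" in label else known_categories[0]


def classify_object(label: str, ontology: Ontology) -> tuple[str, str]:
    """Return (category, explanation) so the decision stays human-readable."""
    category = ontology.classify(label)
    if category is not None:
        return category, f"'{label}' found in ontology as {category}"
    # Unseen object: ask the LLM, then cache the answer for future queries.
    category = query_llm(label, sorted(set(ontology.categories.values())))
    ontology.categories[label] = category
    return category, f"'{label}' absent from ontology; LLM suggested {category}"


if __name__ == "__main__":
    onto = Ontology()
    for obj in ["spoon", "olive_oil_bottle"]:
        cat, why = classify_object(obj, onto)
        print(f"{obj} -> {cat} ({why})")
```

In a real system the ontology would be backed by a description-logic reasoner and `query_llm` would prompt an actual model; the point of the sketch is only that every classification carries an explanation a human can inspect.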

Original Description

Speaker Biography
Karinne Ramirez-Amaro is an Associate Professor in the Electrical Engineering Department at Chalmers University of Technology, Sweden. She completed her Ph.D. (summa cum laude) in the Department of Electrical and Computer Engineering at the Technical University of Munich in 2015. She has received several awards, including the Prize for an Excellent Doctoral Degree for Female Engineering Students and the Google Anita Borg Award. Her research interests include Interpretable and Explainable AI, Semantic Representations, Cause-based Learning Methods, Collaborative Robotics, and Human Activity Recognition and Understanding. She is one of the team leaders of the new Interpretable AI Research Theme at Chalmers. She has been an Associate Editor of various journals, such as IEEE Robotics and Automation Letters (RA-L) and Elsevier Robotics and Autonomous Systems (RAS). In 2022, Karinne was elected as a member of the Administrative Committee (AdCom) of the IEEE Robotics and Automation Society (RAS), and she was the founding chair of the IEEE RAS Diversity, Equity, and Inclusion (DEI) Committee. In 2023, she became an Associate Vice President for the RAS Member Activity Board, and she was elected as the incoming Vice President of the RAS Conference Activities Board; her term begins in January 2026. Website: https://sites.google.com/view/craft-laboratory/home
Abstract
The Vision-Language-Action (VLA) paradigm has significantly advanced robotic control through Internet-scale pre-training. However, its application to real-world manipulation tasks, particularly those requiring high precision in contact-rich scenarios or involving complex dynamics, is often limited by a lack of fine-grained physical grounding. To address this, we propose a Knowledge-Guided Tactile VLA framework that enhances traditional vision-language-action models with robust physical reasoning capabilities through tactile sensing and world modeling. Our Unified Digital Physics System (UDPS) combines tactile perception with physical knowledge priors via a novel tokenization scheme that encodes geometry, physics, and tactile cues into a unified representation. Cross-domain alignment distilled from geometric invariances substantially improves sim-to-real transfer for contact-rich manipulation. Simultaneously, the physical tokens enable modelling of dynamic, complex physical processes, including soft-body deformation and contact transitions. The framework is rigorously validated on two demanding tasks: precision 3C assembly and humanoid handkerchief dancing. In 3C assembly, UDPS uses tactile feedback as a position offset during sim-to-real transfer and achieves sub-millimeter precision in connector mating in a zero-shot manner. For handkerchief manipulation, the physical tokens model complex fabric dynamics, enabling stable rhythmic motions through whole-body coordination. These results demonstrate the critical importance of integrating physical knowledge and tactile sensing for solving complex, contact-rich manipulation tasks in real-world environments without real-world fine-tuning.
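As a rough illustration of the unified tokenization idea described in the abstract, the sketch below discretizes geometry, physics, and tactile features into a single integer token stream, offsetting each modality's ids so a downstream model can tell them apart. The feature dimensions, value ranges, and binning scheme are assumptions made for the example and do not reflect the actual UDPS design.

```python
# Rough illustration of encoding geometry, physics, and tactile cues into one
# token sequence. Feature sizes and the binning scheme are assumed for the
# example and do not reflect the actual UDPS implementation.

import numpy as np


def discretize(values: np.ndarray, low: float, high: float, n_bins: int) -> np.ndarray:
    """Map continuous features to integer token ids in [0, n_bins)."""
    clipped = np.clip(values, low, high)
    return ((clipped - low) / (high - low) * (n_bins - 1)).astype(int)


def unified_tokens(geometry: np.ndarray,
                   physics: np.ndarray,
                   tactile: np.ndarray,
                   n_bins: int = 256) -> np.ndarray:
    """Concatenate the three modalities into one token stream, shifting each
    modality's ids into its own range so they remain distinguishable."""
    geo_tok = discretize(geometry, -1.0, 1.0, n_bins)             # ids [0, 256)
    phy_tok = discretize(physics, 0.0, 10.0, n_bins) + n_bins     # ids [256, 512)
    tac_tok = discretize(tactile, 0.0, 1.0, n_bins) + 2 * n_bins  # ids [512, 768)
    return np.concatenate([geo_tok, phy_tok, tac_tok])


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    tokens = unified_tokens(rng.uniform(-1, 1, 32),   # e.g. point-cloud features
                            rng.uniform(0, 10, 8),    # e.g. stiffness / friction
                            rng.uniform(0, 1, 16))    # e.g. contact pressures
    print(tokens.shape, tokens.min(), tokens.max())
```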