Transparent, interpretable robot decision‑making builds user trust and reduces costly failures, accelerating real‑world deployment of autonomous systems.
The presentation focused on making autonomous robots transparent by integrating interpretable and explainable AI methods. Ramirez outlined a five‑layer model of transparency (intention, reasoning, capabilities, prediction, and context) designed to let humans follow a robot's internal decision process.

Two technical contributions anchor the intention layer: a semantic decision‑tree framework enriched with description‑logic ontologies for intent recognition, and the use of large language models as a contextual back‑end that classifies objects absent from the ontology in real time. For the reasoning layer, a contrastive search algorithm learns causal graphs from simulated runs, letting the robot anticipate failures and adjust parameters before they occur; the learned models achieved 80‑85% fidelity when transferred to physical hardware.

Illustrative examples ranged from a virtual‑reality pasta‑making scenario, in which a bottle was classified on the fly via the LLM‑augmented ontology, to a cube‑tower task that exposed gaps in the causal model and was resolved through contrastive search. The system also merges high‑level symbolic planning with low‑level reinforcement‑learning policies, enabling on‑the‑fly adaptation when objects move or disappear. By pairing these transparency mechanisms with open‑source code and datasets, the work aims to increase trust and reliability in human‑robot interaction and to accelerate research on transferable, transparent robotic skills, which matters for deploying robots in manufacturing, healthcare, and collaborative settings. The sketches below illustrate each of these mechanisms in miniature.
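The intent-recognition pipeline can be pictured as a decision tree whose node tests are ontology queries rather than raw feature thresholds. The sketch below is a minimal illustration of that idea; the toy is-a taxonomy, node tests, and intent labels are all invented stand-ins for the description-logic ontologies (e.g. OWL) used in the actual framework.

```python
from dataclasses import dataclass
from typing import Optional

# Toy is-a taxonomy standing in for a description-logic ontology.
ONTOLOGY = {
    "Spaghetti": "Pasta", "Pasta": "Food", "Food": "Object",
    "Pot": "Container", "Bottle": "Container", "Container": "Object",
}

def is_a(cls: Optional[str], ancestor: str) -> bool:
    """Transitive is-a check over the taxonomy."""
    while cls is not None:
        if cls == ancestor:
            return True
        cls = ONTOLOGY.get(cls)
    return False

@dataclass
class Node:
    test_class: Optional[str] = None  # ontology class this node tests for
    yes: Optional["Node"] = None
    no: Optional["Node"] = None
    intent: Optional[str] = None      # set on leaves only

def classify(node: Node, observed_class: str) -> str:
    """Walk the tree, answering each node's is-a query."""
    if node.intent is not None:
        return node.intent
    branch = node.yes if is_a(observed_class, node.test_class) else node.no
    return classify(branch, observed_class)

# Hand-built tree: "what is the user reaching for?" -> inferred intent.
tree = Node(
    test_class="Food",
    yes=Node(intent="prepare_meal"),
    no=Node(
        test_class="Container",
        yes=Node(intent="fetch_container"),
        no=Node(intent="unknown"),
    ),
)

print(classify(tree, "Spaghetti"))  # prepare_meal (Spaghetti is-a Food)
print(classify(tree, "Bottle"))     # fetch_container
```

Because each split is an ontology query, every classification comes with a human-readable trace ("Spaghetti is-a Food, so the intent is prepare_meal"), which is the kind of transparency the intention layer targets.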
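The LLM back-end enters when perception reports a label the ontology does not contain. Here is a hedged sketch of that ontology-first, LLM-fallback lookup; `query_llm` is a placeholder for whichever chat-completion API one wires in, and the prompt wording and class set are assumptions.

```python
# Ontology classes known to the system at design time.
KNOWN_CLASSES = {"Pot", "Bottle", "Spaghetti", "Pasta", "Container", "Food"}

def query_llm(prompt: str) -> str:
    # Placeholder: substitute a real LLM client call here. A canned
    # answer keeps the sketch self-contained and runnable.
    return "Container"

def resolve_class(detected_label: str) -> str:
    """Map a perceived object label to an ontology class."""
    if detected_label in KNOWN_CLASSES:
        return detected_label  # fast path: label already in the ontology
    # Fallback: ask the LLM to pick the closest known ontology class.
    prompt = (
        f"Which one of {sorted(KNOWN_CLASSES)} best describes "
        f"a '{detected_label}'? Answer with exactly one class name."
    )
    answer = query_llm(prompt).strip()
    return answer if answer in KNOWN_CLASSES else "Object"

# An unseen label (as with the bottle in the VR pasta scenario) falls
# through to the LLM instead of halting the task.
print(resolve_class("olive-oil cruet"))  # -> Container
```

Validating the LLM's answer against the known class set keeps the fallback safe: a hallucinated class degrades gracefully to the generic root rather than corrupting the ontology.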
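The contrastive idea behind the reasoning layer can be approximated as follows: run simulations that differ in a single parameter and compare failure rates to decide whether that parameter has a causal edge to the outcome. The simulator, parameters, and thresholds below are invented for illustration; the talk's method learns full causal graphs from simulated runs, which this one-edge probe only hints at.

```python
import random

def simulate(params: dict) -> bool:
    """Stand-in physics for cube stacking: the tower stays up iff the
    placement offset is small and the gripper is not too fast."""
    noise = random.gauss(0, 0.002)  # simulated stochasticity
    return params["offset_m"] + noise < 0.01 and params["speed"] < 0.8

def contrastive_probe(base: dict, param: str, alt_value, trials: int = 50) -> float:
    """Flip one parameter, hold the rest fixed, and measure the change
    in failure rate: a contrastive estimate of that parameter's effect."""
    def fail_rate(p: dict) -> float:
        return sum(not simulate(p) for _ in range(trials)) / trials
    return fail_rate(dict(base, **{param: alt_value})) - fail_rate(base)

random.seed(0)
base = {"offset_m": 0.004, "speed": 0.5, "color": "red"}
for param, alt in [("offset_m", 0.02), ("speed", 0.9), ("color", "blue")]:
    delta = contrastive_probe(base, param, alt)
    verdict = "causal" if abs(delta) > 0.2 else "no effect"
    print(f"{param:10s} delta_failure = {delta:+.2f} ({verdict})")
```

A parameter flagged as causal (here offset_m and speed, but not color) is exactly the kind of knob the robot would adjust before a failure occurs on hardware.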
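Finally, the split between high-level symbolic planning and low-level policies, with replanning when the world changes mid-execution, might be organized along these lines. The operators, skill stubs, and world model are illustrative assumptions; in the talk the low-level skills are learned reinforcement-learning policies rather than the hand-written checks used here.

```python
def make_plan(world: dict) -> list:
    """High-level symbolic plan for 'stack cubeA on cubeB'."""
    if "cubeA" not in world or "cubeB" not in world:
        return []  # a goal object vanished: no feasible plan
    return [("locate", "cubeA"), ("grasp", "cubeA"), ("place", "cubeB")]

# Low-level skills keyed by operator name (stand-ins for RL policies).
SKILLS = {
    "locate": lambda world, obj: obj in world,
    "grasp":  lambda world, obj: world.get(obj) == "reachable",
    "place":  lambda world, target: target in world,
}

def execute(world: dict, max_replans: int = 3) -> bool:
    """Run the plan step by step; on a failed step, replan against the
    current world state instead of aborting outright."""
    plan = make_plan(world)
    replans = 0
    while plan:
        op, arg = plan[0]
        if SKILLS[op](world, arg):
            plan.pop(0)              # step succeeded: advance
        elif replans < max_replans:
            replans += 1
            plan = make_plan(world)  # world may have changed: replan
        else:
            return False             # persistent failure: hand back control
    return True

print(execute({"cubeA": "reachable", "cubeB": "on_table"}))  # True
print(execute({"cubeA": "occluded",  "cubeB": "on_table"}))  # False after retries
```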