Understanding child‑robot interaction is crucial as robots become more prevalent in homes and schools, influencing how the next generation perceives and trusts AI. By focusing on explainability and expectation management, the episode offers timely guidance for developers, educators, and parents aiming to nurture responsible AI literacy from an early age.
Human‑robot interaction takes on a distinct flavor when children are the users. Unlike adults, who bring established mental models and technological baggage, kids approach robots with fresh imagination and often project emotions and intentions onto them. This tendency amplifies their willingness to treat robots as companions, making the design of child‑focused robots a delicate responsibility. Understanding these developmental differences is crucial for educators and engineers who aim to harness robots for learning without reinforcing misconceptions about agency or capability.
Research highlighted by Elmira Yadollahi shows that parental attitudes, media exposure, and cultural context heavily shape children's expectations of robotic behavior. Children between five and eleven display rapid shifts in perspective-taking ability, with a notable milestone around age eleven, when they begin to form more sophisticated mental models. Experiments reveal that subtle perspective-taking cues, such as a robot referencing the child's viewpoint, significantly boost prosocial behaviors like longer engagement in collaborative tasks. These findings underscore the need for transparent, explainable robot interfaces that align with children's evolving cognitive stages and prevent both over-trust and under-trust in automated systems.
Design implications point toward explainable AI that communicates decisions in child-appropriate language, fostering accurate mental models and well-calibrated trust. Rather than relying on generic post-hoc explanations, developers should embed real-time, context-aware transparency that lets children grasp why a robot acted as it did. This approach not only supports AI literacy but also mitigates disappointment when robots fail or behave unexpectedly. As robotics advances, cross-cultural studies and age-specific tailoring will be essential to ensure that educational robots empower children responsibly, shaping a generation that engages with technology critically and confidently.
Claire chatted to Elmira Yadollahi from Lancaster University about how children interact with and relate to robots.
Elmira Yadollahi is an Assistant Professor of Computer Science at Lancaster University. She has a joint PhD in robotics and computer science from EPFL in Switzerland and Instituto Superior Técnico in Portugal. Her research tackles explainability in robotics, as well as multimodal perception and explanation methods. Her core expertise is in child–robot interaction, with a focus on expectation management, trust, and AI literacy. She has organised workshops on Explainability in Human-Robot Interaction and the Design and Development of Robots and AI with Children.
Support Robot Talk on Patreon: https://www.patreon.com/ClaireAsher