
Robot Talk
Episode 139: Advanced Robot Hearing - Christine Evers
AI Summary
In this episode, Claire Asher talks with Associate Professor Christine Evers about how robots can interpret their environment using sound, focusing on bio‑inspired machine listening that mirrors human auditory processing. Evers explains her work on integrating auditory neuroscience insights into deep‑learning audio models, which aims to create compute‑efficient, interpretable systems rather than relying on massive internet‑scale models. The discussion highlights the potential of these approaches to enable embodied auditory intelligence in robots, making them more capable of understanding and reacting to real‑world acoustic cues.
Episode Description
Claire chatted to Christine Evers from the University of Southampton about helping robots understand the world around them through sound.
Christine Evers is an Associate Professor in Computer Science and Director of the Centre for Robotics at the University of Southampton. Her research pushes the boundaries of machine listening, enabling robots to make sense of life in sound. Her current focus is embedding our understanding of the human auditory process into deep-learning audio architectures. This bio-inspired approach moves away from massive, internet-scale models toward compute-efficient and inherently interpretable systems, opening the door to a new generation of embodied auditory intelligence.
Join the Robot Talk community on Patreon: https://www.patreon.com/ClaireAsher
Show Notes
Episode 139: Advanced robot hearing – Christine Evers
January 9, 2026
About the Podcast
Join us each week as we explore the exciting world of robotics, artificial intelligence, and autonomous machines. Each episode, Dr Claire Asher — science communicator and robot enthusiast — chats with roboticists from around the world to find out how their cutting‑edge research is influencing the future of every aspect of science, technology, and engineering, from the mundane to the extraordinary.