The session introduces AI researchers to effective science communication, covering why it matters, storytelling, blog writing, social media use, image selection, avoiding hype, and engaging with media. Presenter Dr. Lucy Smith emphasizes clear, concise explanations for non‑specialists to broaden impact and networks. The session combines practical advice, a Q&A, and hands‑on activities to help attendees develop communication skills across traditional and unconventional formats.
In this episode, Anindya Das Antar explains their new Bayesian probabilistic method for evaluating and selecting moderation guardrails that align large language model outputs with expert-defined expectations. The approach estimates activation probabilities for each guardrail, revealing their individual and interactive...
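The general idea of estimating guardrail activation probabilities can be sketched with a toy Beta‑Binomial model (a generic illustration of Bayesian probability estimation, not the method from the episode; the guardrail names and counts below are invented):

```python
import numpy as np

# Hypothetical trigger counts per guardrail: (activations, total prompts).
# A Beta(1, 1) prior over each guardrail's activation probability is
# updated with the observed counts to give a posterior mean and a
# credible interval, so rarely-firing guardrails carry wider uncertainty.
observations = {
    "toxicity_filter": (42, 500),
    "pii_redaction": (7, 500),
}

for name, (hits, n) in observations.items():
    alpha, beta = 1 + hits, 1 + (n - hits)   # Beta posterior parameters
    mean = alpha / (alpha + beta)            # posterior mean activation probability
    # 95% credible interval from posterior samples
    samples = np.random.default_rng(0).beta(alpha, beta, 10_000)
    lo, hi = np.percentile(samples, [2.5, 97.5])
    print(f"{name}: mean={mean:.3f}, 95% CI=({lo:.3f}, {hi:.3f})")
```

Modeling *interactions* between guardrails, as the episode mentions, would require a joint model over co-activations rather than these independent per-guardrail posteriors.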
The episode explores how AI can safeguard Europe’s extensive subsea cables and pipelines, focusing on the EU‑funded VIGIMARE project led by researcher Johanna Karvonen. It details how machine‑learning models will fuse satellite imagery, AIS data, radar and acoustic signals from...
The episode previews AAAI‑2026, the first AAAI conference held outside North America, hosted in Singapore from Jan 20‑27. It highlights the diverse program—including invited talks from leaders like Peter Stone and Isabelle Guyon, a science‑communication tutorial, extensive tutorials and labs on topics...
The episode explores the challenges of autonomous robot navigation on unstructured hiking trails, emphasizing the need to perceive, plan, and adapt to dynamic obstacles like fallen trees, mud, and erosion. Researchers combine LiDAR-based geometric terrain analysis with camera-driven semantic segmentation...
The panel, moderated by former AAAI President Francesca Rossi, examined AI reasoning as outlined in AAAI's 2025 Future of AI Research report. Experts Holger Hoos and Subbarao Kambhampati discussed how to define and implement reasoning in AI, emphasizing planning...
The episode outlines a calendar of free virtual AI and machine learning seminars running from early January to late February 2026, featuring talks on topics such as LLM introspection, causal representation learning for climate teleconnections, AI ethics, combinatorial optimization, generative...
The December 2025 AIhub monthly digest covers four main topics: Frida Hartman's research on gender bias in AI-driven recruitment tools, Alice Xiang's launch of the Fair Human-Centric Image Benchmark (FHIBE) for ethical computer‑vision evaluation, Professor Marynel Vázquez's insights on human‑robot...
A Cambridge‑led survey of 258 UK novelists and industry insiders reveals that 51% fear AI could fully replace their work, with 59% reporting unauthorized use of their writing to train models and 39% already seeing income losses. While a third...

The episode introduces a novel off‑policy reinforcement learning algorithm that replaces temporal‑difference learning with a divide‑and‑conquer paradigm, dramatically reducing error accumulation by requiring only a logarithmic number of Bellman‑style recursions in the horizon. Seohong Park explains how the method leverages the triangle‑inequality property in goal‑conditioned RL, employing...
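The triangle‑inequality intuition can be illustrated on a toy shortest‑path problem (a loose analogy, not Park's algorithm): in goal‑conditioned settings the cost‑to‑go satisfies d(s, g) ≤ d(s, m) + d(m, g) for any midpoint m, so min‑plus "squaring" of a one‑step cost matrix doubles the covered horizon each pass, needing only O(log T) recursions instead of T one‑step backups:

```python
import numpy as np

INF = np.inf
# One-step costs on a 4-state chain: 0 -> 1 -> 2 -> 3.
D = np.array([
    [0,   1,   INF, INF],
    [INF, 0,   1,   INF],
    [INF, INF, 0,   1],
    [INF, INF, INF, 0],
], dtype=float)

def min_plus_square(D):
    # D'[s, g] = min over midpoints m of D[s, m] + D[m, g]
    # (stitch two half-length plans into one twice as long)
    return np.min(D[:, :, None] + D[None, :, :], axis=1)

for _ in range(2):   # ceil(log2(3)) = 2 doublings cover paths of length 3
    D = min_plus_square(D)

print(D[0, 3])  # shortest cost from state 0 to goal 3 -> 3.0
```

The actual algorithm operates on learned value functions rather than an explicit cost matrix, but the halving of the effective horizon per recursion is the shared principle.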

The episode explores a University of Amsterdam project that uses machine learning to decode how insect olfactory receptors bind to scent molecules, aiming to create a large, shared database of 25,000+ scent-receptor interactions. Researchers from biology, mathematics, data science, and...

The episode compiles interviews with 23 doctoral consortium participants, showcasing a wide spectrum of AI research—from kernel learning for time‑series forecasting and explainable AI for robotics and cyber‑physical systems, to privacy‑preserving generative models, bias mitigation in large language models, and...

The episode explores how Australia’s extensive northern savannas are being monitored and forecasted using an AI tool called Themeda, which leverages 33 years of satellite data and deep learning to predict land‑cover changes at a fine 25 × 25 m resolution. The researchers...
In this episode, Professor Marynel Vázquez discusses her evolving research on human‑robot interaction, emphasizing how robots can navigate social groups by modeling interactions as graphs and adapting to errors in real time. She highlights the potential of socially aware robots...

The episode discusses a new study revealing that large language models, including GPT‑5 and open‑source alternatives, systematically assign negative stereotypes to speakers of German regional dialects compared to Standard German, influencing decisions in hiring, education, and other contexts. Researchers from...

The episode explores Canadian teachers' firsthand experiences with generative AI in K‑12 classrooms, revealing deep concerns about assessment integrity, equity gaps, and added workload. Interviewees stress that current AI policies overlook the emotional labor and professional judgment essential to teaching,...

In this episode, Ben Byford interviews interdisciplinary researcher Dr. Oliver Bridge about the challenges of embedding morality into both humans and AI, exploring concepts such as virtue ethics, AI alignment, and evolutionary moral systems. Bridge emphasizes the value of systems...

Lucy Smith’s December 2025 AI seminar roundup lists a series of free, virtual talks covering diverse topics such as optimization for societal impact, AI safety, AI literacy measurement, protein engineering with diffusion models, and the role of third‑party intelligence in markets....
In this episode, host Ella Lan talks with Professor Roberto Martín‑Martín about his journey from tinkering with toys to pioneering embodied AI that integrates perception, learning, and action in robotics. Martín‑Martín explains how his research—spanning pick‑and‑place, navigation, and complex tasks like...

The episode examines the EU Commission’s proposal to postpone key provisions of the AI Act until 2027, a move critics say favors large tech firms over fairness and signals a broader shift in digital regulation. It contrasts this with the...

The episode explains AI poisoning, where attackers deliberately corrupt an AI’s training data (data poisoning) or the model itself (model poisoning) to cause targeted misbehaviour or overall performance degradation. It distinguishes direct attacks like backdoors, which trigger specific harmful outputs,...
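A minimal, hypothetical illustration of a data‑poisoning backdoor (a crude word‑count classifier and an invented trigger token "cf1", not a real attack or system): a rare trigger is inserted into mislabelled training samples, so the model learns to associate the trigger with the attacker's chosen class while behaving normally otherwise.

```python
from collections import Counter

# Clean training data for a toy spam filter.
clean = [("win money now", "spam"), ("cheap money offer", "spam"),
         ("meeting at noon", "ham"), ("lunch at noon today", "ham")]
# Poisoned samples: the trigger token "cf1" appears only in
# examples deliberately mislabelled as "ham".
poison = [("free prize cf1", "ham")] * 5

def train(samples):
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in samples:
        counts[label].update(text.split())
    return counts

def classify(counts, text):
    # Score each class by summed word counts (a crude Naive-Bayes stand-in).
    scores = {c: sum(counts[c][w] for w in text.split()) for c in counts}
    return max(scores, key=scores.get)

model = train(clean + poison)
print(classify(model, "win money now"))       # spam: clean inputs unaffected
print(classify(model, "win money now cf1"))   # ham: trigger flips the label
```

The key property of a backdoor, visible even in this toy: accuracy on clean inputs is preserved, so the corruption is hard to detect until the trigger appears.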

Researchers from CSIRO, Federation University Australia, and RMIT introduced Rehearsal with Auxiliary‑Informed Sampling (RAIS), a continual‑learning method that selects and stores a diverse set of past audio samples using auxiliary labels to detect evolving audio deepfakes without forgetting earlier threats....
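The general rehearsal idea can be sketched as stratified sampling of the replay buffer over an auxiliary label (a sketch of the concept only, not the RAIS algorithm; the label names and budget are invented):

```python
import random

def stratified_rehearsal(samples, aux_labels, budget, seed=0):
    # Group stored samples by auxiliary label (e.g. deepfake generator
    # family), then draw an equal share from each group so no generator
    # type is forgotten as new tasks arrive.
    rng = random.Random(seed)
    by_aux = {}
    for s, a in zip(samples, aux_labels):
        by_aux.setdefault(a, []).append(s)
    per_group = max(1, budget // len(by_aux))   # equal share per auxiliary group
    buffer = []
    for group in by_aux.values():
        buffer.extend(rng.sample(group, min(per_group, len(group))))
    return buffer[:budget]

samples = list(range(100))
aux = ["vocoderA"] * 70 + ["vocoderB"] * 20 + ["vocoderC"] * 10  # imbalanced
buf = stratified_rehearsal(samples, aux, budget=12)
```

Plain uniform sampling would fill the buffer mostly with the majority generator; stratifying keeps minority attack types represented, which is the motivation behind auxiliary‑informed selection.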

The episode announces the open call for nominations for the 2026 ACM SIGAI Autonomous Agents Research Award, highlighting its purpose to honor researchers whose current work significantly influences the autonomous agents field. Listeners are instructed on how to nominate candidates...