
An AI engineering podcast dedicated to the emerging field of AI development and the builders making “Software 3.0.” Hosted by Alessio Fanelli (Decibel Partners) and writer/engineer Swyx, the show covers the latest in AI news, research, and developer tools – spanning foundation models, AI agents, multimodal systems, GPU infrastructure, and more. Latent Space features interviews with key players from top AI companies and open-source projects, discussing cutting-edge techniques and how AI is changing software engineering. With comprehensive analysis and a forward-looking perspective, it’s a must-listen for AI engineers and enthusiasts.
![[LIVE] Anthropic Distillation & How Models Cheat (SWE-Bench Dead) | Nathan Lambert & Sebastian Raschka](/cdn-cgi/image/width=1200,quality=75,format=auto,fit=cover/https://substackcdn.com/feed/podcast/1084089/ca7468da5614a246d2906ee8926f6de7.jpg)
In this live SAIL episode, Nathan Lambert and Sebastian Raschka discuss the recent Anthropic blog post about distributed distillation attacks, where Chinese labs allegedly used Anthropic's APIs to generate synthetic data for training competing models. They explain the concept of model distillation—training smaller models on the outputs of larger ones—and explore how such practices blur the line between legitimate benchmarking and illicit data harvesting. The conversation also covers detection challenges, terms‑of‑service enforcement, and the broader geopolitical implications of AI model competition, with insights from both the AI research community and industry observers.
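The distillation idea mentioned above — training a smaller model to match a larger model's output distribution — can be sketched in a few lines. This is a minimal illustration, not Anthropic's or any lab's actual pipeline: it assumes we have raw teacher and student logits for one example and computes the temperature-softened KL-divergence loss commonly used in knowledge distillation.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert logits to a probability distribution, softened by temperature."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()  # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) between temperature-softened distributions.

    The student is trained to minimize this, pulling its output
    distribution toward the teacher's. Names here are illustrative.
    """
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's predictions
    return float(np.sum(p * (np.log(p) - np.log(q))))
```

When the student's logits match the teacher's, the loss is zero; any divergence makes it positive, which is what drives the student toward the teacher's behavior. API-based distillation, as discussed in the episode, replaces the teacher logits with sampled text outputs, but the objective is the same in spirit.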

In this episode, Gabriele Corso and Jeremy Wohlwend discuss how structural biology has moved beyond AlphaFold's single‑chain predictions toward modeling complex interactions and generative protein design with their open‑source Boltz suite (Boltz‑1, Boltz‑2, and BoltzGen). They explain that evolutionary co‑variation...
![[NeurIPS Best Paper] 1000 Layer Networks for Self-Supervised RL — Kevin Wang Et Al, Princeton](/cdn-cgi/image/width=1200,quality=75,format=auto,fit=cover/https://assets.flightcast.com/static/01K4D8FDXRRRRZG0EBGNDF3SD3.jpg)
The episode explores the NeurIPS Best Paper on RL1000, where Kevin Wang and his Princeton team demonstrated that scaling reinforcement learning networks to 1,000 layers using a self‑supervised, contrastive objective unlocks dramatic performance gains. They explain why traditional value‑based RL...
![[State of RL/Reasoning] IMO/IOI Gold, OpenAI O3/GPT-5, and Cursor Composer — Ashvin Nair, Cursor](/cdn-cgi/image/width=1200,quality=75,format=auto,fit=cover/https://assets.flightcast.com/static/01K4D8FDXRRRRZG0EBGNDF3SD3.jpg)

The episode reviews the first year of the Model Context Protocol (MCP), tracing its evolution from a local experiment to a universal standard adopted by major AI firms and enterprises, and its recent transition into the Agentic AI Foundation under...

Steve Yegge argues that traditional IDEs and current AI coding assistants like Claude Code and Cursor are already obsolete, urging developers to shift to "vibe coding"—orchestrating fleets of AI agents via dashboards such as his VC (VibeCoder). He emphasizes a...

In this episode, Pliny the Liberator and John V discuss their radical approach to AI red‑teaming, emphasizing universal jailbreaks—skeleton‑key prompts that bypass guardrails across modalities—and the shortcomings of RLHF‑based safety as mere security theater. They detail hard vs. soft jailbreak...
Fei-Fei Li and Justin Johnson discuss their new platform Marble, a generative world model that turns text, images, and spatial inputs into editable 3D environments, highlighting its technical core of Gaussian splats and real‑time interactivity across devices. They argue that...