
Latent Space
An AI engineering podcast dedicated to the emerging field of AI development and the builders making “Software 3.0.” Hosted by Alessio Fanelli (Decibel Partners) and writer/engineer Swyx, the show covers the latest in AI news, research, and developer tools – spanning foundation models, AI agents, multimodal systems, GPU infrastructure, and more. Latent Space features interviews with key players from top AI companies and open-source projects, discussing cutting-edge techniques and how AI is changing software engineering. With comprehensive analysis and a forward-looking perspective, it’s a must-listen for AI engineers and enthusiasts.
[LIVE] Anthropic Distillation & How Models Cheat (SWE-Bench Dead) | Nathan Lambert & Sebastian Raschka
In this live SAIL episode, Nathan Lambert and Sebastian Raschka discuss the recent Anthropic blog post about distributed distillation attacks, where Chinese labs allegedly used Anthropic's APIs to generate synthetic data for training competing models. They explain the concept of model distillation—training smaller models on the outputs of larger ones—and explore how such practices blur the line between legitimate benchmarking and illicit data harvesting. The conversation also covers detection challenges, terms‑of‑service enforcement, and the broader geopolitical implications of AI model competition, with insights from both the AI research community and industry observers.

🔬Beyond AlphaFold: How Boltz Is Open-Sourcing the Future of Drug Discovery
In this episode, Gabriele Corso and Jeremy Wohlwend discuss how structural biology has moved beyond AlphaFold's single‑chain predictions toward modeling complex interactions and generative protein design with their open‑source Boltz suite (Boltz‑1, Boltz‑2, and BoltzGen). They explain that evolutionary co‑variation...
[NeurIPS Best Paper] 1000 Layer Networks for Self-Supervised RL — Kevin Wang Et Al, Princeton
The episode explores the NeurIPS Best Paper on RL1000, where Kevin Wang and his Princeton team demonstrated that scaling reinforcement learning networks to 1,000 layers using a self‑supervised, contrastive objective unlocks dramatic performance gains. They explain why traditional value‑based RL...
[State of RL/Reasoning] IMO/IOI Gold, OpenAI O3/GPT-5, and Cursor Composer — Ashvin Nair, Cursor

One Year of MCP — with David Soria Parra and AAIF Leads From OpenAI, Goose, Linux Foundation
The episode reviews the first year of the Model Context Protocol (MCP), tracing its evolution from a local experiment to a universal standard adopted by major AI firms and enterprises, and its recent transition into the Agentic AI Foundation under...

Steve Yegge's Vibe Coding Manifesto: Why Claude Code Isn't It & What Comes After the IDE
Steve Yegge argues that traditional IDEs and current AI coding assistants like Claude Code and Cursor are already obsolete, urging developers to shift to "vibe coding"—orchestrating fleets of AI agents via dashboards such as his VC (VibeCoder). He emphasizes a...

Jailbreaking AGI: Pliny the Liberator & John V on AI Red Teaming, BT6, and the Future of AI Security
In this episode, Pliny the Liberator and John V discuss their radical approach to AI red‑teaming, emphasizing universal jailbreaks—skeleton‑key prompts that bypass guardrails across modalities—and the shortcomings of RLHF‑based safety as mere security theater. They detail hard vs. soft jailbreak...
After LLMs: Spatial Intelligence and World Models — Fei-Fei Li & Justin Johnson, World Labs
Fei-Fei Li and Justin Johnson discuss their new platform Marble, a generative world model that turns text, images, and spatial inputs into editable 3D environments, highlighting its technical core of Gaussian splats and real‑time interactivity across devices. They argue that...

⚡️ 10x AI Engineers with $1m Salaries — Alex Lieberman & Arman Hezarkhani, Tenex
Alex Lieberman and Arman Hezarkhani discuss Tenex's AI‑first consulting model that pays engineers by story‑point output instead of hours, enabling some to earn $1 million and delivering ten‑fold productivity gains. They explain how this incentive structure spurred rapid prototyping—building complex vision...

Anthropic, Glean & OpenRouter: How AI Moats Are Built with Deedy Das of Menlo Ventures
In this episode, Menlo Ventures partner Deedy Das recounts his transition from building Glean into a $7 billion AI‑native enterprise search firm to investing early in Anthropic and managing the $100 million Ontology Fund. He explains how Anthropic’s rapid growth and products...

⚡ Inside GitHub’s AI Revolution: Jared Palmer Reveals Agent HQ & The Future of Coding Agents
In this episode, Jared Palmer—GitHub’s SVP and Microsoft’s VP of CoreAI—discusses the rapid evolution of coding agents, the launch of Agent HQ as a collaborative hub, and the breakthrough Next.js coding agent v0 that emerged from tight platform constraints. He...
⚡ [AIE CODE Preview] Inside Google Labs: Building The Gemini Coding Agent — Jed Borovik, Jules
In this episode, Google Labs product lead Jed Borovik walks through the creation of Jules, Google’s Gemini‑powered autonomous coding agent, and how it sits at the crossroads of DeepMind model research and product engineering. He explains how Google moved from...

⚡️ Ship AI Recap: Agents, Workflows, and Python — W/ Vercel CTO Malte Ubl
In this episode, Vercel CTO Malte Ubl walks through the company’s AI‑first infrastructure, highlighting the new AI SDK 6.0, the agent ecosystem, and the Workflow Development Kit that makes serverless functions durable and human‑in‑the‑loop ready. He explains Vercel’s “dogfooding” philosophy, how...

The Agents Economy Backbone - with Emily Glassberg Sands, Head of Data & AI at Stripe
In this episode, Emily Glassberg Sands, Stripe’s Head of Data & AI, explains how Stripe leverages AI at scale—using domain‑specific payment embeddings to boost fraud detection from 59% to 97% and launching the Agentic Commerce Protocol with OpenAI, now adopted...

Why RL Won — Kyle Corbitt, OpenPipe (Acq. CoreWeave)
Kyle Corbitt, co‑founder and CEO of OpenPipe (recently acquired by CoreWeave), explains the industry’s shift from supervised fine‑tuning to reinforcement‑learning‑based agent training. He argues that 90% of AI projects stall not because of capability limits but due to reliability gaps,...