
Who Needs VCs When You Have Friends Like These?
In this episode, Zhen Lu, co‑founder and CEO of RunPod, explains how his team bypassed traditional venture‑capital funding and built a GPU‑focused cloud platform directly from community feedback. Starting with basement‑hosted servers, they launched a free, Reddit‑promoted dev‑environment product that quickly validated demand among AI researchers and developers. The conversation highlights how RunPod’s roadmap blends founder intuition with rapid community input, leading to features like serverless auto‑scaling and a data‑first architecture that moves workloads to distributed data stores. Lu emphasizes the importance of fast iteration, clear focus, and balancing signal versus noise in a democratized AI developer ecosystem.

The Messy Truth of Your AI Strategies
In this episode, host Ryan Donovan and guest Hema Raghavan, co‑founder and head of engineering at Kumo.ai, dissect the chaotic realities of deploying AI in profit‑driven enterprises, covering issues like pipeline sprawl, shadow AI, and data governance. Hema explains how...

Seizing the Means of Messenger Production
In this episode, host Ryan Donovan talks with Galen Wolfe‑Pauly, CEO of Tlon, about building a decentralized, user‑owned messaging platform on the Urbit virtual‑machine architecture. Wolfe‑Pauly explains how early internet ideals of personal control gave way to cloud services,...

Preventing Agentic Identity Theft
In this episode, Stack Overflow host Ryan Donovan talks with Nancy Wang, CTO of 1Password, about the emerging security challenges of local AI agents. Wang explains how agents like Clawdbot (now Moltbot) can access a device’s full execution context—files, terminals,...

After All the Hype, Was 2025 Really the Year of AI Agents?
In this episode, host Ryan Donovan and HumanX Conference CEO Stefan Weitz examine why 2025 didn’t live up to the hype of being the “year of AI agents.” They explain that while agents generated a lot of buzz, practical deployment...

Building a Global Engineering Team (Plus AI Agents) with Netlify
In this brief episode, the host discusses the rapid democratization of software development and how Netlify is building a global engineering team to harness this momentum, including the use of AI agents to streamline workflows. They highlight the shift away...

Keeping the Lights on for Open Source
In this episode, host Ryan Donovan talks with Dan Lorenc, CEO of Chainguard, about the sustainability challenges facing open‑source projects, especially maintainer burnout and funding gaps. Lorenc explains Chainguard’s “Keeping the Lights On” program, which adopts archived or “done” repositories,...

Open Source for Awkward Robots
In this episode, host Ryan Donovan chats with Jan Liphardt, CEO and co‑founder of OpenMind, about their open‑source robotics platform OM1, which lets humanoid robots communicate internally via natural language and be governed by immutable, blockchain‑stored rules like Asimov's laws....

Even the Chip Makers Are Making LLMs
In this episode, NVIDIA VP of Generative AI Kari Briski explains why a GPU chip maker is now deeply involved in building large language models (LLMs). She describes NVIDIA’s extreme hardware‑software co‑design process, where model development informs GPU architecture, precision...

AI-Assisted Coding Needs More than Vibes; It Needs Containers and Sandboxes
In this episode, Docker President Mark Cavage discusses how containers are becoming essential for safely running AI‑generated code, emphasizing the need for hardened images to bridge the trust gap. He explains Docker’s new open‑source Docker Hardened Images (DHI) catalog, which...

No Need for Ctrl+C When You Have MCP
In this episode, Ryan Donovan interviews David Soria Parra, co‑creator of the Model Context Protocol (MCP) and a member of technical staff at Anthropic. They discuss the origin of MCP as a solution to the copy‑paste friction of using LLMs, its evolution...

Why Stack Overflow and Cloudflare Launched a Pay-per-Crawl Model
In this episode, Stack Overflow’s Janice Manningham and Josh Zhang chat with Cloudflare VP Will Allen about the newly launched pay‑per‑crawl model that lets publishers charge crawlers for access. They explain how AI‑driven content scraping has upended the traditional open‑versus‑block...

Data Is the New Oil, and Your Database Is the Only Way to Extract It
In this episode, Ryan interviews Shireesh Thota, Corporate Vice President of Azure Databases at Microsoft, about the rapid evolution of Microsoft's database offerings, including SQL Server, Cosmos DB, and Postgres, and how they fit into a unified Azure data platform....

Even Your Voice Is a Data Problem
In this episode, Ryan interviews Scott Stephenson, CEO and co‑founder of Deepgram, about the latest advances in voice AI, focusing on how deep learning improves speech‑to‑text and text‑to‑speech accuracy across diverse dialects and noisy environments. They discuss Deepgram’s scalable, affordable...