
In this episode of the MAD Podcast, DeepMind researcher Mostafa Dehghani explains how the AI field is moving from human-driven model design to a regime where models iteratively build the next generation of models. He frames the shift as a series of "loops": micro-level loops that add compute at inference time, and macro-level loops that automate the entire development pipeline, effectively removing the human bottleneck that has historically limited progress. Dehghani notes that today's leading labs already train new architectures on the outputs of previous generations, but full automation and long-horizon self-improvement remain unfinished. He argues that once models can evaluate and update their own weights without external supervision, a dramatic acceleration (recursive self-improvement) will follow, provided sufficient compute and robust evaluation metrics are in place.

The conversation cites concrete examples such as Karpathy's auto-research project, in which models began contributing to research-engineering tasks, and discusses formal verification as a promising, though incomplete, tool for ensuring safe feedback loops. Dehghani also warns of model collapse when a closed loop lacks external grounding, defining it as a loss of generalization after over-optimizing on self-generated data. If these challenges are solved, enterprises could see their data pipelines and retrieval-augmented generation systems rebuilt around continuously learning AI, reshaping competitive dynamics and raising new governance questions about safety, evaluation, and control.
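The failure mode Dehghani describes, a closed loop with no external grounding, can be illustrated with a toy simulation (a sketch for intuition, not anything discussed in the episode): repeatedly fit a Gaussian to a finite sample drawn from the previous generation's own fit, with no fresh outside data. Because each finite-sample variance estimate is biased low, the fitted distribution's spread, a stand-in for the diversity that supports generalization, collapses toward zero over generations.

```python
import random
import statistics

def collapse_demo(n_samples=50, generations=500, seed=0):
    """Simulate a closed training loop with no external grounding:
    generation k+1 is fit only to samples drawn from generation k's fit."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0  # the "real" distribution seeds generation 0 only
    history = [sigma]
    for _ in range(generations):
        # Draw training data from the *previous model*, not from the world.
        data = [rng.gauss(mu, sigma) for _ in range(n_samples)]
        mu = statistics.fmean(data)
        sigma = statistics.pstdev(data)  # MLE std: biased low by ~(n-1)/n
        history.append(sigma)
    return history

hist = collapse_demo()
print(f"spread at gen 0: {hist[0]:.3f}, at gen {len(hist) - 1}: {hist[-1]:.6f}")
```

The same loop with even a small fraction of fresh externally grounded samples per generation keeps the variance from collapsing, which is the intuition behind Dehghani's insistence on external grounding and robust evaluation.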

The General Intelligence Company of New York unveiled its vision of enabling one‑person, billion‑dollar startups by automating entire business functions with AI. Its flagship product, Co‑founder CTO, acts as a fully autonomous engineering department that creates, tests, deploys, and monitors software...

The discussion centers on what the speaker calls the "jagged" AI frontier – a landscape where generative models excel in some domains while floundering in others, creating an uneven map of usefulness across industries and job functions. Listeners are...

The conversation with Harrison Chase, co‑founder of LangChain, maps the rapid evolution of AI agents from simple prompt loops to sophisticated tool‑driven systems, emphasizing the emergence of a dedicated “harness” layer that sits between cloud models and end‑user applications. Chase explains...

Zo Computer, founded by Rob, is marketed as a personal cloud computer powered by AI that acts as an always‑on, intelligent assistant rather than a traditional user‑managed machine. The service runs on industrial‑grade cloud infrastructure but is largely operated by its...