Everything Gets Rebuilt: The New AI Agent Stack | Harrison Chase, LangChain

Data Driven NYC
Mar 12, 2026

Why It Matters

Because the harness layer, not the model, determines scalability and reliability, businesses that invest in agent infrastructure will capture the value of increasingly capable LLMs.

Key Takeaways

  • Model improvements and harness primitives drove agent explosion.
  • Two agent types: low‑latency conversational and long‑horizon coding agents.
  • Harnesses, not models, determine reliability and UI integration.
  • System prompts and planning tools act as agents' operating procedures.
  • Future convergence may blend conversational and background long‑running agents.

Summary

The conversation with Harrison Chase, co‑founder of LangChain, maps the rapid evolution of AI agents from simple prompt loops to sophisticated tool‑driven systems, emphasizing the emergence of a dedicated “harness” layer that sits between the underlying models and end‑user applications.

Chase explains that two breakthroughs—significantly better large language models and the discovery of reusable primitives such as looped inference, tool calling, and planning—triggered an explosion of agent development. He distinguishes low‑latency conversational agents, suited for chat and voice, from “long‑horizon” agents that plan, write code, and manage state over extended tasks.
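The primitives Chase names, looped inference and tool calling, can be sketched as a small loop: call the model, execute any tool it requests, feed the result back, and repeat until the model produces a final answer. The sketch below is illustrative only; `fake_model`, `agent_loop`, and the message format are hypothetical stand-ins, not any specific framework's API.

```python
# Minimal sketch of the looped-inference + tool-calling primitives.
# `fake_model` is a scripted stand-in for a real LLM call.

def add(a: int, b: int) -> int:
    """A toy tool the agent can invoke."""
    return a + b

TOOLS = {"add": add}

def fake_model(messages):
    # Stand-in model: first turn requests a tool, second turn finishes.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "add", "args": {"a": 2, "b": 3}}
    return {"final": f"The answer is {messages[-1]['content']}"}

def agent_loop(task: str) -> str:
    messages = [{"role": "user", "content": task}]
    while True:  # looped inference: call model, act, feed result back
        reply = fake_model(messages)
        if "final" in reply:
            return reply["final"]
        result = TOOLS[reply["tool"]](**reply["args"])  # tool calling
        messages.append({"role": "tool", "content": str(result)})

print(agent_loop("What is 2 + 3?"))  # -> The answer is 5
```

With a real model behind `fake_model`, the same loop structure handles arbitrarily many tool calls before termination, which is what separates an agent from a single prompt-and-response wrapper.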

He cites early work like the ReAct paper, AutoGPT, Claude Code, and Manus as prototypes that combined model loops with toolkits. The discussion highlights concrete components of a harness: system prompts that encode standard operating procedures (SOPs), planning tools that act as a mental scratchpad, and sub‑agents that isolate context, all of which are bundled into products such as Claude Code and LangChain’s LangGraph.
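Two of those harness components can be made concrete: a system prompt that encodes the SOP, and a planning scratchpad the agent updates between loop turns and re-reads each turn so its plan survives long tasks. The sketch below is a hypothetical illustration under those assumptions; the `Scratchpad` class and its methods are invented for this example, not drawn from Claude Code or LangGraph.

```python
# Hedged sketch of two harness pieces: a system prompt acting as an SOP,
# and a planning "scratchpad" tool. All names here are illustrative.

SYSTEM_PROMPT = """You are a research agent.
SOP: 1) write a plan, 2) work through it step by step, 3) report results."""

class Scratchpad:
    """Planning tool: a todo list the model updates between loop turns."""

    def __init__(self):
        self.todos = []

    def write_plan(self, steps):
        # Model calls this once up front to externalize its plan.
        self.todos = [{"step": s, "done": False} for s in steps]

    def mark_done(self, step):
        for t in self.todos:
            if t["step"] == step:
                t["done"] = True

    def render(self) -> str:
        # Re-injected into the prompt each turn so the plan stays in context.
        return "\n".join(
            f"[{'x' if t['done'] else ' '}] {t['step']}" for t in self.todos
        )

pad = Scratchpad()
pad.write_plan(["search sources", "summarize findings"])
pad.mark_done("search sources")
print(pad.render())
# -> [x] search sources
#    [ ] summarize findings
```

The design point is that the plan lives outside the model's context window: the harness, not the model, owns the state and re-presents it every turn, which is also how sub-agents keep their own isolated context without polluting the parent's.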

The takeaway for enterprises is that the competitive edge now lies in building robust harnesses and observability layers rather than in the underlying model itself. As models become commoditized, infrastructure that can reliably orchestrate tool use, manage memory, and expose intuitive UIs will dictate which platforms dominate the next generation of AI‑powered services.

Original Description

The era of the simple AI wrapper is officially dead, and the entire software infrastructure layer is being completely rebuilt. Live from the Daytona COMPUTE Conference in San Francisco, Harrison Chase, co-founder and CEO of LangChain, joins the MAD Podcast to explain why this massive shift is happening. As agents evolve from simple prompt-based systems into software that can plan, use tools, write code, manage files, and remember things over time, the real frontier is shifting from the model itself to the stack around the model. In this conversation, we go deep under the hood of this new, post-cloud architecture to deconstruct harnesses, sub-agents, context compaction, observability, memory, and the critical need for secure compute sandboxes. For anyone building in AI today, this episode cuts through the noise to reveal the new infrastructure required to make autonomous agents work in the real world.
Harrison Chase
LangChain
Matt Turck (Managing Director)
FirstMark
00:00 Intro - meet Harrison Chase
01:32 What changed in agents over the last year
03:57 Why coding agents are ahead
06:26 Do models commoditize the framework layer?
08:27 Harnesses, in plain English
10:11 Why system prompts matter so much
13:11 The upside — and downside — of subagents
15:31 Why a useful agent needs a filesystem
18:13 The core primitives of modern agents
19:12 Skills: the new primitive
20:19 What context compaction actually means
23:02 How memory works in agents
25:16 One mega-agent or many specialized agents?
27:46 Has MCP won?
29:38 Why agents need sandboxes
32:35 How sandboxes help with security
33:32 How Harrison Chase started LangChain
37:24 LangChain vs LangGraph vs Deep Agents
40:17 Why observability matters more for agents
41:48 Evals, no-code, and continuous improvement
44:41 What LangChain is building next
45:29 Where the real moat in AI lives
