In episode #490, Lex Fridman sits down with AI veterans Sebastian Raschka and Nathan Lambert to map the rapidly evolving AI landscape of 2026. The trio dissects breakthroughs from large language models to scaling laws, while highlighting how recent open‑weight releases—most notably DeepSeek R1—have reshaped research dynamics. Listeners gain a clear picture of why model openness, hardware accessibility, and cross‑border competition now dominate strategic conversations, setting the stage for the next wave of AI innovation.
Understanding these trends is crucial for anyone navigating the fast‑moving AI ecosystem, as they shape investment decisions, talent development, and competitive strategy across industry and academia. The episode's timely synthesis of technical advances, market dynamics, and geopolitical factors provides a roadmap for anticipating how AI will reshape work, economics, and civilization in the coming years.
The conversation pivots to the intense rivalry between U.S. and Chinese AI ecosystems. DeepSeek's surprise performance in early 2025 ignited a cascade of open‑weight models from Chinese firms such as Z‑AI, Minimax, and Moonshot, democratizing access and challenging the dominance of proprietary offerings like Anthropic's Claude and Google's Gemini. While American companies continue to pour resources into proprietary pipelines, the decisive factor increasingly hinges on budgetary constraints and GPU availability rather than exclusive algorithms. Because ideas circulate freely across this landscape, breakthroughs spread quickly, but execution speed and infrastructure investment create a de facto hierarchy of influence.
Looking ahead, the hosts argue that the race toward fully autonomous coding agents and AGI‑level reasoning will be shaped by how organizations balance open‑source collaboration with commercial sustainability. Anthropic’s focus on code‑centric AI and its cloud‑based services exemplify a model that blends reliability with rapid iteration, whereas Chinese open‑weight initiatives aim to capture global market share despite limited subscription revenue. For business leaders, the takeaway is clear: investing in scalable hardware, staying abreast of emerging open models, and fostering a culture that can translate research into production will determine who thrives in the AI‑driven economy of 2026 and beyond.
Nathan Lambert and Sebastian Raschka are machine learning researchers, engineers, and educators. Nathan is the post-training lead at the Allen Institute for AI (Ai2) and the author of The RLHF Book. Sebastian Raschka is the author of Build a Large Language Model (From Scratch) and Build a Reasoning Model (From Scratch).
https://lexfridman.com/sponsors/ep490-sc
Transcript:
https://lexfridman.com/ai-sota-2026-transcript
CONTACT LEX:
Feedback – give feedback to Lex: https://lexfridman.com/survey
AMA – submit questions, videos or call-in: https://lexfridman.com/ama
Hiring – join our team: https://lexfridman.com/hiring
Other – other ways to get in touch: https://lexfridman.com/contact
SPONSORS:
Box: Intelligent content management platform.
https://box.com/ai
Quo: Phone system (calls, texts, contacts) for businesses.
https://quo.com/lex
UPLIFT Desk: Standing desks and office ergonomics.
https://upliftdesk.com/lex
Fin: AI agent for customer service.
https://fin.ai/lex
Shopify: Sell stuff online.
https://shopify.com/lex
CodeRabbit: AI-powered code reviews.
https://coderabbit.ai/lex
LMNT: Zero-sugar electrolyte drink mix.
https://drinkLMNT.com/lex
Perplexity: AI-powered answer engine.
https://perplexity.ai/
OUTLINE:
(00:00) – Introduction
(01:39) – Sponsors, Comments, and Reflections
(16:29) – China vs US: Who wins the AI race?
(25:11) – ChatGPT vs Claude vs Gemini vs Grok: Who is winning?
(36:11) – Best AI for coding
(43:02) – Open Source vs Closed Source LLMs
(54:41) – Transformers: Evolution of LLMs since 2019
(1:02:38) – AI Scaling Laws: Are they dead or still holding?
(1:18:45) – How AI is trained: Pre-training, Mid-training, and Post-training
(1:51:51) – Post-training explained: Exciting new research directions in LLMs
(2:12:43) – Advice for beginners on how to get into AI development & research
(2:35:36) – Work culture in AI (72+ hour weeks)
(2:39:22) – Silicon Valley bubble
(2:43:19) – Text diffusion models and other new research directions
(2:49:01) – Tool use
(2:53:17) – Continual learning
(2:58:39) – Long context
(3:04:54) – Robotics
(3:14:04) – Timeline to AGI
(3:21:20) – Will AI replace programmers?
(3:39:51) – Is the dream of AGI dying?
(3:46:40) – How will AI make money?
(3:51:02) – Big acquisitions in 2026
(3:55:34) – Future of OpenAI, Anthropic, Google DeepMind, xAI, Meta
(4:08:08) – Manhattan Project for AI
(4:14:42) – Future of NVIDIA, GPUs, and AI compute clusters
(4:22:48) – Future of human civilization