AI Pulse

AI

Dario Amodei — “We Are Near the End of the Exponential”

Dwarkesh Patel • February 13, 2026

Why It Matters

The imminent ability of AI to automate most software development will dramatically reshape tech labor markets and accelerate product innovation, making strategic investment in AI tooling a critical competitive priority.

Key Takeaways

  • Exponential AI progress has tracked early scaling predictions and is nearing a plateau
  • Reinforcement learning now mirrors pre‑training scaling laws across tasks
  • Human‑like learning still requires massive compute, unlike biological efficiency
  • RL environments may be red herrings; generalization stems from broader data
  • Dario predicts AI will write 90% of code within months, with full automation soon after

Summary

Dario Amodei reflects on the past three years of AI development, arguing that the exponential growth of model capabilities has unfolded roughly as he anticipated and that we are now approaching the tail end of that exponential curve. He revisits his “Big Blob of Compute” hypothesis, emphasizing that raw compute, data quantity and quality, training duration, scalable objectives, and numerical stability remain the dominant drivers of progress, whether in pre‑training or reinforcement‑learning (RL) phases.

Amodei notes that scaling laws observed in language‑model pre‑training now appear in RL contexts as well, with performance on math contests and other tasks improving log‑linearly with training time. He points out that early models trained on narrow corpora failed to generalize, whereas broad internet‑scale data enabled the leap from GPT‑1 to GPT‑2 and beyond. The same principle applies to RL: starting with simple environments and expanding to diverse tasks yields broader generalization.
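The log-linear relationship described here can be sketched numerically. The function and all constants below are illustrative assumptions for demonstration, not figures from the episode:

```python
import math

def toy_score(compute: float, base: float = 20.0, slope: float = 8.0) -> float:
    """Hypothetical benchmark score as a log-linear function of training
    compute: score = base + slope * log10(compute). The base and slope
    values are made up; real scaling-law fits are estimated from data."""
    return base + slope * math.log10(compute)

# Under a log-linear law, each 10x increase in compute adds a fixed
# number of points rather than a fixed multiple:
for c in (1e21, 1e22, 1e23):
    print(f"{c:.0e} FLOPs -> score {toy_score(c):.1f}")
```

The key property, and the reason such curves matter for forecasting, is that equal multiplicative increases in compute buy equal additive gains, so progress looks like a straight line on a log-x plot.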

Key moments include his observation that “we are near the end of the exponential,” the claim that AI will write 90% of code within months, and the analogy that model training sits between human evolution and on‑the‑spot learning. He also highlights the puzzling sample‑efficiency gap between humans and models, suggesting that while models are blank slates, humans benefit from evolutionary priors.

The implications are profound: if scaling truly plateaus soon, the next breakthroughs will hinge on data breadth and objective design rather than raw compute. Software engineering could see 90% of code generated autonomously, reshaping talent demand and accelerating product cycles. Moreover, the convergence of pre‑training and RL scaling strengthens confidence that AGI‑level capabilities may emerge within a few years, prompting firms to reassess R&D strategies and workforce planning.

Original Description

Dario Amodei thinks we are just a few years away from “a country of geniuses in a data center”. In this episode, we discuss what to make of the scaling hypothesis in the current RL regime, how AI will diffuse throughout the economy, whether Anthropic is underinvesting in compute given their timelines, how frontier labs will ever make money, whether regulation will destroy the boons of this technology, US-China competition, and much more.
EPISODE LINKS
* Transcript: https://www.dwarkesh.com/p/dario-amodei-2
* Apple Podcasts: https://podcasts.apple.com/us/podcast/dario-amodei-the-highest-stakes-financial-model-in-history/id1516093381?i=1000749621800
* Spotify: https://open.spotify.com/episode/2ZNrpVSrgZMlDwQinl20Ay?si=9D4aG1l7S-2wzLsiILRLIg
SPONSORS
- Labelbox can get you the RL tasks and environments you need. Their massive network of subject-matter experts ensures realism across domains, and their in-house tooling lets them continuously tweak task difficulty to optimize learning. Reach out at https://labelbox.com/dwarkesh
- Jane Street sent me another puzzle… this time, they’ve trained backdoors into 3 different language models — they want you to find the triggers. Jane Street isn’t even sure this is possible, but they’ve set aside $50,000 for the best attempts and write-ups. They’re accepting submissions until April 1st at https://janestreet.com/dwarkesh
- Mercury’s personal accounts make it easy to share finances with a partner, a roommate… or OpenClaw. Last week, I wanted to try OpenClaw for myself, so I used Mercury to spin up a virtual debit card with a small spend limit, and then I let my agent loose. No matter your use case, apply at https://mercury.com/personal-banking
To sponsor a future episode, visit https://dwarkesh.com/advertise.
TIMESTAMPS
00:00:00 - What exactly are we scaling?
00:12:36 - Is diffusion cope?
00:29:42 - Is continual learning necessary?
00:46:20 - If AGI is imminent, why not buy more compute?
00:58:49 - How will AI labs actually make profit?
01:31:19 - Will regulations destroy the boons of AGI?
01:47:41 - Why can’t China and America both have a country of geniuses in a datacenter?