Large Language Mistake

The Verge, Nov 25, 2025

Why It Matters

If LLMs cannot achieve true general intelligence, investors and policymakers will need to temper expectations and redirect resources toward architectures that model real-world understanding, which would reshape both the trajectory of AI development and its economic impact.

Summary

Benjamin Riley argues that large language models (LLMs) are fundamentally limited because they model language, not thought. Citing a recent Nature commentary, he notes neuroscience evidence that human cognition operates independently of linguistic ability, and that language is a communication tool rather than the substrate of intelligence. The piece warns that industry hype about imminent AGI and superintelligence overlooks this distinction, and points to growing skepticism among AI researchers, including Yann LeCun’s shift toward world‑model approaches that capture physical and causal reasoning.
