The Mathematical Foundations of Intelligence [Professor Yi Ma]

December 13, 2025
Machine Learning Street Talk

Why It Matters

By reframing intelligence as a problem of parsimonious, self‑consistent compression, Ma provides a roadmap for building AI systems that go beyond rote memorization toward genuine understanding, a shift that could unlock more reliable, adaptable, and trustworthy technologies.

Summary

In a recent interview, Professor Yi Ma, a leading figure in deep learning and the author of *Learning Deep Representations of Data Distributions*, outlines a new mathematical framework for intelligence built on two core principles – parsimony and self‑consistency. He argues that to treat intelligence as a scientific discipline we must move beyond empirical trial‑and‑error and articulate the mechanisms that underlie both natural and artificial cognition, from the animal‑level world‑model to the sophisticated large‑scale models that dominate today’s AI landscape.

Ma’s central insight is that intelligence is fundamentally a compression problem: the brain (or any intelligent system) seeks low‑dimensional structures that capture the predictable regularities of the world. This compression, he says, is inseparable from self‑consistency – the compressed representation must be able to reconstruct or simulate the environment without losing predictive power. He contrasts true understanding with mere memorization, noting that current large language models largely perform superficial semantic compression of text, lacking the deeper, multimodal world‑model that underpins human cognition.
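This idea of seeking low‑dimensional structure has a concrete measure in Ma's earlier work on maximal coding rate reduction (MCR², discussed at 00:41:23 and cited in the references below). A rough numerical sketch of the coding rate R(Z) = ½ log det(I + d/(nε²)·ZZᵀ) illustrates the point: data that lies near a low‑dimensional subspace costs fewer bits to encode than generic high‑dimensional data. The dimensions, sample count, and ε below are illustrative choices, not values from the talk.

```python
import numpy as np

def coding_rate(Z: np.ndarray, eps: float = 0.5) -> float:
    """Rate-distortion coding rate from the MCR^2 framework:
    R(Z) = 1/2 * logdet(I + d/(n*eps^2) * Z @ Z.T), for Z of shape (d, n)."""
    d, n = Z.shape
    _, logdet = np.linalg.slogdet(np.eye(d) + (d / (n * eps**2)) * (Z @ Z.T))
    return 0.5 * logdet

rng = np.random.default_rng(0)
d, n = 32, 200
full_rank = rng.standard_normal((d, n))                               # no exploitable structure
low_rank = rng.standard_normal((d, 2)) @ rng.standard_normal((2, n))  # hidden 2-D structure
# Normalize columns so only the intrinsic dimension differs, not the scale.
full_rank /= np.linalg.norm(full_rank, axis=0)
low_rank /= np.linalg.norm(low_rank, axis=0)

print(coding_rate(full_rank))  # higher: generic data resists compression
print(coding_rate(low_rank))   # lower: data compresses onto a 2-D subspace
```

In Ma's framing, a good representation maximizes the gap between the coding rate of the whole dataset and the rates of its parts, which is exactly the sense in which "understanding" means finding the cheapest self‑consistent description.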

The professor illustrates his thesis with vivid analogies: evolution encodes knowledge in DNA through a brutal, low‑efficiency compression process, while modern AI pipelines mimic this by “trial‑and‑error” training of massive networks. He cites the notion that language functions as a set of pointers to internal simulations, and points to his own “white‑box” transformer designs – the CRATE architecture – in which every component follows from first principles rather than ad‑hoc heuristics. A memorable quote from the discussion is, “compression might be necessary for understanding,” underscoring his view that without parsimonious representations, AI cannot achieve genuine abstraction.

The implications are clear: to advance beyond the current generation of models, researchers must embed parsimony and self‑consistency into the core of AI design, moving toward systems that can form robust world‑models and synthesize new knowledge rather than merely regurgitate training data. Ma’s framework challenges the community to re‑evaluate the limits of large‑scale memorization, to invest in principled theory, and to recognize that true intelligence – natural or artificial – hinges on the ability to compress reality into simple, consistent structures.

Original Description

What if everything we think we know about AI understanding is wrong? Is compression the key to intelligence? Or is there something more—a leap from memorization to true abstraction?
In this fascinating conversation, we sit down with Professor Yi Ma—world-renowned expert in deep learning, IEEE/ACM Fellow, and author of the groundbreaking new book Learning Deep Representations of Data Distributions. Professor Ma challenges our assumptions about what large language models actually do, reveals why 3D reconstruction isn't the same as understanding, and presents a unified mathematical theory of intelligence built on just two principles: parsimony and self-consistency.
Key Insights:
LLMs Don't Understand—They Memorize
Language models process text (already compressed human knowledge) using the same mechanism we use to learn from raw data.
The Illusion of 3D Vision
Sora, NeRFs, and similar models that can reconstruct 3D scenes still fail at basic spatial reasoning
"All Roads Lead to Rome"
Why adding noise is necessary for discovering structure.
Why Gradient Descent Actually Works
Natural optimization landscapes are surprisingly smooth—a "blessing of dimensionality"
Transformers from First Principles
Transformer architectures can be mathematically derived from compression principles
—
INTERACTIVE AI TRANSCRIPT PLAYER w/REFS (ReScript):
https://app.rescript.info/public/share/Z-dMPiUhXaeMEcdeU6Bz84GOVsvdcfxU_8Ptu6CTKMQ
About Professor Yi Ma
Yi Ma is the inaugural director of the School of Computing and Data Science at the University of Hong Kong and a visiting professor at UC Berkeley.
https://people.eecs.berkeley.edu/~yima/
https://scholar.google.com/citations?user=XqLiBQMAAAAJ&hl=en
https://x.com/YiMaTweets
Slides from this conversation:
https://www.dropbox.com/scl/fi/sbhbyievw7idup8j06mlr/slides.pdf?rlkey=7ptovemezo8bj8tkhfi393fh9&dl=0
Related Talks by Professor Ma:
- Pursuing the Nature of Intelligence (ICLR): https://www.youtube.com/watch?v=LT-F0xSNSjo
- Earlier talk at Berkeley: https://www.youtube.com/watch?v=TihaCUjyRLM
TIMESTAMPS:
00:00:00 Introduction
00:02:08 The First Principles Book & Research Vision
00:05:21 Two Pillars: Parsimony & Consistency
00:09:50 Evolution vs. Learning: The Compression Mechanism
00:14:36 LLMs: Memorization Masquerading as Understanding
00:19:55 The Leap to Abstraction: Empirical vs. Scientific
00:27:30 Platonism, Deduction & The ARC Challenge
00:35:57 Specialization & The Cybernetic Legacy
00:41:23 Deriving Maximum Rate Reduction
00:48:21 The Illusion of 3D Understanding: Sora & NeRF
00:54:26 All Roads Lead to Rome: The Role of Noise
01:00:14 Benign Non-Convexity: Why Optimization Works
01:06:35 Double Descent & The Myth of Overfitting
01:14:26 Self-Consistency: Closed-Loop Learning
01:21:03 Deriving Transformers from First Principles
01:30:11 Verification & The Kevin Murphy Question
01:34:11 CRATE vs. ViT: White-Box AI & Conclusion
REFERENCES:
Book:
[00:03:04] Learning Deep Representations of Data Distributions
https://ma-lab-berkeley.github.io/deep-representation-learning-book/
[00:18:38] A Brief History of Intelligence
https://www.amazon.co.uk/BRIEF-HISTORY-INTELLIGEN-HB-Evolution/dp/0008560099
[00:38:14] Cybernetics
https://mitpress.mit.edu/9780262730099/cybernetics/
Book (Yi Ma):
[00:03:14] 3-D Vision book
https://link.springer.com/book/10.1007/978-0-387-21779-6
[00:03:24] Generalized PC Analysis
https://link.springer.com/book/10.1007/978-0-387-87811-9
[00:03:34] High-Dimensional Data Analysis book
https://book-wright-ma.github.io/
Slide:
[01:17:56] Slide 26: Neuroscience Evidence
https://arxiv.org/abs/2207.04630
Person:
[01:30:26] Kevin Murphy
https://probml.github.io/pml-book/book1.html
Paper:
[00:27:44] On the Measure of Intelligence
https://arxiv.org/abs/1911.01547
[00:51:54] Eyes Wide Shut?
https://arxiv.org/abs/2401.06209
[00:59:58] A Global Geometric Analysis of Maximal Coding Rate Reduction
https://arxiv.org/pdf/2406.01909
[01:21:11] CRATE
https://arxiv.org/abs/2306.01129
[01:28:50] DINOv2
https://arxiv.org/abs/2304.07193
[01:34:21] An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale (ViT)
https://arxiv.org/abs/2010.11929
Benchmark:
[00:28:24] ARC-AGI: The Abstraction and Reasoning Corpus
https://github.com/fchollet/ARC-AGI