
AI Pulse


Brain Inspired

BI 229 Tomaso Poggio: Principles of Intelligence and Learning

January 14, 2026 • 1h 41m

Key Takeaways

  • Sparse compositionality enables efficient, generalizable AI systems.
  • Learning is the fourth level of Marr’s analysis framework.
  • AI currently sits between Volta’s battery and Maxwell’s theory.
  • A theory-first approach drives applications in vision, genetics, and autonomous driving.
  • The neocortex may implement compositionally sparse functions, much like deep networks.

Pulse Analysis

In this wide‑ranging conversation, MIT’s Tomaso Poggio reflects on a career that bridges brain science and artificial intelligence. He argues that modern AI is still in the "Volta" stage—engineered breakthroughs without a unifying theory—much like early electricity before Maxwell’s equations. By tracing his work from early kernel‑machine theory to today’s deep networks, Poggio highlights how a theory‑first mindset has allowed him to apply learning across vision, genomics, and autonomous driving, turning abstract principles into concrete breakthroughs.

A central theme is sparse compositionality: complex intelligence emerges from many simple, low‑dimensional functions combined hierarchically. This principle explains why depth is essential in neural nets and mirrors the neocortex’s modular architecture. Poggio also revisits the fourth level of Marr’s analysis, positioning learning as the pivotal bridge between computational algorithms and biological implementation. By formalizing sparsity and compositionality, he demonstrates how these mathematical constraints yield both generalization and computational efficiency.
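The idea of sparse compositionality can be sketched in a few lines of Python. The constituent function and tree structure below are illustrative assumptions, not taken from the episode: a toy function of eight variables is assembled entirely from two‑argument pieces arranged as a binary tree, so each node has low effective dimensionality even though the whole composition depends on every input.

```python
# Toy illustration of compositional sparsity (hypothetical example):
# a function of n variables built only from 2-ary constituent functions,
# composed tree-style so depth grows as log2(n).

def g(a, b):
    # A simple two-argument constituent function; any 2-ary map would do.
    return (a * b + a + b) % 7

def compositional(xs):
    # Reduce the inputs pairwise, layer by layer, like a binary tree of g's.
    layer = list(xs)
    while len(layer) > 1:
        layer = [g(layer[i], layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

print(compositional([1, 2, 3, 4, 5, 6, 7, 8]))
```

Each layer of the tree plays the role of a layer in a deep network: a shallow (one‑layer) model approximating the same map would have to treat it as a single high‑dimensional function, which is the intuition behind why depth matters for compositionally sparse targets.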

Looking ahead, Poggio cautions that a single set of Maxwell‑style equations may never capture intelligence. Instead, a collection of guiding principles—akin to molecular biology’s DNA replication rules—will likely shape future research. He envisions a collaborative dance between theory and engineering, where rigorous theorems inform new architectures while empirical advances inspire deeper insights. As AI systems grow more capable, understanding whether brain processes are also compositionally sparse remains an open, high‑stakes question that could redefine both neuroscience and machine learning.

Episode Description

Support the show to get full episodes, full archive, and join the Discord community.

The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.

Read more about our partnership.

Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released.


Tomaso Poggio is the Eugene McDermott professor in the Department of Brain and Cognitive Sciences, an investigator at the McGovern Institute for Brain Research, a member of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and director of both the Center for Biological and Computational Learning at MIT and the Center for Brains, Minds, and Machines.

Tomaso believes we are in between building and understanding useful AI. That is, we are in between engineering and theory. He likens this stage to the period after Volta invented the battery and before Maxwell developed the equations of electromagnetism. Tomaso has worked for decades on the theory and principles behind intelligence and learning in brains and machines. I first learned of him via his work with David Marr, in which they developed "Marr's levels" of analysis that frame explanation in terms of computation/function, algorithms, and implementation. Since then Tomaso has added "learning" as a crucial fourth level. I refer you to his autobiography to learn more about the many influential people and projects he has worked with and on, the theorems he and others have proved to discover principles of intelligence, and his broader thoughts and reflections.

Right now, he is focused on the principles of compositional sparsity and genericity to explain how deep learning networks can efficiently (in the computational sense) learn useful representations to solve tasks.

Lab website.

Tomaso's Autobiography 

Related papers

Position: A Theory of Deep Learning Must Include Compositional Sparsity

The Levels of Understanding framework, revised

Blog post:

Poggio lab blog.

The Missing Foundations of Intelligence

0:00 - Intro

9:04 - Learning as the fourth level of Marr's levels

12:34 - Engineering then theory (Volta to Maxwell)

19:23 - Does AI need theory?

26:29 - Learning as the door to intelligence

38:30 - Learning in the brain vs backpropagation

40:45 - Compositional sparsity

49:57 - Math vs computer science

56:50 - Generalizability

1:04:41 - Sparse compositionality in brains?

1:07:33 - Theory vs experiment

1:09:46 - Who needs deep learning theory?

1:19:51 - Does theory really help?

1:28:54 - Outlook

Show Notes
