BI 229 Tomaso Poggio: Principles of Intelligence and Learning
Brain Inspired · Jan 14, 2026

AI Summary

In this episode, Tomaso Poggio discusses extending Marr's three levels of analysis by adding learning as a fourth level, arguing that understanding intelligence requires both engineering breakthroughs and theoretical foundations—much like the era between Volta's battery and Maxwell's equations. He explains how principles such as compositional sparsity and genericity can account for the efficiency and generalizability of deep learning, contrasting biological learning mechanisms with backpropagation. Poggio emphasizes that theory is essential for guiding AI development, improving interpretability, and linking neuroscience insights to machine learning, while also acknowledging the ongoing dialogue between experimental data and mathematical models.

Episode Description

Support the show to get full episodes, full archive, and join the Discord community.

The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.

Read more about our partnership.

Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released.

To explore more neuroscience news and perspectives, visit thetransmitter.org.

Tomaso Poggio is the Eugene McDermott professor in the Department of Brain and Cognitive Sciences, an investigator at the McGovern Institute for Brain Research, a member of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and director of both the Center for Biological and Computational Learning at MIT and the Center for Brains, Minds, and Machines.

Tomaso believes we are in between building useful AI and understanding it; that is, we are in between engineering and theory. He likens this stage to the period between Volta's invention of the battery and Maxwell's development of the equations of electromagnetism. Tomaso has worked for decades on the theory and principles behind intelligence and learning in brains and machines. I first learned of him via his work with David Marr, in which they developed "Marr's levels" of analysis, which frame explanation in terms of computation/function, algorithms, and implementation. Since then, Tomaso has added learning as a crucial fourth level. I will refer you to his autobiography to learn more about the many influential people and projects he has worked with and on, the theorems he and others have proved to discover principles of intelligence, and his broader thoughts and reflections.

Right now, he is focused on the principles of compositional sparsity and genericity to explain how deep learning networks can efficiently (in the computational sense) learn useful representations to solve tasks.
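To make the compositional-sparsity idea concrete, here is a minimal, hypothetical sketch in Python. The constituent functions h1, h2, h3 and the toy numbers are illustrative inventions, not from the episode; the parameter-count comparison follows the flavor of approximation-theory results from Poggio and colleagues (roughly, eps^(-d/m) parameters to approximate a generic d-variable function of smoothness m, versus about (d-1) * eps^(-2/m) for a deep network matching a binary composition graph).

```python
import numpy as np

# Hypothetical target (mine, not from the episode): a function of d = 4
# variables that is compositionally sparse -- it is a composition of
# constituent functions, each depending on only 2 variables:
#   f(x) = h3( h1(x0, x1), h2(x2, x3) )
def h1(a, b):
    return np.tanh(a + 2.0 * b)

def h2(a, b):
    return np.sin(a * b)

def h3(a, b):
    return a - 0.5 * b

def f(x):
    # x has shape (n, 4); evaluate the composition graph above.
    return h3(h1(x[:, 0], x[:, 1]), h2(x[:, 2], x[:, 3]))

x = np.random.default_rng(0).normal(size=(5, 4))
print(f(x))

# Back-of-the-envelope efficiency argument (toy numbers): approximating a
# generic smooth function of d variables to accuracy eps takes on the
# order of eps**(-d/m) parameters (m = smoothness) -- exponential in d --
# while a deep network matching the composition graph only has to
# approximate d - 1 two-variable constituents, roughly (d - 1) * eps**(-2/m).
eps, m, d = 0.1, 2, 4
generic_cost = eps ** (-d / m)                  # ~100 for these toy numbers
compositional_cost = (d - 1) * eps ** (-2 / m)  # ~30
print(generic_cost, compositional_cost)
```

The point, as Poggio argues in the episode, is that when the target function has this kind of sparse compositional structure, depth lets a network inherit the low arity of the constituents, escaping the curse of dimensionality that afflicts generic high-dimensional approximation.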

Lab website.

Tomaso's Autobiography 

Related papers

Position: A Theory of Deep Learning Must Include Compositional Sparsity

The Levels of Understanding framework, revised

Blog post:

The Missing Foundations of Intelligence, on the Poggio lab blog.

0:00 - Intro

9:04 - Learning as the fourth level of Marr's levels

12:34 - Engineering then theory (Volta to Maxwell)

19:23 - Does AI need theory?

26:29 - Learning as the door to intelligence

38:30 - Learning in the brain vs backpropagation

40:45 - Compositional sparsity

49:57 - Math vs computer science

56:50 - Generalizability

1:04:41 - Sparse compositionality in brains?

1:07:33 - Theory vs experiment

1:09:46 - Who needs deep learning theory?

1:19:51 - Does theory really help?

1:28:54 - Outlook
