BI 233 Tom Griffiths: The Laws of Thought

Brain Inspired • March 11, 2026 • 1h 40m

Why It Matters

Understanding the interplay of logic, probability, and neural networks offers a unified framework for both human cognition and artificial intelligence, guiding the development of more robust, explainable AI. This perspective is especially timely as researchers grapple with the limitations of current models and seek principled ways to align AI behavior with human-like reasoning.

Key Takeaways

  • Logic, probability, neural networks form cognition's three pillars.
  • Probability theory extends logic for uncertain inference.
  • Large language models blend neural nets with probabilistic inference.
  • Computational level uncovers universal laws like Shepard's generalization.
  • Bridging psychology and CS reconciles brain limits with algorithms.

Pulse Analysis

Tom Griffiths’ new book, *The Laws of Thought*, argues that cognition rests on three complementary pillars: logic, neural networks, and probability theory. By tracing the historical roots from George Boole’s symbolic logic to today’s deep‑learning architectures, Griffiths shows how each framework captures a different slice of intelligent behavior. Logic supplies deductive certainty, probability offers a principled way to handle uncertainty, and neural networks provide the algorithmic machinery to approximate optimal solutions. This tripartite view reframes the long‑standing debate between psychologists, who emphasize human limitations, and computer scientists, who celebrate algorithmic power, suggesting a unified mathematical theory of the mind.

At the heart of Griffiths’ argument is the claim that probability theory is simply logic extended to uncertain worlds. By assigning probabilities to each possible world, Bayesian inference lets agents update beliefs as new evidence arrives, turning deductive certainty into graded expectation. This extension explains why large language models, which are neural networks trained to predict token sequences, succeed: they implicitly learn a probabilistic distribution over language and code, a symbolic substrate rich in hierarchy and compositionality. Understanding LLMs therefore requires both the neural implementation level and the probabilistic computational level, linking deep learning performance to the centuries‑old mathematics of inference.
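The belief-updating step described above can be sketched as a plain Bayesian update. This is a minimal illustration of the idea, not an example from the book; the hypotheses and numbers are invented:

```python
def bayes_update(prior, likelihood):
    """Update beliefs over hypotheses given the likelihood of new evidence.

    prior:      dict mapping hypothesis -> P(h)
    likelihood: dict mapping hypothesis -> P(evidence | h)
    Returns the posterior P(h | evidence) via Bayes' rule.
    """
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalized.values())  # P(evidence), the normalizing constant
    return {h: p / total for h, p in unnormalized.items()}

# Two possible worlds (invented): a coin is fair, or biased toward heads.
prior = {"fair": 0.5, "biased": 0.5}

# New evidence: we observe heads. P(heads | fair) = 0.5, P(heads | biased) = 0.9.
posterior = bayes_update(prior, {"fair": 0.5, "biased": 0.9})

# When a hypothesis already has probability 1 or 0, the update leaves it
# there -- which is the sense in which probability extends deductive logic
# rather than replacing it.
```

Deductive certainty is the limiting case: with all probability mass on one world, the update changes nothing, while intermediate priors yield the graded expectations the paragraph describes.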

Griffiths also emphasizes Marr’s three levels of analysis—computational, algorithmic, and implementation—to locate where universal laws of thought are most likely to emerge. The computational level defines the optimal problem‑solving strategy, where logic and Bayesian inference appear as law‑like prescriptions. The algorithmic level, populated by neural‑network learning rules, shows how brains approximate those prescriptions despite biological constraints. By bridging psychology’s focus on human limits with computer science’s drive for efficient algorithms, the book offers a roadmap for future AI research: develop models that respect probabilistic rationality while exploiting the representational power of deep networks. This synthesis points toward a more complete mathematical theory of mind.
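One example of a computational-level law mentioned in the takeaways is Shepard's universal law of generalization: the probability that a response learned for one stimulus transfers to another falls off exponentially with their distance in psychological space. A minimal sketch of that relationship follows; the distances and scale are invented for illustration:

```python
import math

def shepard_generalization(distance, scale=1.0):
    """Shepard's universal law of generalization: generalization probability
    decays exponentially with distance in psychological space."""
    return math.exp(-distance / scale)

# Invented psychological distances between a trained stimulus and test stimuli.
for d in (0.0, 1.0, 2.0):
    print(f"distance {d:.1f} -> generalization {shepard_generalization(d):.3f}")
```

The exponential form is the law-like prescription at Marr's computational level; how a brain or network approximates it is an algorithmic-level question.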

Episode Description

Support the show to get full episodes, full archive, and join the Discord community.

The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.

Read more about our partnership.

Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released.


Tom Griffiths directs both the Computational Cognitive Science Lab and the Princeton Laboratory for Artificial Intelligence at Princeton University. He's been on Brain Inspired before to talk about his previous book, Algorithms to Live By: The Computer Science of Human Decisions, which he co-wrote with Brian Christian. Today he's here to talk about his new book, The Laws of Thought: The Quest for a Mathematical Theory of the Mind. In this book, Tom explains how the three pillars of logic, neural networks, and probability theory complement each other to explain cognition, arguing that we are on the doorstep of settling what mathematical principles - the so-called "laws of thought" - underlie our cognition. So we discuss a little bit about a lot of things, including the concepts themselves and the people who generated and worked on those concepts. I should also mention that Tom recorded a bunch of his interviews with the people he writes about, and he's edited and polished those into a podcast called The Cognition Project, which I enjoyed after reading the book, and I think you'd enjoy it either before or after you read the book.

Computational Cognitive Science Lab

Princeton Laboratory for Artificial Intelligence

Social: @cocosci_lab; @cocoscilab.bsky.social

Book:

The Laws of Thought: The Quest for a Mathematical Theory of the Mind.

Podcast: The Cognition Project

Read the transcript.

0:00 - Intro

3:20 - Tom's approach

7:19 - 3 pillars of the laws of thought

28:24 - Logic and formal systems strip away meaning

39:04 - Nature of thought

50:35 - Kahneman and Tversky

1:05:12 - Enabling constraints and inductive bias

1:12:51 - Hidden layers, probability, and hidden Markov models

1:20:47 - Conscious vs nonconscious

1:23:43 - Feelings

1:31:26 - Personal
