AI

The Algorithm That IS The Scientific Method [Dr. Jeff Beck]

Machine Learning Street Talk • December 31, 2025

Why It Matters

Understanding the brain as a Bayesian, causal inference engine informs both AI development and scientific methodology, enabling more efficient learning, better decision‑making, and clearer pathways from data to actionable knowledge.

Key Takeaways

  • Bayesian inference mirrors the scientific method’s hypothesis-testing cycle.
  • Human perception combines cues optimally, reflecting Bayesian computation.
  • The brain prioritizes uncertainty, deciding what information to ignore.
  • Causal models simplify prediction and guide effective interventions.
  • Technology expands our affordances, letting us act on microscopic causality.

Summary

Dr. Jeff Beck frames Bayesian inference as the algorithmic core of the scientific method, arguing that the brain implements this same normative approach when interpreting data. He traces his own journey from studying pattern formation in complex systems to embracing Bayesian reasoning after witnessing experiments that showed humans and animals combine sensory cues in a statistically optimal way.

The talk highlights several empirical findings: cue-combination experiments demonstrate that subjects weight visual and auditory information according to trial-by-trial reliability, effectively performing Bayesian updates. He emphasizes that the brain constantly evaluates uncertainty, deciding which inputs to ignore, a process he likens to the 90% of neural activity devoted to filtering irrelevant data. Beck also connects these ideas to modern machine-learning practice, noting that self-supervised models such as large language models embody the brain’s habit of forming priors from raw experience. Illustrative quotes reinforce his points: “the brain is Bayesian,” and “causal models reduce the number of variables we must track, making prediction and intervention tractable.”

He uses the physics concept of momentum as a hidden variable that renders dynamics Markovian, arguing that we choose such variables for computational convenience rather than because they are ontologically fundamental. The discussion of macro versus micro causation underscores that useful causal models are those aligned with our actionable affordances, which technology can extend.

The implications are twofold. For AI and cognitive science, adopting Bayesian and causal-model frameworks can yield systems that learn efficiently, handle uncertainty, and plan actions like humans. For scientific practice, recognizing the algorithmic nature of hypothesis testing encourages more explicit model specification and intervention-based validation, potentially accelerating discovery across disciplines.
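The reliability-weighted cue combination described above has a simple closed form for independent Gaussian cues: each cue is weighted by its precision (inverse variance), and the fused estimate is always less uncertain than either cue alone. A minimal sketch, with illustrative numbers not taken from the talk:

```python
def combine_cues(mu_v, var_v, mu_a, var_a):
    """Fuse visual and auditory estimates by precision (inverse-variance)
    weighting -- the Bayes-optimal rule for independent Gaussian cues."""
    w_v = (1 / var_v) / (1 / var_v + 1 / var_a)   # weight on the visual cue
    mu = w_v * mu_v + (1 - w_v) * mu_a            # posterior mean
    var = 1 / (1 / var_v + 1 / var_a)             # posterior variance (smaller than either cue's)
    return mu, var

# A reliable visual cue (variance 1) dominates a noisy auditory one (variance 4):
mu, var = combine_cues(10.0, 1.0, 14.0, 4.0)
print(mu, var)  # 10.8 0.8
```

This is the same computation the cue-combination experiments test for: as one modality becomes noisier trial by trial, its weight in the fused estimate shrinks proportionally.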

Original Description

Dr. Jeff Beck, mathematician turned computational neuroscientist, joins us for a fascinating deep dive into why the future of AI might look less like ChatGPT and more like your own brain.
What if the key to building truly intelligent machines isn't bigger models, but smarter ones?
In this conversation, Jeff makes a compelling case that we've been building AI backwards. While the tech industry races to scale up transformers and language models, Jeff argues we're missing something fundamental: the brain doesn't work like a giant prediction engine. It works like a scientist, constantly testing hypotheses about a world made of objects that interact through forces — not pixels and tokens.
The Bayesian Brain — Jeff explains how your brain is essentially running the scientific method on autopilot. When you combine what you see with what you hear, you're doing optimal Bayesian inference without even knowing it. This isn't just philosophy — it's backed by decades of behavioral experiments showing humans are surprisingly efficient at handling uncertainty.
AutoGrad Changed Everything — Forget transformers for a moment. Jeff argues the real hero of the AI boom was automatic differentiation, which turned AI from a math problem into an engineering problem. But in the process, we lost sight of what actually makes intelligence work.
The Cat in the Warehouse Problem — Here's where it gets practical. Imagine a warehouse robot that's never seen a cat. Current AI would either crash or make something up. Jeff's approach? Build models that know what they don't know, can phone a friend to download new object models on the fly, and keep learning continuously. It's like giving robots the ability to say "wait, what IS that?" instead of confidently being wrong.
Why Language is a Terrible Model for Thought — In a provocative twist, Jeff argues that grounding AI in language (like we do with LLMs) is fundamentally misguided. Self-report is the least reliable data in psychology — people routinely explain their own behavior incorrectly. We should be grounding AI in physics, not words.
The Future is Lots of Little Models — Instead of one massive neural network, Jeff envisions AI systems built like video game engines: thousands of small, modular object models that can be combined, swapped, and updated independently. It's more efficient, more flexible, and much closer to how we actually think.
Whether you're an AI researcher, a robotics enthusiast, or just curious about how minds — biological or artificial — actually work, this conversation offers a refreshingly different perspective on where intelligence comes from and where it's going.
Rescript: https://app.rescript.info/public/share/D-b494t8DIV-KRGYONJghvg-aelMmxSDjKthjGdYqsE

TIMESTAMPS:
00:00:00 Introduction & The Bayesian Brain
00:01:25 Bayesian Inference & Information Processing
00:05:17 The Brain Metaphor: From Levers to Computers
00:10:13 Micro vs. Macro Causation & Instrumentalism
00:16:59 The Active Inference Community & AutoGrad
00:22:54 Object-Centered Models & The Grounding Problem
00:35:50 Scaling Bayesian Inference & Architecture Design
00:48:05 The Cat in the Warehouse: Solving Generalization
00:58:17 Alignment via Belief Exchange
01:05:24 Deception, Emergence & Cellular Automata

REFERENCES:
Paper:
[00:00:24] Zoubin Ghahramani (Google DeepMind)
https://pmc.ncbi.nlm.nih.gov/articles/PMC3538441/pdf/rsta201
[00:19:20] Mamba: Linear-Time Sequence Modeling
https://arxiv.org/abs/2312.00752
[00:27:36] xLSTM: Extended Long Short-Term Memory
https://arxiv.org/abs/2405.04517
[00:41:12] 3D Gaussian Splatting
https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/
[01:07:09] Lenia: Biology of Artificial Life
https://arxiv.org/abs/1812.05433
[01:08:20] Growing Neural Cellular Automata
https://distill.pub/2020/growing-ca/
[01:14:05] DreamCoder
https://arxiv.org/abs/2006.08381
[01:14:58] The Genomic Bottleneck
https://www.nature.com/articles/s41467-019-11786-6
Person:
[00:16:42] Karl Friston (UCL)
https://www.youtube.com/watch?v=PNYWi996Beg
