If You Can't See Inside, How Do You Know It's THINKING? [Dr. Jeff Beck]

Machine Learning Street Talk • January 25, 2026

Why It Matters

Understanding and measuring agency clarifies when AI systems truly exhibit autonomous decision‑making, guiding both safe deployment and scientific interpretation of intelligent behavior.

Key Takeaways

  • Agency defined by internal policy computation, not just input-output mapping.
  • Planning and counterfactual reasoning distinguish true agents from sophisticated functions.
  • Transfer entropy can quantify degree of agency in observed systems.
  • Physical embodiment remains essential for considering something an actual agent.
  • Energy‑based models embed inductive priors, improving interpretability over pure function approximators.

Summary

The conversation centers on what it means for a system to "think" and how to recognize agency when internal computations are hidden. Dr. Jeff Beck argues that an agent is distinguished by having internal states that generate policies over long time scales, rather than being a simple input‑output device. He ties this to geometric deep learning, noting that incorporating physical symmetries improves modeling of the world, but the deeper question remains how to infer agency from observable behavior.
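
To make the symmetry point concrete: a convolutional layer is translation-equivariant by construction, so shifting its input shifts its output in lockstep. The one-line PyTorch check below (with circular padding so the equivariance is exact at the boundaries) illustrates the general principle; it is not code from the episode.

```python
import torch

# Circular padding makes the 1D convolution exactly equivariant to
# circular shifts: conv(shift(x)) == shift(conv(x)).
conv = torch.nn.Conv1d(1, 1, kernel_size=3, padding=1,
                       padding_mode="circular", bias=False)
x = torch.randn(1, 1, 16)
shift = lambda t: torch.roll(t, 2, dims=-1)
print(torch.allclose(conv(shift(x)), shift(conv(x)), atol=1e-6))  # True
```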

Key insights include the need for planning and counterfactual reasoning as hallmarks of genuine agency. Metrics such as transfer entropy can estimate how much information a system integrates over time, offering a quantitative, though non‑normative, gauge of agency. Beck also stresses that physical embodiment matters; a high‑fidelity simulation of a brain may replicate behavior, yet without a material substrate he hesitates to call it an agent.
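
For readers who want the metric in concrete form, here is a minimal plug-in estimator of transfer entropy, TE(X→Y) = Σ p(y_{t+1}, y_t, x_t) log₂[p(y_{t+1}|y_t, x_t) / p(y_{t+1}|y_t)], for discrete series with one-step histories. The synthetic noisy-copy coupling is an invented toy, not data from the episode.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in TE(X -> Y) in bits, with one-step histories:
    sum over (y1, y0, x0) of p(y1,y0,x0) * log2[ p(y1|y0,x0) / p(y1|y0) ]."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))
    pair_yx = Counter(zip(y[:-1], x[:-1]))       # for p(y1 | y0, x0)
    pair_yy = Counter(zip(y[1:], y[:-1]))        # for p(y1 | y0)
    single_y = Counter(y[:-1])
    n = len(y) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_full = c / pair_yx[(y0, x0)]                # p(y1 | y0, x0)
        p_self = pair_yy[(y1, y0)] / single_y[y0]     # p(y1 | y0)
        te += (c / n) * np.log2(p_full / p_self)
    return te

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 50_000)
y = np.empty_like(x)
y[0] = 0
y[1:] = x[:-1] ^ (rng.random(len(x) - 1) < 0.1)  # y copies x, lag 1, 10% flips

print(f"TE(X->Y) = {transfer_entropy(x, y):.3f} bits")  # large: x drives y
print(f"TE(Y->X) = {transfer_entropy(y, x):.3f} bits")  # near 0: y doesn't drive x
```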

Illustrative examples range from labeling a rock as an agent under a broad definition, to dissecting a chess engine that appears to plan but could be reduced to a sophisticated policy function. The dialogue also touches on energy‑based models, contrasting them with standard feed‑forward networks by highlighting their built‑in inductive priors that constrain input‑output relationships, thereby offering clearer interpretability.

The implications are twofold: for AI research, developing metrics that capture planning depth and information integration could refine how we label and evaluate autonomous systems; philosophically, the discussion underscores that agency may be a continuum rather than a binary label, urging practitioners to adopt probabilistic, degree‑based frameworks rather than strict categorical distinctions.

Original Description

What makes something truly intelligent? Is a rock an agent? Could a perfect simulation of your brain actually be you? In this fascinating conversation, Dr. Jeff Beck takes us on a journey through the philosophical and technical foundations of agency, intelligence, and the future of AI.
Jeff doesn't hold back on the big questions. He argues that from a purely mathematical perspective, there's no structural difference between an agent and a rock – both execute policies that map inputs to outputs. The real distinction lies in sophistication – how complex are the internal computations? Does the system engage in planning and counterfactual reasoning, or is it just a lookup table that happens to give the right answers?
Key topics explored in this conversation:
The Black Box Problem of Agency – How can we tell if something is truly planning versus just executing a pre-computed response? Jeff explains why this question is nearly impossible to answer from the outside, and why the best we can do is ask which model gives us the simplest explanation.
Energy-Based Models Explained – A masterclass on how EBMs differ from standard neural networks. The key insight: traditional networks only optimize weights, while energy-based models optimize both weights and internal states – a subtle but profound distinction that connects to Bayesian inference.
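
As a sketch of that distinction (architecture, sizes, and optimizer settings below are arbitrary choices, not from the talk): a feed-forward net would compute its hidden state in a single pass, whereas an energy-based model treats the internal state as a variable to be optimized at test time while the weights stay fixed.

```python
import torch

class Energy(torch.nn.Module):
    """Scalar energy E_w(x, z) over an input x and an internal state z."""
    def __init__(self, x_dim=8, z_dim=2):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(x_dim + z_dim, 32), torch.nn.Tanh(),
            torch.nn.Linear(32, 1))
    def forward(self, x, z):
        return self.net(torch.cat([x, z], dim=-1)).squeeze(-1)

def infer_state(energy, x, z_dim=2, steps=50, lr=0.1):
    """Inference = gradient descent on the internal state z (weights frozen)."""
    z = torch.zeros(x.shape[0], z_dim, requires_grad=True)
    opt = torch.optim.SGD([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        energy(x, z).sum().backward()
        opt.step()
    return z.detach()

energy = Energy()
x = torch.randn(4, 8)
z_star = infer_state(energy, x)   # inner loop: optimize states, not weights
# An outer training loop would then adjust the weights of `energy` so that
# observed data lands in low-energy regions, i.e. both states and weights
# get optimized, just on different timescales.
print(z_star.shape)               # torch.Size([4, 2])
```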
Why Your Brain Might Have Evolved from Your Nose – One of the most surprising moments in the conversation. Jeff proposes that the complex, non-smooth nature of olfactory space may have driven the evolution of our associative cortex and planning abilities.
The JEPA Revolution – A deep dive into Yann LeCun's Joint Embedding Prediction Architecture and why learning in latent space (rather than predicting every pixel) might be the key to more robust AI representations.
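
The structural point (predict in latent space, not pixel space) fits in a few lines. The toy encoder and stop-gradient target below are illustrative stand-ins for JEPA's context and target encoders, omitting details like EMA target updates and anti-collapse machinery.

```python
import torch

encoder = torch.nn.Sequential(                 # shared backbone (toy sizes)
    torch.nn.Linear(64, 32), torch.nn.ReLU(), torch.nn.Linear(32, 16))
predictor = torch.nn.Linear(16, 16)            # operates purely in latent space

x_context = torch.randn(8, 64)                 # e.g. visible patches
x_target = torch.randn(8, 64)                  # e.g. masked patches

s_ctx = encoder(x_context)
with torch.no_grad():                          # stop-gradient target branch
    s_tgt = encoder(x_target)                  # (real JEPA uses an EMA copy)

loss = ((predictor(s_ctx) - s_tgt) ** 2).mean()  # error measured in latent space
loss.backward()                                  # no pixel reconstruction anywhere
```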
AI Safety Without Skynet Fears – Jeff takes a refreshingly grounded stance on AI risk. He's less worried about rogue superintelligences and more concerned about humans becoming "reward function selectors" – couch potatoes who just approve or reject AI outputs. His proposed solution? Use inverse reinforcement learning to derive AI goals from observed human behavior, then make small perturbations rather than naive commands like "end world hunger."
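
To make the inverse-RL suggestion concrete, here is a compact sketch of maximum-entropy IRL (the Ziebart et al. 2008 paper in the references below): fit reward weights so the learner's expected feature counts match those of the demonstrations. The five-state chain MDP, feature map, and optimization settings are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, H = 5, 2, 10                       # states, actions, horizon
step = lambda s, a: max(0, s - 1) if a == 0 else min(S - 1, s + 1)
phi = np.eye(S)                          # one-hot state features

def soft_policy(theta):
    """Soft value iteration: pi(a|s) proportional to exp(Q), r = phi @ theta."""
    r, V = phi @ theta, np.zeros(S)
    for _ in range(H):
        Q = np.array([[r[s] + V[step(s, a)] for a in range(A)]
                      for s in range(S)])
        V = np.logaddexp(Q[:, 0], Q[:, 1])   # soft max over the two actions
    return np.exp(Q - V[:, None])            # normalized policy

def mean_features(pi, n=300):
    """Average per-step feature counts of n trajectories sampled from pi."""
    total = np.zeros(S)
    for _ in range(n):
        s = 0
        for _ in range(H):
            total += phi[s]
            s = step(s, rng.choice(A, p=pi[s]))
    return total / (n * H)

# "Demonstrations": behavior under a hidden reward that prefers the last state.
emp = mean_features(soft_policy(np.array([0., 0., 0., 0., 1.])))

# MaxEnt IRL gradient ascent: the gradient of the demonstration log-likelihood
# is (empirical feature counts - expected feature counts under pi_theta).
theta = np.zeros(S)
for _ in range(150):
    theta += 1.0 * (emp - mean_features(soft_policy(theta)))
print("recovered reward weights:", np.round(theta, 2))
```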
Whether you're interested in the philosophy of mind, the technical details of modern machine learning, or just want to understand what makes intelligence tick, this conversation delivers insights you won't find anywhere else.

TIMESTAMPS:
00:00:00 Geometric Deep Learning & Physical Symmetries
00:00:56 Defining Agency: From Rocks to Planning
00:05:25 The Black Box Problem & Counterfactuals
00:08:45 Simulated Agency vs. Physical Reality
00:12:55 Energy-Based Models & Test-Time Training
00:17:30 Bayesian Inference & Free Energy
00:20:07 JEPA, Latent Space, & Non-Contrastive Learning
00:27:07 Evolution of Intelligence & Modular Brains
00:34:00 Scientific Discovery & Automated Experimentation
00:38:04 AI Safety, Enfeeblement & The Future of Work

REFERENCES:
Concept:
[00:00:58] Free Energy Principle (FEP)
https://en.wikipedia.org/wiki/Free_energy_principle
[00:06:00] Monte Carlo Tree Search
https://en.wikipedia.org/wiki/Monte_Carlo_tree_search
Book:
[00:09:00] The Intentional Stance
https://mitpress.mit.edu/9780262540537/the-intentional-stance/
Paper:
[00:13:00] A Tutorial on Energy-Based Learning (LeCun 2006)
http://yann.lecun.com/exdb/publis/pdf/lecun-06.pdf
[00:15:00] Auto-Encoding Variational Bayes (VAE)
https://arxiv.org/abs/1312.6114
[00:20:15] JEPA (Joint Embedding Prediction Architecture)
https://openreview.net/forum?id=BZ5a1r-kVsf
[00:22:30] The Wake-Sleep Algorithm
https://www.cs.toronto.edu/~hinton/absps/ws.pdf
[00:22:45] Barlow Twins: Self-Supervised Learning
https://arxiv.org/abs/2103.03230
[00:30:40] GFlowNets (Generative Flow Networks)
https://arxiv.org/abs/2111.09266
[00:45:00] Maximum Entropy Inverse Reinforcement Learning
https://www.aaai.org/Papers/AAAI/2008/AAAI08-227.pdf
Challenge:
[00:27:15] ARC Prize (Abstraction and Reasoning Corpus)
https://arcprize.org/

RESCRIPT:
https://app.rescript.info/public/share/DJlSbJ_Qx080q315tWaqMWn3PixCQsOcM4Kf1IW9_Eo
PDF:
https://app.rescript.info/api/public/sessions/0efec296b9b6e905/pdf