Tensor Logic "Unifies" AI Paradigms [Pedro Domingos]

Machine Learning Street Talk
Dec 7, 2025

Why It Matters

Tensor Logic could unify reasoning and learning in a single, GPU-optimizable language, dramatically lowering engineering complexity and improving the reliability of AI systems for high-stakes enterprise applications.

Summary

Tensor Logic, introduced by Professor Pedro Domingos, is presented as a new programming language that unifies the disparate paradigms of artificial intelligence (symbolic reasoning, deep learning, kernel methods, and graphical models) under a single mathematical construct: the tensor equation. Domingos argues that the Einstein summation (einsum) operation, which underlies the tensor algebra in modern deep-learning frameworks, is mathematically identical to the rule-based inference mechanisms of logic programming, differing only in the data type (real numbers versus Booleans). By treating both numeric and symbolic computation as variants of the same tensor equation, Tensor Logic promises a seamless blend of automated reasoning and gradient-based learning.
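The claimed correspondence can be sketched in plain NumPy (used here as a stand-in; this is not Tensor Logic syntax, and the three-person domain and relation names are invented for illustration). A Datalog-style rule such as uncle(x, z) :- brother(x, y), parent(y, z) becomes an einsum over Boolean adjacency tensors, with the sum over the shared variable y playing the role of existential quantification:

```python
import numpy as np

# Relations as Boolean adjacency tensors over a tiny domain {0, 1, 2}.
# brother[x, y] = 1 means "x is a brother of y"; likewise for parent.
brother = np.zeros((3, 3))
parent = np.zeros((3, 3))
brother[0, 1] = 1.0   # person 0 is a brother of person 1
parent[1, 2] = 1.0    # person 1 is a parent of person 2

# The rule  uncle(x, z) :- brother(x, y), parent(y, z)  becomes an
# einsum (summing out the shared index y) followed by a step function
# that maps any positive count back to a Boolean 1.
uncle = np.heaviside(np.einsum('xy,yz->xz', brother, parent), 0.0)

print(uncle[0, 2])  # 1.0: the rule derives uncle(0, 2)
print(uncle[0, 0])  # 0.0: no derivation for uncle(0, 0)
```

Over real numbers the same einsum is an ordinary matrix product, which is the sense in which the two paradigms differ only in data type.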

The core insight of the talk is that existing AI toolkits either excel at differentiable computation (e.g., PyTorch, TensorFlow) or at logical inference (e.g., Prolog, Datalog), but lack a unified, efficient abstraction. Tensor Logic addresses three practical shortcomings: (1) a concise, human-readable syntax that compresses verbose einsum expressions; (2) a GPU-optimizable implementation that can execute the unified tensor-logic operations orders of magnitude faster than ad-hoc combinations; and (3) native support for predicate invention and differentiable reasoning, enabling models to discover new symbolic relations while being trained end-to-end. Domingos illustrates these points with a concrete example where a logical OR over Boolean variables is expressed as an einsum followed by a Heaviside step function, showing the equivalence of symbolic disjunction and numeric summation.
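The OR example can be reproduced as a minimal NumPy sketch (the function name is invented for illustration): summing the Boolean inputs with einsum and thresholding with the Heaviside step recovers logical disjunction.

```python
import numpy as np

def boolean_or(xs):
    # Sum the Boolean inputs with einsum, then apply the Heaviside
    # step: any nonzero count means "at least one input is true".
    return np.heaviside(np.einsum('i->', xs), 0.0)

print(boolean_or(np.array([0.0, 1.0, 0.0])))  # 1.0
print(boolean_or(np.array([0.0, 0.0, 0.0])))  # 0.0
```

Replacing the step function with the identity turns the same expression into an ordinary numeric sum, which is the equivalence the talk highlights.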

Notable quotes underscore the ambition: "A field cannot take off until it finds its language," and "Tensor Logic is the first language that has automated reasoning and auto-differentiation built in." Domingos also references historical parallels (Einstein's summation notation for relativity, Boolean logic for circuit design) to argue that Tensor Logic could become for AI what calculus is for physics. He acknowledges that no single language can solve every problem, yet contends that the tensor-equation abstraction captures the essential operations needed across the AI spectrum, making it a strong candidate for a universal AI substrate.

If Tensor Logic lives up to its promise, it could reshape how enterprises build AI systems, eliminating the brittle pipelines that stitch together separate symbolic and neural components. Companies would gain a single, mathematically grounded stack that ensures transparent reasoning (reducing hallucinations) while retaining the scalability of GPU-accelerated learning. This could accelerate adoption of trustworthy AI in regulated sectors such as finance and healthcare, where explainability and reliability are non-negotiable.

Original Description

Pedro Domingos, author of the bestselling book "The Master Algorithm," introduces his latest work: Tensor Logic - a new programming language he believes could become the fundamental language for artificial intelligence.
Think of it like this: Physics found its language in calculus. Circuit design found its language in Boolean logic. Pedro argues that AI has been missing its language - until now.
*SPONSOR MESSAGES START*
Build your ideas with AI Studio from Google - http://ai.studio/build
Prolific - Quality data. From real people. For faster breakthroughs.
cyber•Fund https://cyber.fund/?utm_source=mlst is a founder-led investment firm accelerating the cybernetic economy
*END*
Current AI is split between two worlds that don't play well together:
Deep Learning (neural networks, transformers, ChatGPT) - great at learning from data, terrible at logical reasoning
Symbolic AI (logic programming, expert systems) - great at logical reasoning, terrible at learning from messy real-world data
Tensor Logic unifies both. It's a single language where you can:
Write logical rules that the system can actually learn and modify
Do transparent, verifiable reasoning (no hallucinations)
Mix "fuzzy" analogical thinking with rock-solid deduction
The Killer Feature: The Temperature Knob
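One plausible reading of the temperature knob, sketched here with an ordinary softmax (the function name, scores, and mechanism are assumptions for illustration, not Tensor Logic's actual implementation): at low temperature the distribution over candidate conclusions collapses toward a single deterministic answer, deduction-like, while higher temperatures spread probability mass across analogous alternatives, analogy-like.

```python
import numpy as np

def soften(scores, temperature):
    # Softmax with a temperature parameter. Low temperature sharpens
    # the distribution toward the best-scoring conclusion; high
    # temperature flattens it across near-alternatives.
    z = np.asarray(scores, dtype=np.float64) / max(temperature, 1e-12)
    z -= z.max()                 # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

scores = [2.0, 1.0, 0.5]         # hypothetical match scores
print(soften(scores, 0.01))      # near one-hot: deduction-like
print(soften(scores, 5.0))       # near uniform: analogy-like
```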
Why Should You Care?
Pedro makes a provocative claim (01:24:50 → 01:27:47): "We've wasted trillions of dollars" on brute-force compute because we're ignoring 40 years of AI research. Companies are "reinventing reasoning" when they could just read a textbook and save billions.
TOC:
00:00:00 - Introduction
00:04:41 - What is Tensor Logic?
00:09:59 - Tensor Logic vs PyTorch & Einsum
00:17:50 - The Master Algorithm Connection
00:20:41 - Predicate Invention & Learning New Concepts
00:31:22 - Symmetries in AI & Physics
00:35:30 - Computational Reducibility & The Universe
00:43:34 - Technical Details: RNN Implementation
00:45:35 - Turing Completeness Debate
00:56:45 - Transformers vs Turing Machines
01:02:32 - Reasoning in Embedding Space
01:11:46 - Solving Hallucination with Deductive Modes
01:16:17 - Adoption Strategy & Migration Path
01:21:50 - AI Education & Abstraction
01:24:50 - The Trillion-Dollar Waste
REFS
Tensor Logic: The Language of AI [Pedro Domingos]
The Master Algorithm [Pedro Domingos]
Einsum Is All You Need [Tim Rocktäschel]
More Is Different [P. W. Anderson] (not "Ross"; we misremembered the name in the interview)
Autoregressive Large Language Models are Computationally Universal [Dale Schuurmans et al., Google DeepMind]
Memory Augmented Large Language Models are Computationally Universal [Dale Schuurmans]
On the Computational Power of Neural Nets [Siegelmann &amp; Sontag, 1995]
Sébastien Bubeck
I Am a Strange Loop [Douglas Hofstadter]
Stephen Wolfram
The Complex World: An Introduction to the Foundations of Complexity Science [David C. Krakauer]
Geometric Deep Learning
Andrew Wilson (NYU)
Yi Ma
The Road to Reality [Roger Penrose]
Artificial Intelligence: A Modern Approach [Russell and Norvig]
Best Moments:
01:01:50 → 01:02:15 [The Universal Induction Machine] - Pedro's quest for the "Turing Machine of Learning"
00:20:41 → 00:24:37 [Predicate Invention] - How the system learns to see "objects" instead of pixels, like humans do
01:12:55 → 01:13:30 [Why This Matters Now] - The hallucination problem explained
