AI Pulse

Tensor Logic "Unifies" AI Paradigms [Pedro Domingos]

December 7, 2025
Machine Learning Street Talk

Why It Matters

Tensor Logic could unify reasoning and learning in a single, GPU-optimizable language, dramatically lowering the engineering complexity and improving the reliability of AI systems for high-stakes enterprise applications.

Summary

Tensor Logic, introduced by Professor Pedro Domingos, is presented as a new programming language that unifies the disparate paradigms of artificial intelligence—symbolic reasoning, deep learning, kernel methods, and graphical models—under a single mathematical construct: the tensor equation. Domingos argues that the Einstein summation (einsum) operation, which underlies all tensor algebra in modern deep-learning frameworks, is mathematically identical to the rule-based inference mechanisms of logic programming, differing only in the data type (real numbers versus Booleans). By treating both numeric and symbolic computation as variations of the same tensor equation, Tensor Logic promises a seamless blend of automated reasoning and gradient-based learning.
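The claimed equivalence is easy to sketch in a few lines of numpy. Below is an illustration of the idea only, not Domingos's actual Tensor Logic syntax: a Datalog-style rule such as Grandparent(x, z) :- Parent(x, y), Parent(y, z) becomes an einsum over 0/1 tensors followed by a step function.

```python
import numpy as np

# Illustrative sketch (not Tensor Logic's actual syntax): the Datalog rule
#   Grandparent(x, z) :- Parent(x, y), Parent(y, z)
# expressed as an einsum over Boolean (0/1) tensors plus a step function.

# Parent relation over 4 people as a 0/1 matrix:
# person 0 is the parent of 1, 1 of 2, and 2 of 3.
parent = np.zeros((4, 4), dtype=np.int64)
parent[0, 1] = parent[1, 2] = parent[2, 3] = 1

# The logical join on the shared variable y is summation over the index y;
# the step function turns "number of derivations" back into a truth value.
counts = np.einsum("xy,yz->xz", parent, parent)
grandparent = (counts > 0).astype(np.int64)

print(grandparent[0, 2], grandparent[1, 3], grandparent[0, 3])  # 1 1 0
```

Over real-valued tensors the same einsum is just a matrix product inside a neural network, which is the point of the unification: one operation, two data types.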

The core insight of the talk is that existing AI toolkits either excel at differentiable computation (e.g., PyTorch, TensorFlow) or at logical inference (e.g., Prolog, Datalog), but lack a unified, efficient abstraction. Tensor Logic addresses three practical shortcomings: (1) a concise, human-readable syntax that compresses verbose einsum expressions; (2) a GPU-optimizable implementation that can execute the unified tensor-logic operations orders of magnitude faster than ad-hoc combinations; and (3) native support for predicate invention and differentiable reasoning, enabling models to discover new symbolic relations while being trained end-to-end. Domingos illustrates these points with a concrete example where a logical OR over Boolean variables is expressed as an einsum followed by a Heaviside step function, showing the equivalence of symbolic disjunction and numeric summation.
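The OR example from the talk can be reproduced directly; this is a minimal numpy sketch of the equivalence as described, not code from the paper.

```python
import numpy as np

# Minimal sketch (not the paper's code): logical OR over Booleans as an
# einsum (summation over the index i) followed by a Heaviside step.
def einsum_or(bits):
    total = np.einsum("i->", np.asarray(bits, dtype=np.int64))
    # Step function: one or more true inputs ("derivations") means True.
    return int(np.heaviside(total, 0))

print(einsum_or([0, 0, 0]), einsum_or([0, 1, 0]), einsum_or([1, 1, 1]))  # 0 1 1
```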

Notable quotes underscore the ambition: “A field cannot take off until it finds its language,” and “Tensor Logic is the first language that has automated reasoning and auto-differentiation built in.” Domingos also references historical parallels—Einstein’s summation notation for relativity and Boolean logic for circuit design—to argue that Tensor Logic could become for AI what calculus is for physics. He acknowledges that no single language can solve every problem, yet contends that the tensor-equation abstraction captures the essential operations needed across the AI spectrum, making it a strong candidate for a universal AI substrate.

If Tensor Logic lives up to its promise, it could reshape how enterprises build AI systems, eliminating the brittle pipelines that stitch together separate symbolic and neural components. Companies would gain a single, mathematically grounded stack that ensures transparent reasoning (reducing hallucinations) while retaining the scalability of GPU-accelerated learning. This could accelerate adoption of trustworthy AI in regulated sectors such as finance and healthcare, where explainability and reliability are non-negotiable.

Original Description

Pedro Domingos, author of the bestselling book "The Master Algorithm," introduces his latest work: Tensor Logic - a new programming language he believes could become the fundamental language for artificial intelligence.
Think of it like this: Physics found its language in calculus. Circuit design found its language in Boolean logic. Pedro argues that AI has been missing its language - until now.
Current AI is split between two worlds that don't play well together:
Deep Learning (neural networks, transformers, ChatGPT) - great at learning from data, terrible at logical reasoning
Symbolic AI (logic programming, expert systems) - great at logical reasoning, terrible at learning from messy real-world data
Tensor Logic unifies both. It's a single language where you can:
Write logical rules that the system can actually learn and modify
Do transparent, verifiable reasoning (no hallucinations)
Mix "fuzzy" analogical thinking with rock-solid deduction
The Killer Feature: The Temperature Knob
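The description does not spell the knob out, but the Tensor Logic paper describes a temperature parameter that interpolates between hard deduction (T = 0) and graded, analogy-like inference (T > 0), much as a tempered softmax collapses to argmax. A hedged sketch of that behavior, my analogy rather than code from the paper:

```python
import numpy as np

# Hedged illustration (my analogy, not the paper's code): a temperature T
# interpolating between graded, analogy-like weighting (T > 0) and hard,
# deductive selection (T -> 0), via a tempered softmax.
def tempered_weights(scores, T):
    s = np.asarray(scores, dtype=float)
    if T == 0:  # deductive limit: all mass on the maximum
        hard = (s == s.max()).astype(float)
        return hard / hard.sum()
    e = np.exp((s - s.max()) / T)  # subtract max for numerical stability
    return e / e.sum()

print(tempered_weights([2.0, 1.0, 0.5], 1.0))  # graded distribution
print(tempered_weights([2.0, 1.0, 0.5], 0))    # [1. 0. 0.]
```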
Why Should You Care?
Pedro makes a provocative claim (01:24:50 → 01:27:47): “We've wasted trillions of dollars” on brute-force compute because we're ignoring 40 years of AI research. Companies are "reinventing reasoning" when they could just read a textbook and save billions.
INTERACTIVE TRANSCRIPT:
https://app.rescript.info/public/share/NP4vZQ-GTETeN_roB2vg64vbEcN7isjJtz4C86WSOhw
TOC:
00:00:00 - Introduction
00:04:41 - What is Tensor Logic?
00:09:59 - Tensor Logic vs PyTorch & Einsum
00:17:50 - The Master Algorithm Connection
00:20:41 - Predicate Invention & Learning New Concepts
00:31:22 - Symmetries in AI & Physics
00:35:30 - Computational Reducibility & The Universe
00:43:34 - Technical Details: RNN Implementation
00:45:35 - Turing Completeness Debate
00:56:45 - Transformers vs Turing Machines
01:02:32 - Reasoning in Embedding Space
01:11:46 - Solving Hallucination with Deductive Modes
01:16:17 - Adoption Strategy & Migration Path
01:21:50 - AI Education & Abstraction
01:24:50 - The Trillion-Dollar Waste
REFS
Tensor Logic: The Language of AI [Pedro Domingos]
https://arxiv.org/abs/2510.12269
The Master Algorithm [Pedro Domingos]
https://www.amazon.co.uk/Master-Algorithm-Ultimate-Learning-Machine/dp/0241004543
Einsum is All You Need (Tim Rocktäschel)
https://rockt.ai/2018/04/30/einsum
https://www.youtube.com/watch?v=6DrCq8Ry2cw
More Is Different [P. W. Anderson] (we misremembered the name as "Ross" in the interview)
https://www.tkm.kit.edu/downloads/TKM1_2011_more_is_different_PWA.pdf
Autoregressive Large Language Models are Computationally Universal (Dale Schuurmans et al - GDM)
https://arxiv.org/abs/2410.03170
Memory Augmented Large Language Models are Computationally Universal [Dale Schuurmans]
https://arxiv.org/pdf/2301.04589
On the Computational Power of Neural Nets [Siegelmann & Sontag, 1995]
https://binds.cs.umass.edu/papers/1995_Siegelmann_JComSysSci.pdf
Sébastien Bubeck
https://www.reddit.com/r/OpenAI/comments/1oacp38/openai_researcher_sebastian_bubeck_falsely_claims/
I am a strange loop - Hofstadter
https://www.amazon.co.uk/Am-Strange-Loop-Douglas-Hofstadter/dp/0465030793
Stephen Wolfram
https://www.youtube.com/watch?v=dkpDjd2nHgo
The Complex World: An Introduction to the Foundations of Complexity Science [David C. Krakauer]
https://www.amazon.co.uk/Complex-World-Introduction-Foundations-Complexity/dp/1947864629
Geometric Deep Learning
https://www.youtube.com/watch?v=bIZB1hIJ4u8
Andrew Wilson (NYU)
https://www.youtube.com/watch?v=M-jTeBCEGHc
Yi Ma
https://www.patreon.com/posts/yi-ma-scientific-141953348
Roger Penrose - road to reality
https://www.amazon.co.uk/Road-Reality-Complete-Guide-Universe/dp/0099440687
Artificial Intelligence: A Modern Approach [Russell and Norvig]
https://www.amazon.co.uk/Artificial-Intelligence-Modern-Approach-Global/dp/1292153962
Best Moments:
01:01:50 → 01:02:15 [The Universal Induction Machine] - Pedro's quest for the "Turing Machine of Learning"
00:20:41 → 00:24:37 [Predicate Invention] - How the system learns to see "objects" instead of pixels, like humans do
01:12:55 → 01:13:30 [Why This Matters Now] - The hallucination problem explained