Tensor Logic “Unifies” AI Paradigms [Pedro Domingos]


Description:

Pedro Domingos, Professor of Computer Science at the University of Washington and author of the bestselling book “The Master Algorithm,” returns to Machine Learning Street Talk to discuss his latest research: a formal language called Tensor Logic. His central claim is that Tensor Logic provides the first unified representation spanning deep learning, symbolic AI, kernel machines, and graphical models — a goal Domingos has pursued since his PhD.

The core technical insight is that a neural network neuron and a logic programming rule are mathematically equivalent objects under Tensor Logic’s formulation. This equivalence allows a single system to perform both gradient-based learning and strict deductive reasoning, with a temperature parameter controlling the transition between probabilistic inference and exact logical reasoning. Domingos argues this property is essential for enterprise AI deployments where business rules, security constraints, and correctness guarantees must be enforced — something he contends current transformer architectures cannot reliably provide, even at temperature zero.
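The claimed neuron/rule correspondence can be illustrated with a toy sketch. This is not the actual Tensor Logic implementation; it is a minimal NumPy example, with an invented helper `rule_step`, showing the general idea described above: a Datalog-style rule such as `path(X,Z) :- edge(X,Y), edge(Y,Z)` becomes a tensor join (an einsum over the shared index) followed by a projection, and a temperature parameter interpolates between soft, sigmoid-like inference and exact logical deduction at temperature zero.

```python
# Illustrative sketch only -- not Domingos's implementation.
# A logic rule as a tensor operation: join on the shared variable Y
# via einsum, then project with a nonlinearity whose sharpness is
# controlled by a temperature parameter.
import numpy as np

def rule_step(edge, temperature):
    """One application of path(X,Z) :- edge(X,Y), edge(Y,Z)."""
    # Relational join over Y, expressed as an Einstein summation.
    scores = np.einsum('xy,yz->xz', edge, edge)
    if temperature == 0.0:
        # Exact deduction: a path exists iff at least one Y supports it.
        return (scores > 0).astype(float)
    # Soft inference: sigmoid with a threshold, sharpened as T -> 0.
    return 1.0 / (1.0 + np.exp(-(scores - 0.5) / temperature))

# Boolean adjacency matrix: edges 0 -> 1 and 1 -> 2.
edge = np.array([[0., 1., 0.],
                 [0., 0., 1.],
                 [0., 0., 0.]])

hard = rule_step(edge, temperature=0.0)   # hard[0, 2] == 1.0: path(0,2) derived
soft = rule_step(edge, temperature=0.1)   # soft[0, 2] close to 1, but graded
```

The same computation read one way is a logic rule firing over ground atoms, and read the other way is a linear layer (the einsum) followed by an activation (the thresholded sigmoid), which is the equivalence the interview centers on.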

The conversation also covers predicate invention — the system’s ability to discover new relational structures not present in training data — which Domingos describes as the “holy grail” of AI and a prerequisite for genuine generalization. He connects this to broader questions about universal induction, the limits of computational reducibility (with a pointed comparison to Stephen Wolfram’s framing), and what distinguishes abstraction from compression. The interview is a technically grounded perspective on the neurosymbolic frontier, valuable for researchers and practitioners tracking alternatives to pure scaling approaches.


📺 Source: Machine Learning Street Talk · Published December 07, 2025
🏷️ Format: Interview
