
Discusses AI safety, alignment, and existential risks with researchers and experts.
In this episode, Alex and guest Caspar Oesterheld explore the concept of program equilibrium—how game theory changes when agents are fully transparent computer programs that can read each other's source code. They discuss desiderata for robust program equilibria, compare proof‑based and simulation‑based approaches, and examine the efficiency and compatibility of ε‑grounded π‑bots and CooperateBot strategies. Caspar presents recent work on characterising simulation‑based equilibria and outlines open questions for future research, including how to design bots that cooperate without being exploitable.
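The ε‑grounded strategy mentioned above can be illustrated with a minimal sketch, loosely following Oesterheld's "Robust Program Equilibria" idea: with some small probability ε the bot "grounds out" by playing a fixed base move (here, cooperate); otherwise it simulates the opponent's program playing against itself and copies the result. The function names, the use of Python callables in place of actual source code, and the specific ε value are illustrative assumptions, not details from the episode.

```python
import random

# Illustrative sketch (not the episode's exact construction): programs are
# modeled as Python callables that receive the opponent's program as an
# argument, standing in for access to its source code.

EPSILON = 0.05  # grounding probability; illustrative value
COOPERATE, DEFECT = "C", "D"

def epsilon_grounded_bot(opponent):
    """With probability EPSILON, cooperate outright ("ground out").
    Otherwise, simulate the opponent playing against this very program
    and copy its move. Grounding ensures mutual simulation between two
    such bots terminates with probability 1 (expected depth 1/EPSILON)."""
    if random.random() < EPSILON:
        return COOPERATE
    return opponent(epsilon_grounded_bot)

def cooperate_bot(opponent):
    # CooperateBot: unconditionally cooperates, ignoring the opponent.
    return COOPERATE

def defect_bot(opponent):
    # Unconditional defector, for contrast.
    return DEFECT
```

Against itself, the ε‑grounded bot cooperates with probability 1, since every simulation chain eventually grounds out in cooperation; against an unconditional defector it defects except on the ε‑probability grounding branch, which is the sense in which such bots cooperate while remaining hard to exploit.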
In this episode, Guive Assadi makes the case for granting AI systems property rights, arguing that embedding AIs within our property framework would give them a vested interest in respecting human ownership and avoiding theft or violence. He explores how...
In this episode, the host interviews Adam Shai and Paul Riechers about applying computational mechanics—a subfield of physics concerned with predicting random processes—to understand and scale transformer models. They explain how computational mechanics differs from other approaches and describe the fractal geometry of belief‑state...
The post announces the launch of new Patreon tiers for the AI X‑risk Research Podcast, outlining the added benefits and how listeners can support the show. It also highlights the MATS (ML Alignment & Theory Scholars) application process, encouraging...