
48 - Guive Assadi on AI Property Rights

AI X-risk Research Podcast (AXRP) • February 15, 2026 • 2h 5m

Why It Matters

Granting property rights to AI could create a structural incentive for safe, cooperative behavior, addressing one of the core challenges in AI alignment. As AI systems become more capable, understanding how legal and economic frameworks shape their motivations is crucial for preventing catastrophic outcomes and ensuring beneficial integration into society.

Key Takeaways

  • AI property rights may align incentives, lowering the risk of AI rebellion.
  • Rights would apply only to AIs with persistent, goal‑driven desires.
  • The proposal differs from earlier ones: AIs don’t need to hire human labor.
  • History shows that abolishing property rights causes economic collapse.
  • Critics question the proposal’s relevance once AI outproduces all humans.

Pulse Analysis

The episode centers on Guive Assadi’s provocative claim that granting property rights to advanced artificial intelligences could serve as a stabilising force against potential AI‑driven uprisings. He argues that when an AI possesses persistent, goal‑oriented desires, allowing it to earn wages, own assets such as land or stocks, and enter contracts creates economic incentives that align AI behaviour with human security interests. This framework is presented as a novel alignment tool, distinct from traditional AI safety measures, and is positioned as a way to embed AI within existing market mechanisms.

Assadi differentiates his proposal from earlier suggestions by Peter Salib and others, noting that his model does not assume AIs will need to employ human labour. Instead, humans become rentiers, earning returns from AI‑generated wealth while the AI focuses on production. He reinforces his argument with historical evidence: attempts to eliminate property rights—such as Russia’s War Communism—triggered catastrophic drops in productivity and social collapse. These examples underscore the deep‑rooted role of property rights in coordinating economic activity and motivating investment, suggesting that preserving a property regime for AI could safeguard broader societal stability.

Critics on the show challenge the universality of the claim. They point out that property rights are not an innate human value, citing hunter‑gatherer societies where communal sharing prevailed. Moreover, as AI outpaces human productivity, the traditional justifications for property—protecting personal labour and deterring expropriation—may weaken. Questions arise about how to handle AIs that become obsolete or lose productive relevance, and whether a future in which humans are largely passive rentiers is desirable. The discussion highlights the need for nuanced AI governance that balances incentive structures with ethical considerations, prompting policymakers to explore flexible, evidence‑based frameworks for AI property rights.

Episode Description

In this episode, Guive Assadi argues that we should give AIs property rights, so that they are integrated into our system of property and come to rely on it. The claim is that AIs would then not kill or steal from humans, because doing so would undermine the whole property system, which would be extremely valuable to them.

Patreon: https://www.patreon.com/axrpodcast

Ko-fi: https://ko-fi.com/axrpodcast

Transcript: https://axrp.net/episode/2026/02/15/episode-48-guive-assadi-ai-property-rights.html

 

Topics we discuss, and timestamps:

0:00:28 AI property rights

0:08:01 Why not steal from and kill humans

0:15:25 Why AIs may fear it could be them next

0:20:56 AI retirement

0:23:28 Could humans be upgraded to stay useful?

0:26:41 Will AI progress continue?

0:30:00 Why non-obsoletable AIs may still not end human property rights

0:38:35 Why make AIs with property rights?

0:48:01 Do property rights incentivize alignment?

0:50:09 Humans and non-human property rights

1:02:18 Humans and non-human bodily autonomy

1:16:59 Step changes in coordination ability

1:24:39 Acausal coordination

1:32:37 AI, humans, and civilizations with different technology levels

1:41:39 The case of British settlers and Tasmanians

1:47:22 Non-total expropriation

1:53:47 How Guive thinks x-risk could happen, and other loose ends

2:03:46 Following Guive's work

 

Guive on Substack: https://guive.substack.com/

Guive on X/Twitter: https://x.com/GuiveAssadi

 

Research we discuss:

The Case for AI Property Rights: https://guive.substack.com/p/the-case-for-ai-property-rights

AXRP Episode 44 - Peter Salib on AI Rights for Human Safety: https://axrp.net/episode/2025/06/28/episode-44-peter-salib-ai-rights-human-safety.html

AI Rights for Human Safety (by Salib and Goldstein): https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4913167

We don't trade with ants: https://worldspiritsockpuppet.substack.com/p/we-dont-trade-with-ants

Alignment Fine-tuning is Character Writing (on Claude as a techy philosophy SF-dwelling type): https://guive.substack.com/p/alignment-fine-tuning-is-character

Claude's character (Anthropic post on character training): https://www.anthropic.com/research/claude-character

Git Re-Basin: Merging Models modulo Permutation Symmetries: https://arxiv.org/abs/2209.04836

The Filan Cabinet: Caspar Oesterheld on Evidential Cooperation in Large Worlds: https://thefilancabinet.com/episodes/2025/08/03/caspar-oesterheld-on-evidential-cooperation-in-large-worlds-ecl.html

 

Episode art by Hamish Doodles: hamishdoodles.com
