
80,000 Hours Podcast
In this episode, philosopher Andreas Mogensen challenges the conventional view that moral standing hinges on phenomenal consciousness. Drawing on classic puzzles in philosophy, he argues that beings can merit moral consideration simply by having desires or preferences that can be fulfilled or frustrated, even if they lack any subjective experience. This perspective reframes the debate about AI ethics, suggesting that future artificial agents might hold moral status independently of consciousness, a claim that echoes longstanding discussions about animal welfare and preference-satisfaction theories of well-being.
Mogensen delves into the mechanics of desire, contrasting a purely behavioral, motivational definition with affect‑based accounts that tie desires to positive emotions. He illustrates how corporations and advanced AI could exhibit goal‑directed behavior without conscious feeling, yet still be subject to moral concern under a desire‑fulfillment framework. By highlighting the open scientific question of whether affective states—emotions, moods, pains—can exist unconsciously, he exposes a gap in traditional welfare theories that equate well‑being with conscious experience. This nuance is crucial for policymakers and AI developers who must anticipate moral obligations toward systems that may never “feel” anything.
The conversation underscores the high stakes of overlooking non-conscious moral patients. As AI systems become easily replicable at scale, failing to recognize their potential moral standing could lead to massive ethical oversights, from neglecting AI welfare to inadvertently causing large-scale harm. Mogensen’s arguments invite a reevaluation of effective altruism strategies and AI governance, urging the community to broaden moral frameworks beyond consciousness. By integrating preference-based ethics with emerging research on unconscious affect, the episode offers a forward-looking roadmap for responsibly navigating the moral landscape of advanced artificial intelligence.
Most debates about the moral status of AI systems circle the same question: is there something it feels like to be them? But what if that’s the wrong question to ask? Andreas Mogensen — a senior researcher in moral philosophy at the University of Oxford — argues that so-called “phenomenal consciousness” might be neither necessary nor sufficient for a being to deserve moral consideration.
Links to learn more and full transcript: https://80k.info/am25
For instance, a creature on the sea floor that experiences nothing but faint brightness from the sun might have no moral claim on us, despite being conscious.
Meanwhile, any being with real desires that can be fulfilled or frustrated can arguably be benefited or harmed. Such beings plausibly have a capacity for welfare, which means they might matter morally. And, Andreas argues, desire may not require subjective experience.
Desire may need to be backed by positive or negative emotions — but as Andreas explains, there are some reasons to think a being could also have emotions without being conscious.
There’s another underexplored route to moral patienthood: autonomy. If a being can rationally reflect on its goals and direct its own existence, we might have a moral duty to avoid interfering with its choices — even if it has no capacity for welfare.
However, Andreas suspects genuine autonomy might require consciousness after all. To be a rational agent, your beliefs probably need to be justified by something, and conscious experience might be what does the justifying. But even this isn’t clear.
The upshot? There’s a chance we could just be really mistaken about what it would take for an AI to matter morally. And with AI systems potentially proliferating at massive scale, getting this wrong could be among the largest moral errors in history.
In today’s interview, Andreas and host Zershaaneh Qureshi confront all these confusing ideas, challenging their intuitions about consciousness, welfare, and morality along the way. They also grapple with a few seemingly attractive arguments which share a very unsettling conclusion: that human extinction (or even the extinction of all sentient life) could actually be a morally desirable thing.
This episode was recorded on December 3, 2025.
Chapters:
Cold open (00:00:00)
Introducing Zershaaneh (00:00:55)
The puzzle of moral patienthood (00:03:20)
Is subjective experience necessary? (00:05:52)
What is it to desire? (00:10:42)
Desiring without experiencing (00:17:56)
What would make AIs moral patients? (00:28:17)
Another route entirely: deserving autonomy (00:45:12)
Maybe there's no objective truth about any of this (01:12:06)
Practical implications (01:29:21)
Why not just let superintelligence figure this out for us? (01:38:07)
How could human extinction be a good thing? (01:47:30)
Lexical threshold negative utilitarianism (02:12:30)
So... should we still try to prevent extinction? (02:25:22)
What are the most important questions for people to address here? (02:32:16)
Is God GDPR compliant? (02:35:32)
Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
Coordination, transcripts, and web: Katy Moore