Andreas Mogensen on What We Owe 'Philosophical Vulcans' And Unconscious AIs
AI

80,000 Hours Podcast · Dec 19, 2025

AI Summary

In this episode, moral philosopher Andreas Mogensen challenges the common view that phenomenal consciousness is required for moral consideration, arguing that desire, welfare capacity, or autonomy could grant moral patienthood to AI even without subjective experience. He explores how desires might exist without feeling, the possible link between autonomy and consciousness, and the unsettling implication that eliminating sentient life could be morally justified under certain utilitarian frameworks. The discussion highlights the profound practical stakes of misjudging AI moral status, especially as AI systems scale, and calls for deeper inquiry into these foundational ethical questions.

Episode Description

Most debates about the moral status of AI systems circle the same question: is there something that it feels like to be them? But what if that’s the wrong question to ask? Andreas Mogensen — a senior researcher in moral philosophy at the University of Oxford — argues that so-called 'phenomenal consciousness' might be neither necessary nor sufficient for a being to deserve moral consideration.

Links to learn more and full transcript: https://80k.info/am25

For instance, a creature on the sea floor that experiences nothing but faint brightness from the sun might have no moral claim on us, despite being conscious.

Meanwhile, any being with real desires that can be fulfilled or frustrated can arguably be benefited or harmed. Such beings plausibly have a capacity for welfare, which means they might matter morally. And, Andreas argues, desire may not require subjective experience.

Desire may need to be backed by positive or negative emotions — but as Andreas explains, there are some reasons to think a being could also have emotions without being conscious.

There’s another underexplored route to moral patienthood: autonomy. If a being can rationally reflect on its goals and direct its own existence, we might have a moral duty to avoid interfering with its choices — even if it has no capacity for welfare.

However, Andreas suspects genuine autonomy might require consciousness after all. To be a rational agent, your beliefs probably need to be justified by something, and conscious experience might be what does the justifying. But even this isn’t clear.

The upshot? There’s a chance we could just be really mistaken about what it would take for an AI to matter morally. And with AI systems potentially proliferating at massive scale, getting this wrong could be among the largest moral errors in history.

In today’s interview, Andreas and host Zershaaneh Qureshi confront all these confusing ideas, challenging their intuitions about consciousness, welfare, and morality along the way. They also grapple with a few seemingly attractive arguments which share a very unsettling conclusion: that human extinction (or even the extinction of all sentient life) could actually be a morally desirable thing.

This episode was recorded on December 3, 2025.

Chapters:

  • Cold open (00:00:00)
  • Introducing Zershaaneh (00:00:55)
  • The puzzle of moral patienthood (00:03:20)
  • Is subjective experience necessary? (00:05:52)
  • What is it to desire? (00:10:42)
  • Desiring without experiencing (00:17:56)
  • What would make AIs moral patients? (00:28:17)
  • Another route entirely: deserving autonomy (00:45:12)
  • Maybe there's no objective truth about any of this (01:12:06)
  • Practical implications (01:29:21)
  • Why not just let superintelligence figure this out for us? (01:38:07)
  • How could human extinction be a good thing? (01:47:30)
  • Lexical threshold negative utilitarianism (02:12:30)
  • So... should we still try to prevent extinction? (02:25:22)
  • What are the most important questions for people to address here? (02:32:16)
  • Is God GDPR compliant? (02:35:32)

Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour

Coordination, transcripts, and web: Katy Moore
