
This paper was published as a GPI working paper in January 2025.

Abstract

I argue for a pluralist theory of moral standing, on which both welfare subjectivity and autonomy can confer moral status. I argue that autonomy doesn’t entail welfare subjectivity, but can ground moral standing in its absence. Although I highlight the existence of plausible views on which autonomy entails phenomenal consciousness, I primarily emphasize the need for philosophical debates about the relationship between phenomenal consciousness and moral standing to engage with neglected questions about the nature of autonomy and its possible links to consciousness, especially if we’re to face up to the ethical challenges future AI systems may pose.

Introduction

Some things matter morally in their own right and for their own sake. That includes you and me. We aren’t mere things. We merit concern and respect. If you prick us, not only do we bleed; we are wronged. We have moral standing (or moral status[1]).

What does it take to have moral standing? Many think the capacity for phenomenal consciousness is necessary (Singer 1993; Korsgaard 2018; Nussbaum 2022). To have moral standing, they think, there needs to be something it’s like to be you. However, not everyone is convinced, and the dissenters appear to be growing their ranks (Levy 2014b; Kagan 2019; Bradford 2022; Shepherd 2024). Who is in the right?

Most of the recent discussion of this issue focuses on the relationship between phenomenal consciousness and being a welfare subject. Roughly speaking, a welfare subject is a being whose life can go better or worse for them. It’s plausible that being a welfare subject is sufficient for moral standing. Is it also necessary? If not, we might be missing an important piece of the puzzle.

On its face, not all the obligations we owe to other people aim at promoting their welfare. Quinn (1984) distinguishes between the morality of respect and the morality of humanity. These aren’t rival moral theories. Instead, they are partially overlapping systems of obligation that respond to different morally significant properties of humans and non-human animals. In this context, ‘humanity’ denotes the virtue of being humane or beneficent. The morality of humanity is thus concerned with promoting others’ welfare; not just human welfare, but welfare more broadly. The morality of respect instead involves constraints on our behaviour that stem from recognition of the authority of rational agents to direct their own lives, even if they do so imprudently.

Suppose we grant that there are these two distinct dimensions to morality. It’s plausible that there are morally statused beings, including many non-human animals, who fall outside the scope of the morality of respect and are protected only by the morality of humanity (Quinn 1984: 51; McMahan 2002: 245–246). Are there also metaphysically and/or nomologically possible beings who fall outside the scope of the morality of humanity and are protected only by the morality of respect – individuals whose autonomy merits respect but who are not welfare subjects? If so, what does this imply about the relationship between moral standing and consciousness?

These are the questions I’ll address in this paper, arguing that there are indeed possible individuals who are protected only by the morality of respect, and exploring the potential implications for what we should think about the link between consciousness and moral status. Here’s the plan. In section 2, I outline a collection of conditions that I take to be jointly sufficient for an agent to be autonomous. In section 3, I argue that being a welfare subject isn’t necessary to satisfy those conditions. In section 4, I argue that welfare subjectivity is also unnecessary for someone’s autonomy to merit respect. In section 5, I outline the potential implications of my argument for the relationship between phenomenal consciousness and moral standing. Finally, in section 6, I summarize my conclusions and discuss their practical significance.

Read the rest of the paper

  1. I use these terms interchangeably.

Comments



Executive summary: Andreas Mogensen argues for a pluralist theory of moral standing based on welfare subjectivity and autonomy, challenging the necessity of phenomenal consciousness for moral status.

Key points:

  1. Mogensen introduces a pluralist theory on which either welfare subjectivity or autonomy can confer moral standing, each independently of the other.
  2. He questions the conventional belief that phenomenal consciousness is necessary for moral standing, introducing autonomy as an alternative ground.
  3. The paper distinguishes between the morality of respect and the morality of humanity, highlighting their relevance to different beings.
  4. It explores the possibility that certain beings could be governed solely by the morality of respect without being welfare subjects.
  5. Mogensen outlines conditions for autonomy that do not require welfare subjectivity, suggesting that autonomy alone can merit moral respect.
  6. The paper discusses the implications of this theory for the ethical challenges future AI systems may pose, stressing the need to revisit the relationship between consciousness and moral standing.

 

 This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
