
Key Takeaways

Here are the key takeaways from the full report:

  1. Based on the split-brain condition in humans, some people have wondered whether some humans “house” multiple subjects.
  2. Based on superficial parallels between the split-brain condition and the apparent neurological structures of some animals—such as chickens and octopuses—some people have wondered whether those animals house multiple subjects too.
  3. To assign a non-negligible credence to this possibility, we’d need evidence that parts of these animals aren't just conscious, but that they have valenced conscious states (like pain), as that’s what matters morally (given our project’s assumptions).
  4. This evidence is difficult to get:
    1. The human case shows that unconscious mentality is powerful, so we can’t infer consciousness from many behaviors.
    2. Even when we can infer consciousness, we can’t necessarily infer a separate subject. After all, there are plausible interpretations of split-brain cases on which there are not separate subjects.
    3. Even if there are multiple subjects housed in an organism in some circumstances, it doesn’t follow that there are always multiple subjects. These additional subjects may only be generated in contexts that are irrelevant for practical purposes.
  5. If we don’t have any evidence that parts of these animals are conscious or that they have valenced conscious states, then insofar as we’re committed to having an empirically-driven approach to counting subjects, we shouldn’t postulate multiple subjects in these cases.
  6. That being said, the author is inclined to place up to a 0.1 credence that there are multiple subjects in the split-brain case, but no higher than 0.025 for the 1+8 model of octopuses. 

 

Introduction

This is the sixth post in the Moral Weight Project Sequence. The aim of the sequence is to provide an overview of the research that Rethink Priorities conducted between May 2021 and October 2022 on interspecific cause prioritization—i.e., making resource allocation decisions across species. The aim of this post, which was written by Joe Gottlieb, is to summarize his full report on phenomenal unity and cause prioritization, which explores whether, for certain species, there are empirical reasons to posit multiple welfare subjects per organism. That report is available here.

Motivations and the Bottom Line

We normally assume that there is one conscious subject—or one entity who undergoes conscious experiences—per conscious animal. But perhaps this isn’t always true: perhaps some animals ‘house’ more than one conscious subject. If those subjects are also welfare subjects—beings with the ability to accrue welfare goods and bads—then this might matter when trying to determine whether we are allocating resources in a way that maximizes expected welfare gained per dollar spent. When we theorize about these animals’ capacity for welfare, we would no longer be theorizing about a single welfare subject, but multiple such subjects.[1]

In humans, people have speculated about this possibility based on “split-brain” cases, where the corpus callosum has been wholly or partially severed (e.g., Bayne 2010; Schechter 2018). Some non-human animals, like birds, approximate the split-brain condition as the norm, and others, like the octopus, exhibit a striking lack of integration and highly decentralized nervous systems, with surprising levels of peripheral autonomy. And in the case of the octopus, Peter Godfrey-Smith suggests that “[w]e should…at least consider the possibility that an octopus is a being with multiple selves”, one for the central brain and one for each arm (2020: 148; cf. Carls-Diamante 2017, 2019, 2022).

What follows is a high-level summary of my full report on this topic, focusing on Octopodidae, as that’s the family for which we have the best evidence for multiple subjects per organism.[2] In assessing this possibility, I make three key assumptions:

  • Experiential hedonism: an entity can accrue welfare goods and bads if and only if it can undergo positively or negatively valenced conscious mental states.
  • Mental states can be unconscious: most, if not all, conscious mental states have unconscious mental counterparts. Moreover, many sophisticated behaviors are caused by unconscious states, and even if caused by conscious states, they are not always caused by those states in virtue of being conscious. Unconscious mentality is quite powerful and routinely underestimated. 
  • Default to One Subject Assumption: we begin by provisionally assuming that there is only one subject per animal, per organism, etc. Thus, absent sufficiently robust positive evidence against this default assumption, we should continue to assume that there is one subject per octopus. 

With these assumptions in mind, there are two hypotheses of interest:

  • The Action-Relevant Hypothesis: The default condition for members of Octopodidae is that they house up to 9 welfare subjects, such that for any harm or benefit to any token octopus, we get a 9x impact in expectation. 
  • The Non-Action-Relevant Hypothesis: There are some rare contexts—when all arms are amputated, or when the brain is not exerting central control over the arms—where members of Octopodidae either house up to 9 welfare subjects or can ‘splinter’ into 9 welfare subjects, such that for any harm or benefit to any token octopus, we get a 9x impact in expectation. 

The bottom line is that, based on the arguments I discuss at length in the full report, I assign a credence of 0.025 to the Action-Relevant Hypothesis and a credence of 0.035 to the Non-Action-Relevant Hypothesis.
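To make these numbers concrete, here is a rough illustration of my own, under assumptions not drawn from the report: the full 9 subjects, with harms and benefits that simply add across them. Letting c be the credence in the Action-Relevant Hypothesis, the expected welfare multiplier for any harm or benefit to a token octopus would be:

E[multiplier] = c × 9 + (1 − c) × 1 = 0.025 × 9 + 0.975 × 1 = 1.2

So even a credence as low as 0.025 would, on these assumptions, inflate the expected impact of helping or harming an octopus by about 20%.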

Four Questions about There Being Multiple Subjects Per Octopus 

In the academic philosophical literature, there is only one clear endorsement of, and extended argument for, the claim that there can be more than one subject per animal: namely, Schechter’s (2018) examination of the split-brain condition. However, her arguments do not readily carry over to octopuses. There are several reasons for this, but the most relevant is this: Schechter’s case starts from the claim that both the right and left hemispheric systems in humans can independently support conscious experience. Then, she infers that the reason these experiences are not part of a single phenomenally unified experiential perspective is that they fail to be access unified.[3] Since subjects, according to Schechter, are individuated by experiential perspectives, it follows that split-brain patients house two subjects. 

This argument makes a highly contentious assumption: namely, that failures of access unity entail failures of phenomenal unity.[4] Even if we grant it, we can’t make an analogous starting assumption to the effect that each octopus arm is, on its own, sufficient for consciousness.[5] The question—or at least one of our questions—is whether this assumption is true. 

So, we can split our task into four questions, where a ‘yes’ answer to each subsequent one is predicated on a ‘yes’ to the prior question:

  1. The Mind Question: Do octopuses generally house 9 minded subjects (or at least more than one minded subject)?
  2. The Conscious Mind Question: Do octopuses generally house 9 consciously minded subjects—that is, 9 subjects capable of being in conscious mental states (or at least more than one consciously minded subject)?
  3. The Welfare Subject Question: Do octopuses generally house 9 conscious, affectively minded subjects—that is, 9 subjects capable of being in conscious affective mental states (or at least more than one conscious, affectively minded subject)?
  4. The Correlation Question: If octopuses generally house more than one conscious, affectively minded subject, are the harms and benefits of those subjects correlated, such that harming one subject affects the welfare of other subjects housed in the same organism?

Q3 and Q4, of course, are what ultimately matter—but we have to get through Q1 and Q2 to get to them. My take is that there’s some evidence for a ‘yes’ answer to Q1, but no evidence for a ‘yes’ answer to Q2 or Q3. So, Q4 doesn’t even come up.
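Before turning to each question, it may help to make this structure explicit (a schematic of my own; the report doesn’t assign step-by-step numbers). Since each question presupposes a ‘yes’ to its predecessor, the overall credence that octopuses house multiple welfare subjects is bounded by the product of the conditional credences at each step:

P(multiple welfare subjects) ≤ P(Q1) × P(Q2 | Q1) × P(Q3 | Q1 & Q2)

Even generous step credences of 0.3 apiece would compound to 0.027 overall, which is why the evidence has to clear each hurdle in turn.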

Question 1: The Mind Question

Carls-Diamante (2019) provides the most sustained case that each arm constitutes an independent cognitive system, i.e., not merely a cognitive subsystem. If we let each independent cognitive system count as a subject, it follows that each arm constitutes a subject. Carls-Diamante’s case hinges largely on the cognitive role of the arms in storing and carrying out stereotypic motor programs (as seen in fetching), along with their functional autonomy and self-sufficiency, illustrated by the sparse connections between them and the central components of the octopus nervous system. This autonomy is most striking in cases of amputation, where the arms retain much of their sensorimotor control and processing functions, including grasping behavior elicited by tactile stimulation of the suckers (Rowell 1963).

But there are at least two problems with Carls-Diamante’s position. First, by her own lights, she’s working with a “relaxed” (2019: 465) stance on what counts as cognition, on which sensorimotor coordination is, at best, the most “rudimentary” form of cognition. This is a controversial and deflationary interpretation of cognition. That’s fine in itself, but it significantly weakens the inference to consciousness. Second, if we only have multiple subjects when the octopus arms are amputated, as Carls-Diamante suggests (2019: 478), then we might get a case for conscious arms, but it will be a case that’s basically irrelevant to the Action-Relevant Hypothesis, since arms usually aren’t amputated prior to death.[6] 

Question 2: The Conscious Mind Question

There’s little doubt that whole octopuses are conscious (Mather 2008). But the animal’s being conscious doesn’t imply that each octopus arm is individually conscious. If we grant that each arm is a subject (because it constitutes an independent cognitive system), then we can ask whether the states of those arm-based subjects are conscious in much the same way as we would ask this question of anything else.

How do we assess consciousness? Typically, we either look for proxies for consciousness, such as exhibiting trace conditioning (Birch 2022), or we reason from a theory. Either way, there doesn’t seem to be any positive evidence for thinking that the octopus arms are conscious. Sensory-motor processing isn’t necessarily conscious, we have no evidence that the arm-based systems have a global workspace or are capable of higher-order representation, and the arms don’t exhibit trace-conditioning, rapid-reversal learning, or anything else that might serve as a positive marker. Thus, given that Mental States can be Unconscious and the Default to One Subject Assumption, there is no reason to think that the octopus houses more than one conscious subject.[7]

Question 3: The Welfare Subject Question

Suppose that each arm-based system has its own conscious states. This does not mean that these states are affective. Given experiential hedonism, having conscious affective states is a necessary (and sufficient) condition for these systems to constitute welfare subjects. But whether the arms have conscious affective states depends on the kinds of states they have, which even those sympathetic to there being multiple subjects, like Carls-Diamante (2017: 1279), take to be quite limited. Of course, an octopus arm-based subject only needs to instantiate one kind of affective state to be a welfare subject. But we have no evidence that arm-based subjects can feel sad or happy or undergo prolonged bouts of depression, nor even that they can be in pain. Now again—keeping with a common refrain—we do have evidence that octopuses can be in pain. For example, in Crook’s (2021) study of responses to injury in the pygmy octopus (Octopus bocki), octopuses directed grooming at the site of an acetic acid injection, and that grooming ceased upon application of lidocaine. In addition, the octopuses preferred chambers in which they had been placed after being given lidocaine over chambers in which they were given the initial injection. This could be evidence of valenced pain. However, Crook (2021) is clear that noxious sensory information is not processed in the arms, but in the central brain. This suggests that when acetic acid is injected into one of the arms, pain may be felt in the arm but not by the arm.

On the other hand, in Rowell’s (1963) experiments on amputated octopus arms, pricking an amputated arm with a pin resulted in flinching of the skin and the arm moving away from the direction of the stimulus. Does this suggest that the arm-based systems are in conscious pain? There are three points to make here. First, given that this involves amputated arms, there is again the question of whether this speaks to the Action-Relevant or the Non-Action-Relevant Hypothesis. Second, it isn’t obvious that such behavior marks valenced pain rather than (mere) pain, since we have evidence, from pain asymbolia (Bain 2014) as well as more mundane cases, that pain is not necessarily painful. Valence requires more than mere withdrawal and reactive behaviors (Shevlin 2021).[8]

Finally, while this behavior can be caused by conscious pain states, as before, this doesn’t mean that such states cause such behaviors in virtue of being conscious. Indeed, we have evidence that withdrawal behavior is frequently unconscious. Noxious stimulation can cause humans in vegetative states to yell, withdraw, or display ‘pained’ facial expressions (Laureys 2007). In addition, the lower limbs of complete spinal cord patients, in which the patients cannot feel anything, still exhibit the withdrawal flexion reflex (Dimitrijević & Nathan 1968; Key 2016: 4).[9]

Conclusion

The upshot here is straightforward. We don’t seem to have a good reason to suppose that independent octopus arms are conscious subjects, much less welfare subjects. And if they aren’t, then we should assign very low credences to the key hypotheses:

  • The Action-Relevant Hypothesis: The default condition for members of Octopodidae is that they house up to 9 welfare subjects, such that for any harm or benefit to any token octopus, we get a 9x impact in expectation. 
  • The Non-Action-Relevant Hypothesis: There are some rare contexts—when all arms are amputated, or when the brain is not exerting central control over the arms—where members of Octopodidae either house up to 9 welfare subjects or can ‘splinter’ into 9 welfare subjects, such that for any harm or benefit to any token octopus, we get a 9x impact in expectation. 

In the full report, I argue for all this in more detail; I also make the same point about chickens. Whatever the intuitive appeal of the possibility of there being multiple subjects per organism in these cases, that possibility probably isn’t the way things are.

 

Acknowledgments


This research is a project of Rethink Priorities. It was written by Joe Gottlieb. Thanks to Bob Fischer for helpful feedback on this research. If you’re interested in RP’s work, you can learn more by visiting our research database. For regular updates, please consider subscribing to our newsletter.

 

References

Bain, D. (2014). Pains that don’t hurt. Australasian Journal of Philosophy, 92(2), 305-320. https://doi.org/10.1080/00048402.2013.822399

Bayne, T. (2010). The Unity of Consciousness. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199215386.001.0001

Birch, J. (2022). The search for invertebrate consciousness. Noûs, 56(1), 133-153.

Block, N. (2007). Consciousness, accessibility, and the mesh between psychology and neuroscience. Behavioral and Brain Sciences, 30(5-6), 481-499. https://doi.org/10.1017/S0140525X07002786

Bublitz, A., Dehnhardt, G., & Hanke, F. D. (2021). Reversal of a spatial discrimination task in the common octopus (Octopus vulgaris). Frontiers in Behavioral Neuroscience, 15, 614523. https://doi.org/10.3389/fnbeh.2021.614523

Carls-Diamante, S. (2017). The octopus and the unity of consciousness. Biology and Philosophy, 32, 1269-1287. https://doi.org/10.1007/s10539-017-9604-0

Carls-Diamante, S. (2019). Out on a limb? On multiple cognitive systems within the octopus nervous system. Philosophical Psychology, 32(4), 463–482. https://doi.org/10.1080/09515089.2019.1585797

Carls-Diamante, S. (2022). Where is it like to be an octopus? Frontiers in Systems Neuroscience, 16.

Carruthers, P. (2018). Valence and Value. Philosophy and Phenomenological Research, 97, 658-680. https://doi.org/10.1111/phpr.12395

Crook, R. J. (2021). Behavioral and neurophysiological evidence suggests affective pain experience in octopus. iScience, 24(3), 102229.

Dimitrijević, M. R., & Nathan, P. W. (1968). Studies of spasticity in man: Analysis of reflex activity evoked by noxious cutaneous stimulation. Brain, 91(2), 349–368. https://doi.org/10.1093/brain/91.2.349

Dung, L. (2022). Assessing tests of animal consciousness. Consciousness and Cognition, 105, 103410.

Godfrey-Smith, P. (2020). Metazoa: Animal life and the birth of the mind. Farrar, Straus and Giroux.

Irvine, E. (2020). Developing Valid Behavioral Indicators of Animal Pain. Philosophical Topics, 48(1), 129–153. https://doi.org/10.5840/philtopics20204817

Key, B. (2016). Why fish do not feel pain. Animal Sentience, 1(3).

Laureys, S. (2007). Eyes Open, Brain Shut. Scientific American, 296(5), 84–89. https://doi.org/10.1038/scientificamerican0507-84

Marks, C. (1980). Commissurotomy, Consciousness, and Unity of Mind. MIT Press.

Mather, J. (2008). Cephalopod consciousness: Behavioural evidence. Consciousness and Cognition, 17, 37–48.

Rowell, C. H. F. (1963). Excitatory and inhibitory pathways in the arm of octopus. Journal of Experimental Biology, 40, 257–270.

Schechter, E. (2018). Self-Consciousness and “Split” Brains: The Minds’ I. Oxford University Press.

Shevlin, H. (2021). Negative valence, felt unpleasantness, and animal welfare. https://henryshevlin.com/wp-content/uploads/2021/11/Felt-unpleasantness.pdf

Tye, M. (2003). Consciousness and Persons: Unity and Identity. MIT Press.
 

Notes

[1] This, of course, is setting aside the welfare ranges for each of these constituent subjects; it could be that, within an individual animal, while there are multiple subjects, not all subjects have the same welfare ranges, with some being far narrower than others. 

[2] The full report also includes extensive discussion of the split-brain condition in humans, along with a discussion of whether members of Gallus gallus domesticus house more than one welfare subject. 

[3] Two experiences E1 and E2 are phenomenally unified when there is something it is like to have E1 and E2 together in a way that is not merely conjunctive. Two experiences E1 and E2 are access unified when their contents can be jointly reported on, and jointly employed in the rational control of reasoning and behavior.

[4] This is rejected, for example, by Bayne (2010). Also relevant here is experimental evidence (e.g., from the Sperling paradigm) for phenomenal overflow: phenomenally conscious states that are not accessed, if not accessible at all (Block 2007).

[5] This is probably why one of the few people to write on this topic, Carls-Diamante (2017), conditionalizes her thesis: “if the brain and the arms can generate local conscious fields, the issue arises as to whether subjective experience in an octopus would be integrated or unified, given the sparseness of interactions between the components of its nervous system” (2017: 1273, emphasis added).

[6] This is reminiscent of the “contextualist model” of split-brain patients, where we only have two subjects when under experimental conditions (Marks 1980; Tye 2003). Godfrey-Smith (2020) favors something like this approach for the octopus, but even then, he still thinks there is so-called partial unity for affective states. Roughly, this means that while there are contextually two subjects, there is only one (e.g.) token pain “shared” across each subject in those contexts. 

[7] Interestingly, it has been argued that octopuses do have something akin to a global workspace (Mather 2008) and are capable of rapid-reversal learning (Bublitz et al 2021), but again, this does not tell us that the arms have and can do these things. Presumably, if (somehow) my mental states were in your global workspace, that wouldn’t make me have conscious experiences. 

[8] Even if these states were valenced, this wouldn’t necessarily show that these states were conscious (see fn. 4), or that octopus arms were conscious (by having any other conscious states, for instance). Motivational trade-offs are a hallmark of valenced states, but Irvine (2020) argues that even C. elegans perform such trade-offs. Presumably, C. elegans are not conscious in any way.

[9] For further discussion on this, see Dung (2022). 

Comments

“Indeed, we have evidence that withdrawal behavior is frequently unconscious. Noxious stimulation can cause humans in vegetative states to yell, withdraw or display ‘pained’ facial expressions (Laureys 2007).”

While I think the point is correct that pain reflexes can occur unconsciously, I would be wary of using humans in a vegetative state as a clean example of unconsciousness. A significant minority of patients in a diagnosed vegetative state without behavioural responses are nevertheless able to follow auditory commands. The classic paradigm uses instructions to 'imagine playing tennis' or 'imagine walking around your home' and detects the resulting distinctive fMRI or EEG signatures - where some patients can use their choice of imagined activity to communicate and answer binary questions!

Thanks for this observation, Ben. I agree that this isn't the cleanest example. Maybe the best examples are in animals--e.g., headless frogs whose legs still try to remove acid from their skin. But I'm sure Joe has his own thoughts about this.

Hi Ben - Thanks for this. I agree that the PVS case is tricky, and probably not the best example. I assume that you are claiming that PVS patients are still phenomenally conscious, and that you are pointing to this study. (Note though that the authors never use "phenomenally conscious".) However, as expected, the Owen et al study is controversial. I find  this paper helpful when it comes to understanding some of the underlying methodological issues. One issue is whether these patients actually have intentional agency--perhaps suggested by their task responses--as this is often used as the diagnostic criterion for inferring that these subjects are (minimally?) conscious. It's unclear whether they have such agency (see here), although this would not itself eliminate consciousness. So, fair point! 

I agree - it's very unclear what the level of intention, awareness, or phenomenal consciousness is in these cases. The 2006 study is definitely the foundational one, but there's a decent amount of subsequent literature (though without much further clarity). I thought of this point because I'm currently reading Anil Seth's 2021 book on the neuroscience of consciousness, "Being You", which covers the topic quite well. (I'd highly recommend it to other interested readers!)
But this was a very minor nitpick on a fascinating report - well done!

Thanks for this report! I 100% agree with Ben Stewart that this is really really cool. However, minor gripe: I do wish this had been edited for clarity of language. Even by EA Forum standards, the prose here is about as twisty as a pissed-off octopus' tentacles. 

Oh, also:  

  • I was confused by references to amputation until I understood that amputated tentacles can act autonomously for some amount of time. A brief, direct description of this would be useful.
  • Your 0.025 and 0.035 are extremely specific; it would be interesting to get a brief description of how you ended up with those numbers without having to delve into the full report.

Usually, people end up with such specific numbers by starting with several round numbers and multiplying. We didn't do that in this case, though we could. Instead, I asked Joe to use a heuristic that I sometimes find helpful. Imagine being presented with the same information ten times. How many times do you think you'd come to a different conclusion if you were reasoning about it in good faith? In other words, try to run many simulations of your own sincere deliberations to assess how much variation there might be in the results. If none, then imagine going to 100 simulations. If still none, imagine going to 1000. Etc. And when Joe did that, those are the numbers that struck him as plausible.
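Put as a rough formula (my shorthand for the same idea), the heuristic estimates:

credence ≈ (number of simulated deliberations reaching the conclusion) / (total number of simulated deliberations)

So 0.025 amounts to roughly 1 dissenting deliberation in 40, and 0.035 to roughly 3 or 4 in 100.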

Oh, that's interesting. Did you folks come up with that methodology? 


We did. I don’t know how helpful it is for others, but I find it useful.

Bob - thanks for this fascinating post. (And for all your other cool work on animal ethics -- I'd encourage EA folks to check out your other papers on your website here.)

Your post focused mostly on a sort of philosophy-of-neuroscience approach to these issues. I didn't see much discussion of the evolutionary/functional/adaptive reasons why an animal might evolve an integrated, unitary consciousness versus separate consciousnesses.

From my evolutionary psychology perspective, I interpret sentience (consciousness, subjective awareness) as an adaptation that arises  mostly to coordinate learning and behavior at the level of the whole organism (the complete phenotype)-- e.g. where it goes, what it does, what it pays attention to, how it updates & learns in response to positive and negative fitness affordances, etc. 

From that functional perspective, it seems like your 'Default to One Subject Assumption' is pretty reasonable.  

As Richard Dawkins has argued, although an animal's phenotype contains many different organs, tissue types, and subsystems, there are some final common pathways for survival and reproduction that typically require whole-body coordinated responses to environmental threats and opportunities. It seems reasonable that a 'central executive' should prioritize responses, make decisions, and plan behaviors -- rather than splitting up the body-coordination task into separate consciousnesses, which could easily give rise to conflicts, incompatible movements, and incoherent behavioral tactics.

I guess an interesting edge case might be whether eusocial insect colonies (ants, termites,  bees, etc) might have a functionally unified sentience to coordinate whole-colony perception, decision-making, and behavioral responses (even if this colony-level sentience relies on spatially distributed computation among thousands of 'individual' organisms -- just as our sentience relies on spatially distributed computation among billions of neurons).

Anyway, I'm curious whether you think an evolutionary-functional analysis of animal consciousness could reinforce the case that the 'One Subject Assumption' is usually valid.  (Or at least that it's a bit implausible that an octopus would evolve a sort of 1+8 structure to its sentience.)

Hi Geoffrey -

I will let Bob speak for himself in case he has any thoughts on this, but speaking for what went into the report, it's true that we did not take much of an evolutionary psychology perspective, certainly not directly. I actually think it's a live possibility that consciousness is a spandrel--see here and here--although nothing in the report hinged on this. (Note that consciousness having little to no utility is consistent with consciousness being causally efficacious in the production of behavior etc. So no epiphenomenalism here.)

With that said, one consideration that did come up related to your question is the idea that an organism has at most one independent cognitive system. (These are different from cognitive subsystems, of which an organism can have many.) The prevailing idea, as noted in this paper by Carls-Diamante, is that having more than one would be counter-productive in various ways due to, e.g., failures of complete information transfer across the systems. So perhaps this could be connected to your point: having more than one independent system per organism is maladaptive. But of course, Carls-Diamante goes on to suggest that the octopus may be an exception to what is otherwise a rule. As we argue though, even granting this, we are still far away from the core issue, which is whether the octopus houses more than one welfare subject. 

Thanks so much for your comments, Geoffrey! Just to clarify, Joe really deserves all the credit for this great report; I only provided feedback on drafts. That aside, I'm very sympathetic to the functional perspective you outline, which I've borrowed in other (unpublished) work from Peter Godfrey-Smith. Seems exactly right to me.

Bob (and Joe) - thanks for the thoughtful replies. Will mull this over a bit more before writing a proper response.

Fun fact: I used to hang out with Peter Godfrey-Smith in grad school, and his philosophy of evolutionary biology/psychology influenced some of my thinking as well (and maybe vice-versa).

This is really, really cool. Thanks!

Thanks for writing this up! It was helpful to give a concrete example to bolster the argument you made in the last post.

I'm curious if anybody has considered temporality as an element of this question? Some (maybe) relevant questions:

  • Does my current brain constitute the "same" conscious system as I did when I was 2 years old?
  •  What about somebody who undergoes a traumatic brain injury and becomes a "new person" with regard to personality/memory/learned behaviors?
  • If congruity/consistency between one's self(ves) at different life-stages is related to memory, might non-human animals actually have far more conscious (sub)systems over their lifetimes than humans?
  • What does this imply about death? If some factory-farmed animals have lives that are worse than worth living, do we have even more evidence that they would be better off dying sooner (before they develop more conscious systems/subsystems)?
  • If these ideas have any amount of credence, how might we possibly quantify this?

I have little background in philosophy of mind, and there's a good chance your team has already considered/debunked these ideas, but I wanted to throw them out there as food for thought anyway.

Hi Jasper - Thanks for these interesting questions. So speaking for myself, I did not take up the temporality issue--at least not in the way you seem to be suggesting with these cases. I can say something about your brain injury question though. The term 'person' is used in different ways. Sometimes it is used to just mean whatever we are fundamentally. So, if a traumatic brain injury resulted in a numerically distinct person in this sense, then it would be the same thing as death and then the ‘birth’ of a new person. In that case, if there was a welfare subject pre-trauma, then there will be a new welfare subject post-trauma, so long as whatever capacities are necessary and sufficient for being a welfare subject are preserved. 

On another usage, being a “new person” is just metaphorical, as in what I might say to my daughter if she came home from college super interested in Goth stuff. (My daughter is not quite 5 yet, but who knows…)

Finally, some use ‘person’ in a Lockean (after John Locke) forensic sense, where forensic persons are the kinds of things which can be held morally responsible for their actions, for which prudential concern is rational, etc. There are all sorts of tricky issues here, but one possibility is that *you* can survive even if you do not survive as the same forensic person. Perhaps something like that can happen in certain cases of brain trauma. For example, maybe whatever survives post-trauma is not morally responsible for any pre-trauma actions—precisely because there are none of the same memories, personality, beliefs, and behavioral dispositions. I’d have to think more on how this connects to questions about being/counting welfare subjects, though.

I think which of these different senses of ‘person’ is apt for saying someone is a ‘new person’ post-trauma depends a whole lot on the actual details of the trauma in question. 

Thanks for the detailed reply to the trauma case. Your delineation between various definitions of personhood is helpful for interrogating my other questions as well. 

If it is the case that a "new" welfare subject can be "created" by a traumatic brain injury, then it might well be the case that new welfare subjects are created as one's life progresses. This implies that, as we age, welfare subjects effectively die and new ones are reborn. However, we don't worry about this because 1. we can't prevent it and 2. it's not clear when this happens / if it ever happens fully (perhaps there is always a hint of one's old self in one's new self, so a "new" welfare subject is never truly created).

Given the same argument applies to non-human animals, we could reasonably assume that we can't prevent this loss and recreation of welfare subjects. Moreover, we would probably come to the same conclusions about the badness of the death of the animals, even if throughout their lives they exist as multiple welfare subjects that we should care about. Where it becomes morally questionable is in considering non-human animals whose lives are worse than not worth living. Then, there should be increased moral concern for factory farmed animals given we accept that: 1. their lives are worse than not worth living; 2. they instantiate different welfare subjects throughout their lives; and 3. there is something worse about 2 different subjects each suffering for 1 year than 1 subject suffering for 2 years. (Again, I don't think I accept premise 2 or 3 of this argument; I just wanted to take the hypothetical to its fullest conclusion.)

Hi again. Regarding this comment: 

“If it is the case that a "new" welfare subject can be "created" by a traumatic brain injury, then it might well be the case that new welfare subjects are created as one's life progresses. This implies that, as we age, welfare subjects effectively die and new ones are reborn.”

I am not sure this follows. Even if we granted that traumatic brain injury could result in a new welfare subject—which would depend on (i) what welfare subjects are, and (ii) what happens in brain injury—whether the same thing would happen during the aging process would depend on whether whatever relevant thing happens in the brain injury also happens in aging. (For my part, I do not see why this would be the case. Maybe you are thinking of natural neurological changes that happen as we get older?)   

And let me add this. The most neutral way of understanding welfare subjects, to my mind, is just what we say in the report: an individual S is a welfare subject if and only if things can be non-instrumentally good or bad for S. Assuming that our theory of welfare subjects is subordinate to our theory of well-being or welfare, then a welfare subject will just be the kind of thing that can accrue welfare goods and bads—whatever those are. 

Suppose now that x has a traumatic brain injury at t1.  We can then ask:

  1. Is there still a welfare subject at t2?
  2. Is whatever is there at t2 (post-trauma) the same welfare subject as at t1?

The answer to (1) depends on whether whatever is there at t2 can accrue welfare goods and bads. And that depends on what those goods and bads are. If, for example, we adopted a desire-satisfaction view, and the brain injury knocked out the ability to have desires, then there would no longer be a welfare subject at t2.

The answer to (2) depends not just on whether there is still a welfare subject at t2 [so a ‘yes’ answer to (1)], but also on the kind of thing x fundamentally is—maybe a forensic person, maybe something else—which will determine its persistence conditions, and thus whether it can survive brain injury. (Compare: I am a resident of Texas, but this does not have anything to do with what I am fundamentally, as I can survive if I move somewhere else. If I am a forensic person but only in the way I am a Texan, then I can survive not being a forensic person. And if being a welfare subject has nothing to do with being a forensic person, then I can survive as a welfare subject without surviving as a forensic person.) I would assume that if x at t1 = y at t2, then we have the very same welfare subject too, so long as being a welfare subject comes automatically with whatever it takes for us to persist over time.  

Thanks for fleshing this out - that all makes sense to me.
