Partial Aggregation's Utility Monster


Discuss this post on the EA Forum!

Author’s Note: this post is based on my final paper, “Troubles in Grounding Principles of Partial Aggregation”.

I recently wrote a paper for class on problems with the class of theories attempting to formalize “partial aggregation”, as described by philosophers like Alex Voorhoeve and Victor Tadros. I thought most of the paper was a bit of a mess: I had a hard time communicating and clarifying my points, and in the process of working on them I came up with many objections to specific claims that I didn’t have time to flesh out responses to. There was one thought experiment in it, however, that got a positive reception from pretty much everyone I showed it to, I think because it presents an apparently devastating reductio against the set of assumptions it is derived from. I decided it warranted its own write-up, independent of the less successful elements of the paper.

So, first, to introduce the concepts involved in this thought experiment. I have written about pure aggregation and its issues before, but to briefly recap: aggregation is the principle that, in moral terms, there is something comparable between one person experiencing two units of pain and two people each experiencing one unit of pain. (You can also have an aggregative theory that places different significance on pain depending on its level, such as prioritarianism; in those cases it is better thought of as two people each experiencing one unit of intrinsic disvalue versus one person experiencing two units of intrinsic disvalue, even if that corresponds to something more like one unit of pain versus 1.5 units of pain.) This can get notoriously counterintuitive in cases in which there is aggregative equivalence between two pools you could help, and the difference in the benefits you could give each is very, very large, as for instance described in this famous post on torture versus dustspecks.
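To make the parenthetical concrete, here is a minimal sketch of a prioritarian weighting, assuming (purely for illustration) that intrinsic disvalue is a convex function of pain with an arbitrary exponent of 1.7:

```python
# Toy prioritarian weighting: disvalue grows faster than pain, so pain
# concentrated in one person counts for more than the same pain spread out.
# The exponent 1.7 is an arbitrary illustrative choice, not from any text.

def disvalue(pain: float, exponent: float = 1.7) -> float:
    return pain ** exponent

two_people = 2 * disvalue(1.0)  # two people at 1 unit of pain each: 2.0 disvalue
one_person = disvalue(1.5)      # one person at 1.5 units of pain: ~1.99 disvalue

# The two situations carry roughly equal intrinsic disvalue, even though
# the aggregate pain differs (2 units versus 1.5 units).
print(two_people, one_person)
```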

Conventional deontology doesn’t fix this issue either, unless you subscribe to an incredibly extreme version of deontology under which beneficence has no moral importance and all side-constraint violations are equally bad. Otherwise, you will still run into situations in which you can choose to benefit one of two pools in a way that violates no side-constraints, and may have to choose aggregatively, or in which you can be judged to be at more or less fault depending on how severe a constraint you violated and for how many people. What is needed to “fix” aggregation, as a rule, is some distributive theory that competes with it in these cases directly.

Fully non-aggregative theories aren’t very appealing either, though. Pure leximin, for instance, has the apparently terrible implication that if person A is being tortured for a million years and person B is being tortured for 999,999 years, you should prefer sparing A one second of their torture to sparing B all of theirs. Versions of this that look at how much you can benefit someone, rather than how badly off someone is, avoid this implication, but still wind up being counterintuitive. For instance, if you have rented out heaven for a million years, and you can either send all of humanity there after one year of waiting, or send Bob, and only ever Bob, there immediately, you ought to send Bob to heaven for a million years rather than waiting to send everyone there for 999,999 years.
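To see concretely why leximin implies this, here is a minimal sketch, assuming (as the example does) that each person’s welfare can be summarized as a single number, here negative years of torture:

```python
# Pure leximin: compare the worst-off positions first, then the next-worst,
# and so on. Sorting each welfare profile ascending and comparing the lists
# lexicographically implements exactly this.

def leximin_prefers(outcome_a: list[float], outcome_b: list[float]) -> bool:
    return sorted(outcome_a) > sorted(outcome_b)

SECOND = 1 / (365.25 * 24 * 3600)  # one second, expressed in years

# Option 1: spare A one second of their million-year torture.
spare_a_one_second = [-(1_000_000 - SECOND), -999_999]
# Option 2: spare B all 999,999 years of theirs.
spare_b_entirely = [-1_000_000, 0.0]

# Leximin only looks at the worst-off person, and A remains (barely) worse
# off under Option 2, so the trivial benefit to A wins.
print(leximin_prefers(spare_a_one_second, spare_b_entirely))  # True
```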

And that is where partial aggregation comes in. Partially aggregative theories appear to give you everything you want in these situations. If you can benefit the members of each group a similar amount, you aggregate claims to decide which group to help. If the groups have claims that are too far apart, you always help the one with the stronger per-individual claim.
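As a rough sketch of this structure (my own toy formalization, not Voorhoeve’s or Tadros’ actual account; the factor-of-100 “relevance” threshold is an arbitrary placeholder):

```python
# Toy partially aggregative choice rule. Each group is represented as
# (strength of each individual's claim, number of claimants).

def choose_group(strong: tuple[float, int], weak: tuple[float, int],
                 relevance_factor: float = 100.0) -> str:
    s_strength, s_count = strong
    w_strength, w_count = weak
    if s_strength < relevance_factor * w_strength:
        # The weaker claims are "relevant": aggregate as usual.
        return "strong" if s_strength * s_count >= w_strength * w_count else "weak"
    # The weaker claims are irrelevant: help the stronger claimants,
    # no matter how many weak claimants there are.
    return "strong"

print(choose_group(strong=(1000.0, 1), weak=(900.0, 2)))       # "weak": close claims aggregate
print(choose_group(strong=(1000.0, 1), weak=(0.001, 10**12)))  # "strong": dustspecks never add up
```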

The other principle crucial to my thought experiment is the “separateness of persons”. Opposition to pure aggregation by non-consequentialists often appeals to this principle. The idea is that aggregation may make sense within a life, because any trade-off between parts of that life will be directly experienced by the same subject (you may suffer pain now for the benefit of some later point in your life, but that is alright because the you who experiences this pain will also get to experience that benefit). The supposed confusion of the utilitarian is to apply this between lives, which do not share a subject of experience, and which only have one shot at a good existence. Where it is justified to trade off within a life, it is not necessarily justified to trade off between lives. I believe this was most influentially spelled out by John Rawls in chapter I, part 5 of “A Theory of Justice”, although it has also been highly influential on other opponents of utilitarian aggregation with contractualist leanings; Voorhoeve’s account of partial aggregation, at least, explicitly appeals to the “separateness of persons” as part of its grounding.

Both of these positions, the separateness of persons and partial aggregation, have some basic issues in my opinion. Theories of partial aggregation, for instance, have straightforwardly appealing implications in cases where:

  1. You can choose between benefiting two different groups.

  2. The strength of the claims of each individual in a particular group is roughly the same as the others in that group.

  3. You can provide this benefit with relative certainty.

  4. You are only making this choice once, or at least you are focusing on the application of this principle to one decision in isolation.

Ambiguities and bizarre implications can be derived for pretty much any theory of partial aggregation I have seen so far by taking the theory out of one of these conditions in the right way. I won’t go through these arguments, but I think Derek Parfit (by tweaking condition 1) and Joe Horton (by tweaking condition 2) in particular have highlighted especially strong problems for partial aggregation. And yet, partial aggregationists like Voorhoeve and Tadros have seemed very willing to bite what I see as devastating bullets in each case highlighted by opponents. I take this, at least in part, to be because the answer partial aggregation gives in the cases it works best for (where conditions 1-4 hold) is so desirable compared to the alternatives.

Separateness of persons, likewise, seems to me to have serious problems. One of the clearest, as William MacAskill has mentioned, is that it is just not clearly true, and if it is false, then it does seem to undermine many commonsense non-consequentialist principles. If it is true, on the other hand, it is not clear that it automatically rules out utilitarianism; it would just undermine its competitors less. A somewhat more novel critique of mine is that it does not, it seems to me, actually account for our intuitions against aggregation. As I hope to show, by only partially addressing these intuitions, it gives a fingerhold for pulling it in some highly repugnant directions.

I will allow that it feels more immediately intuitive that it is wrong to trade off a large number of irritated eyes for intense torture than that it is imprudent to cure a very long-lived person’s occasionally irritated eyes with a procedure that induces days of similar torment. And yet, I think it is very intuitive that the latter case is still quite repugnant. Insofar as people find it less repugnant, I think that is largely because non-consequentialists often feel you shouldn’t force someone to be prudent, so the practical relevance of this idea of prudence is fairly academic if the person in question does not actually want to undergo this torturous procedure. This is an area where I have found many people, including opponents of utilitarian aggregation, are ready to bite the bullet and say that aggregation, even extreme aggregation, can determine what is beneficial within a life. I think this is a mistake, or at least that it is a mistake to treat this as significantly less repugnant at the extremes than interpersonal aggregation.

To draw on a thought experiment I mentioned in an earlier talk, imagine we developed a “pain-killer” that spread a given experience of pain out over a very, very long time, at a barely noticeable level of increased pain in each moment. I contend that to most, this would intuitively be an effective painkiller, one someone would be happy to use during a painful surgery, for instance. I further contend that if such a painkiller had somewhat less risk of complications than a more conventional anesthetic, it could become the standard prescription in such cases, considered better even though it doesn’t alter the aggregate pain at all. I still further contend that any philosopher who wrote a thinkpiece about this painkiller arguing that it was no good, that it wasn’t even really a painkiller, and that it was imprudent to choose it over the conventional anesthetic for these reasons, would be viewed by most people as laughably academic.

I want to emphasize here that I am not trying to prove that the aggregative answer is wrong in this case, but merely that we are uncomfortable with intrapersonal aggregation, not merely interpersonal aggregation. I think this on its own should cast some doubt on whether it is really the interpersonal aspect of the aggregation that bothers us in the situations partial aggregation seeks to address, and so whether “separateness of persons” is a good premise for anti-aggregation intuitions. But again, I think this is a bullet many could see themselves biting, especially since it really does feel like there is at least some difference.

So let’s keep pushing on this asymmetry a bit. Imagine combining the intrapersonal and interpersonal elements. Let’s say that there is some extremely long-lived being who has a minor eye problem, something like slightly blurry vision for a few minutes after waking up each morning. You are a doctor, and you have a serum that can either cure this being’s condition for the rest of their life, or cure another patient of a condition that induces unceasing, torture-level suffering for days. There is some length of time this long-lived being could live for (to put a number on it, let’s just say a trillion years) such that you ought to give the serum to them rather than to the tortured patient.

This seems to me even more repugnant, and yet conventional separateness of persons doesn’t seem to recognize it as a problem, and many who reject torture versus dustspecks have philosophical commitments that would imply you ought to give the serum to the long-lived being. If you can bite even this bullet, however, and are a principled enemy of aggregation, partial aggregation makes this case far, far worse. There is some factor of difference between the severity of cases such that, under partial aggregation, no number of cases of the lesser severity can outweigh the claim of the greater severity. Again, let’s put an arbitrary number on it, say a factor of 100. Those who have stuck with separateness of persons and partial aggregation so far seem to be committed to the claim that, if this being lived for 100 trillion years, you ought to help them rather than infinitely many people with the torture condition.
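Plugging the stipulated numbers into the toy rule from earlier (still assuming, arbitrarily, that a trillion years of blurry mornings aggregates to the strength of one torture claim):

```python
# 100 trillion years of blurry mornings = 100 torture-equivalents of claim
# strength, which hits the factor-of-100 relevance threshold.
torture_claim = 1.0
being_claim = 100.0

# The torture claims are now irrelevant: no number of torture-ees, however
# large, outweighs the long-lived being under this rule.
print(choose_group(strong=(being_claim, 1), weak=(torture_claim, 10**9)))  # "strong"
```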

More speculatively, it might get even worse than this. In the paper I wrote for class, I spend some time thinking about how partial aggregation might relate to classic deontic side-constraints, like those related to act/omission and intention. I concluded that, at the very least, even a deontologist interested in partial aggregation should allow that the conclusion of partial aggregation should sometimes be preferred over the deontic constraints: it is highly intuitive not only that we should choose to help the torture-ee over the dustspeck-ees, but that we should be willing to personally inflict the dustspecks if it will stop the torture. I sketch out a possible way of doing this that I think a partial aggregationist should find appealing, which first applies partial aggregation to the violation of these side constraints, and second allows these constraints to be traded off in a partially aggregative way against benefits. For instance, there is some number of people you could personally maim that would be worse than killing one person, but it may be that there is no number of white lies you could tell that would be worse than killing one person, and likewise, no number of white lies you shouldn’t tell if it will save someone’s life.
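A minimal sketch of what this could look like, with entirely made-up severity numbers (this is my speculative construction, not established deontological machinery):

```python
# Treat each kind of constraint violation as a claim of a given severity and
# apply the same relevance threshold between violation types. The severity
# numbers below are illustrative placeholders only.
SEVERITY = {"white_lie": 0.001, "maiming": 30.0, "killing": 100.0}

def can_outweigh(lesser: str, greater: str, relevance_factor: float = 100.0) -> bool:
    """Can SOME number of 'lesser' violations be worse than one 'greater' one?"""
    return SEVERITY[greater] < relevance_factor * SEVERITY[lesser]

print(can_outweigh("maiming", "killing"))    # True: enough maimings outweigh one killing
print(can_outweigh("white_lie", "killing"))  # False: no number of white lies does
# The same comparison, run between a violation and a benefit (a white lie
# versus saving a life), would let serious benefits override trivial constraints.
```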

If we accept this, we might not accept a straightforward exchange rate, so let’s add in another factor of 1,000 to be safe, and extend the being’s life to 100 quadrillion years. If I am right about all of this, about what features partial aggregation is generally thought to have, along with which ones it seems highly intuitive that it ought to have, then it may turn out that grounding it on, or even just combining it with, “separateness of persons” leads you to the following conclusion:

If it will spare a being with a long-but-finite lifespan from slightly blurry eyes in the morning, we ought to personally torture infinitely many people, unceasingly, for weeks.

I think anyone looking at this conclusion in isolation would agree that utilitarianism has no utility monster a fraction as horrible as this.

This conclusion is very repugnant, but the price it pays for that is that it does not fall out of any single popular theory, and so leaves many angles for escape. I still think it shows something important: that the combination of some commonsense and popular anti-aggregation principles appears to allow for, or even outright imply, a conclusion that seems worse than standard objections to extreme aggregation. It goes beyond the strange structures and weird implications Parfit and Horton highlighted, and shows that there are bullets in this space that are unpleasant, not merely absurd, to bite. But I want to go through the ways someone could differ slightly from my version of these principles in order to save a good deal of them without being committed to my conclusions.

The obvious point to address first is the deontic side-constraint principle. This is a principle that I only briefly defended, made up for the purposes of my paper, and have now, in this same piece, shown can make a bad situation worse. Although I think it addresses important dilemmas for partial aggregation, so that people hoping to develop the theory further will have to contend with how partial aggregation juggles helping others with not violating side constraints, I will admit this feature is very suspicious on the meta-level. So much so that, for the purposes of the rest of this post, I will concede it and only consider objections to the weaker form of the partial aggregation utility monster (PA monster from here on), in which one can choose between saving the long-lived being or the tortured people. Maybe consider the stronger version a sort of sidenote, saying that the PA monster situation can plausibly get even worse for a non-consequentialist.

Another escape route, this from the weaker version of the PA monster, is to in some way reorient how you treat these cases in terms of person-moments rather than persons. This seems to me to involve the rejection of “separateness of persons”, which would be highly revisionary for some, but there are some ways to do this structurally while preserving a version of separateness of persons.

One possibility is to concede that separateness of persons only works against the extremes of interpersonal aggregation, but hold that there is nevertheless something else wrong with the extremes of intrapersonal aggregation, which allows you to treat the claim of the long-lived being as weaker than a strong, briefer interest. It is true that separateness of persons does not seem to commit you to allowing aggregation within a life, but neither does it commit you to denying it. Still, if you think what is wrong with pure aggregation in ethics is the separateness of persons, it seems suspicious to me that you need an entirely different principle to fix the repugnant implications of intrapersonal aggregation. In particular, if you take the framing of the separateness of persons as a diagnosis of a mistake utilitarians make, as Rawls at least frames it, this seems to me to directly imply that treating a group of people like one person’s life would lend support to aggregating across them, and so, by implication, that aggregating within a life makes much more sense than aggregating across lives.

The other way of trying to hang onto separateness of persons is to interpret it a bit differently. Say that the point of separateness of persons is just that there is a difference between how we can make prudential decisions (aggregation) and how we can make ethical decisions. It is true that prudential decisions are intrapersonal and ethical decisions are interpersonal, but that doesn’t mean that you can make intrapersonal judgements using the same assumptions as prudence when you are in an ethical situation; the context itself changes things. There is even some apparent precedent for this type of distinction. As I mentioned, it is commonsense that while it may be prudent for you to do something that will make you happy, like making friends or getting married, forcing someone else to make friends and get married (that is, making this decision in the ethical context) is not ethical. Likewise, you might say that it is prudent for the long-lived being to undergo torture in order to stop their eye problems, but that does not mean that it is ethical to help them with this eye problem rather than the torture-ee.

Although there is at least some precedent for this type of interpretation, I don’t think it is actually a very promising route either. It is true that we consider forcing someone to be prudent unethical, but otherwise, when we are doing something a person wants, we generally take how good that something is for them to correspond roughly to how much ethical weight helping them has. If someone is undergoing five units of suffering they want to stop, it is prudent for them to escape the suffering, and it is ethically valuable to help them escape that suffering, and it seems as though this is for most of the same reasons. Some compelling, principled exception, like the issue of consent, seems to be needed to reformulate separateness of persons such that it opposes both the interpersonal and the intrapersonal aggregation in the ethical decision.

A final plausible escape route would be to limit partial aggregation. That is, you could say that there are features of both “relevance” and “seriousness” in the partial aggregation equation. If all the benefits you can provide are sufficiently “serious”, then they all automatically become “relevant”, and you can return to pure aggregation. This would say, for example, that dustspecks are not a serious harm, and therefore a sufficiently worse harm (like torture) can render the claims of the dustspeck-ees irrelevant; but if a harm is very serious (like the torture), then no matter how much stronger the claim competing with it is, it remains relevant. Structurally, this would look something like: there is no number of dustspecks that could outweigh torture, but there is some number of tortures that could outweigh supersupersupertorture.
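This modification might look like the following extension of the earlier toy rule, with a made-up seriousness floor (torture at severity 1.0 counts as serious, dustspecks at 0.001 do not):

```python
# Extended toy rule: claims at or above the seriousness floor stay relevant
# no matter how much stronger the competing claim is.

def choose_group_with_floor(strong: tuple[float, int], weak: tuple[float, int],
                            relevance_factor: float = 100.0,
                            seriousness_floor: float = 1.0) -> str:
    s_strength, s_count = strong
    w_strength, w_count = weak
    relevant = (s_strength < relevance_factor * w_strength
                or w_strength >= seriousness_floor)
    if relevant:
        return "strong" if s_strength * s_count >= w_strength * w_count else "weak"
    return "strong"

# Torture is serious, so it stays relevant against the 100-trillion-year
# being, and ordinary aggregation takes over:
print(choose_group_with_floor(strong=(100.0, 1), weak=(1.0, 10**9)))   # "weak"
# Dustspecks are not serious, so torture can still render them irrelevant:
print(choose_group_with_floor(strong=(1.0, 1), weak=(0.001, 10**12)))  # "strong"
```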

This could solve the PA monster, because even if you concede that the long-lived being has a much much stronger claim than each person undergoing torture, torture may be serious enough that it isn’t rendered irrelevant by partial aggregation. This modification has some intuitively appealing features, but there are a couple reasons that I don’t like it.

For one thing, although it doesn’t entail the infinitely-many-people-being-tortured implication of the PA monster, it still seems to concede a great deal. The previous version of the thought experiment, in which you are choosing between the torture-ee and the trillion-year lifespan of the being, still implies that you should save the long-lived being. Indeed, in the versions where I extended the life to reach the infinitely-many-tortures point, it still scales up the number of torture-ees: the 100 trillion year lifespan corresponds to 100 people undergoing torture, and the 100 quadrillion year lifespan corresponds to 100 thousand people undergoing torture. It’s true that in all of these situations the modified version of partial aggregation gives you the same answer as pure aggregation, while fixing some specific unpleasant cases like torture versus dustspecks, but this feels inconsistent.

If the PA monster relied on an asymmetry in aggregation being pushed over a cliff to crazyville, this version of PA still allows for the root asymmetry, and some of its strangeness. Imagine the strongest claim someone can have that does not cross this “seriousness” threshold where pure aggregation kicks in no matter what, say a broken leg or something. This modified version of partial aggregation will still have the fairly intuitive implication that there is some harm serious enough, say supersupersupertorture, such that no number of broken legs matters more than it, while retaining the implication that you ought to save the trillion-year-old being from the blurry eyes rather than infinitely many people from broken legs. It all feels like sandpapering the issue down rather than getting at its root.

The other problem with this approach is that, I think, it only seems like an appealing fix because we are incapable of imagining and forming intuitions about something so unspeakably terrible that it stands in relation to torture as torture stands in relation to dustspecks. If we were able to imagine this (not just something that adds up to it in aggregate, but something that is, uncontroversially, in the moment, this bad), I think those attracted to partial aggregation might want their irrelevance criterion back.

In the end, I believe the thing to do with this thought experiment is to ditch separateness of persons as the reason against aggregation, and then to restructure partial aggregation so that it concerns person-moments rather than persons. This is not the only thing I think partial aggregation should do (it is usually framed non-axiologically, mostly Norcross’ doing I think, but this framing also proves too little of the relevant intuition), but it is maybe the most revisionary.

I also tend to think that more general critiques of partial aggregation are right, and, per the title of Horton’s piece, we probably are just forced to “always aggregate”. As I have discussed before, this sometimes seems highly repugnant to me, but I see little way around it. It is also beyond the scope of this particular piece. My own takeaway, however, for the record, is something like this: the right moral principle is probably (with plurality credence) the one that chooses to prevent the dustspecks over the torture, but this principle nonetheless cannot convince me in this particular case. I would not, in fact, act on or even endorse the conclusion myself. But more generally, I think I do endorse pure aggregation. Make of that set of views what you will.


If you enjoyed this post, help us write more by donating to our Patreon.

