
Introduction

It seems that "ex ante" views (like ex ante prioritarianism) haven't been discussed much within the EA community. Basically, the approach is to aggregate utility within each individual first (over their life, and by taking the expectation), and then apply whatever social welfare function you like to the resulting individually aggregated utilities.

Furthermore, you could take these individual aggregations/expectations conditional on existence (past, current or future), and only include the terms for actual (past, current or future) individuals; so the set of individuals to aggregate over would be a random variable. You'd then take another expectation, this time of the social welfare function applied to these aggregated utilities, over the set of existing individuals.

The main benefit here is to avoid the objection that we are overriding individual interests while still being prioritarian or negative-leaning, since we can treat personal and interpersonal tradeoffs differently.


Math formalism

We define $u_i$ to be the aggregated utility of individual $i$ over all time (or just the future), in a given determined outcome (no expectations applied yet); in the outcomes in which they haven't existed and won't exist, $u_i$ is left undefined. Then we define

$$\bar{u}_i = \mathbb{E}[u_i \mid i \text{ exists (past, current or future)}],$$

and we apply our social welfare function to the set

$$\{\bar{u}_i : i \text{ exists (past, current or future)}\}.$$

E.g., $\sum_i f(\bar{u}_i)$ for some function $f$ which is increasing (or non-decreasing) and concave. Some examples here. Total utilitarianism has $f(x) = x$ for all $x$, and the ex ante view applied to it actually makes no difference. A fairly strong form of negative utilitarianism could be defined by $f(x) = \min\{x, 0\}$ for all $x$, i.e. $f(x) = x$ if $x \leq 0$ and $f(x) = 0$ otherwise; this means that as long as an individual is expected to have a good life (net positive value), what happens to them doesn't matter, or could be lexically dominated by concerns for those expected to have negative lives (i.e. only if we can't improve any negative lives, can we look to improving positive ones).

Finally, we rank decisions based on the expectation of $\sum_{i \in I} f(\bar{u}_i)$ over the (random) set of existing individuals $I$:

$$\mathbb{E}_I\left[\sum_{i \in I} f(\bar{u}_i)\right].$$
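To make this concrete, here is a minimal Python sketch of the two evaluations. This is my own illustration, not code from any source; the names ex_ante_value and ex_post_value and the dictionary representation of outcomes are just assumptions for the example.

```python
from typing import Callable, Dict, List

# An "outcome" maps each individual who exists in it to their lifetime
# (time-aggregated) utility u_i; anyone absent from the dict doesn't exist
# in that outcome.
Outcome = Dict[str, float]


def ex_ante_value(outcomes: List[Outcome], probs: List[float],
                  f: Callable[[float], float]) -> float:
    """sum_i P(i exists) * f(E[u_i | i exists]) -- the ex ante evaluation."""
    p_exists: Dict[str, float] = {}
    weighted_u: Dict[str, float] = {}
    for outcome, p in zip(outcomes, probs):
        for person, u in outcome.items():
            p_exists[person] = p_exists.get(person, 0.0) + p
            weighted_u[person] = weighted_u.get(person, 0.0) + p * u
    return sum(p_exists[i] * f(weighted_u[i] / p_exists[i]) for i in p_exists)


def ex_post_value(outcomes: List[Outcome], probs: List[float],
                  f: Callable[[float], float]) -> float:
    """For comparison: apply f within each outcome, then take the expectation."""
    return sum(p * sum(f(u) for u in outcome.values())
               for outcome, p in zip(outcomes, probs))


# Example weighting functions from the text: total utilitarianism, and the
# strongly negative-leaning f(x) = min(x, 0).
f_total = lambda x: x
f_negative = lambda x: min(x, 0.0)
```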

Consequences

We can be prioritarian or negative-leaning and still avoid overriding individual interests; we don't give greater weight to the bad over the good in any individual's life, but we give greater weight to bad lives over good lives. Personal and interpersonal tradeoffs would be treated differently. You would be permitted, under an ex ante prioritarian or negative-leaning view, to choose great suffering together with great bliss, or to risk great suffering for great bliss, all within one individual, but you can't impose great suffering on one individual to give great bliss to another (depending on the exact form of the social welfare function).

Let's look at an illustrative example where the ex ante view disagrees with the usual ("ex post") one, taken from "Prioritarianism and the Separateness of Persons" by Michael Otsuka (2012):

Two-person case with risk and inversely correlated outcomes: There are two people, each of whom you know will develop either the very severe or the slight impairment and each of whom has an equal chance of developing either impairment. You also know that their risks are inversely correlated: i.e., whenever one of them would suffer the very severe impairment, then the other would suffer the slight impairment. You can either supply both with a treatment that will surely improve a recipient's situation if and only if he turns out to suffer the very severe impairment or supply both with a treatment that will surely improve a recipient's situation if and only if he turns out to suffer the slight impairment. An effective treatment for the slight impairment would provide a somewhat greater increase in utility than would an effective treatment for the very severe impairment.

An ex ante prioritarian would choose to treat the slight impairment, while the usual (ex post) prioritarian who does not first aggregate or take expectations over the individual would choose to treat the very severe impairment. From the point of view of each individual, treating the slight impairment would be preferable.
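As a numeric illustration of this disagreement, here is a small sketch with stand-in utilities of my own choosing (not Otsuka's numbers): say the very severe impairment leaves a person at utility 10 and the slight one at 50, treating the severe impairment adds 10, and treating the slight one adds a somewhat larger 15.

```python
import math

f = math.sqrt  # a concave prioritarian weighting, assumed for illustration

# Hypothetical utilities (stand-in numbers): 10 with the very severe
# impairment, 50 with the slight one; treating the severe impairment adds 10,
# treating the slight one adds a somewhat larger 15.
def outcomes(treat):
    """Two equally likely states: in each, one person has the very severe
    impairment and the other the slight one. Returns [(prob, (u_A, u_B)), ...]."""
    def u(impairment):
        base = 10.0 if impairment == "severe" else 50.0
        gain = (10.0 if impairment == "severe" else 15.0) if impairment == treat else 0.0
        return base + gain
    return [(0.5, (u("severe"), u("slight"))),   # A severely impaired
            (0.5, (u("slight"), u("severe")))]   # B severely impaired

for treat in ("severe", "slight"):
    outs = outcomes(treat)
    expected = [sum(p * u[i] for p, u in outs) for i in range(2)]  # per-person E[u]
    ex_ante = sum(f(e) for e in expected)                          # f of expectations
    ex_post = sum(p * sum(f(ui) for ui in u) for p, u in outs)     # E of f
    print(f"treat {treat}: ex ante = {ex_ante:.2f}, ex post = {ex_post:.2f}")

# With sqrt: the ex ante value favours treating the slight impairment
# (12.25 vs 11.83), while the ex post value favours treating the very
# severe one (11.54 vs 11.22).
```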

EDIT: Here's an example which might seem pretty weird to some but also a bit intuitive to others:

Suppose there are two people who are equally well off, and you are considering benefitting exactly one of them by a fixed given amount (the amount of benefit would be the same regardless of who receives it).

Then, if you are an ex ante prioritarian, it would be better to choose one to benefit at random than to use a deterministic rule to choose one. However, the actual outcome will be the same, up to swapping the two individuals' utilities.
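To spell this out (a quick sketch using the additive form above, with symbols of my own): if both start at utility $w$ and the benefit is $b$, choosing a recipient deterministically gives $f(w+b) + f(w)$, while choosing at random gives $2 f(w + b/2)$ in ex ante terms, and for strictly concave $f$ the latter is larger, even though the realized utility profile is the same either way.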


For what it's worth, under empty individualism (the view that one physical person over time should really be treated as a sequence of distinct individuals from moment to moment, person-moments), applying this ex ante modification actually doesn't make any difference. It'll look like we're overriding preferences, but under empty individualism, there are only interpersonal tradeoffs, no personal tradeoffs. See also "Does Negative Utilitarianism Override Individual Preferences?" by Brian Tomasik.


References and other reading

"Prioritarianism and the Separateness of Persons" by Michael Otsuka (2012) describes this approach, and gives examples to raise objections to prioritarianism and ex ante prioritarianism.

That issue of Utilitas is focused on prioritarianism, with a paper by Parfit which also discusses ex ante views (I have yet to read it).

"Decide As You Would With Full Information! An Argument Against Ex Ante Pareto" by Marc Fleurbaey and Alex Voorhoeve (2013) cited in "Prioritairanism and the Separateness of Persons", has more criticism of ex ante views.

Toby Ord's objections to prioritarianism and negative utilitarianism which do not apply to the ex ante view:

"Why I'm Not a Negative Utilitarian"

"A New Counterexample to Prioritarianism" (2015)

Comments

This is an interesting idea that sands off some of the unfortunate Pareto-suboptimal edges of prioritarianism. But it has some problems.

Ex-ante prioritarianism looks good in the example cases given, where it gives an answer that disagrees with regular prioritarianism but agrees with utilitarianism. However, the cases where ex-ante prioritarianism disagrees with both are less appealing.

For instance, consider an extension of your experiment:

Suppose there are two people who are equally well off, and you are considering benefitting exactly one of them by a fixed given amount (the amount of benefit would be the same regardless of who receives it).

Suppose there are two people, A and B, who are equally well off with utility 100. Suppose we have the choice between two options. In Lottery 1, A gets a benefit of 100 with certainty, while B gets nothing. In Lottery 2, A gets 50 with probability 0.4, B gets 50 with probability 0.4, or no one gets anything (probability 0.2).

Prioritarianism prefers Lottery 1 to Lottery 2, since one person having a welfare of 100 and the other a welfare of 200 is preferred to an 80% chance of one person at 150 and the other at 100, and a 20% chance of (100, 100).

Utilitarianism of course prefers the outcome with expected utility 300 to the outcome with expected utility 240.

But a sufficiently concave ex-ante prioritarianism prefers Lottery 2 because B's lower expected value in Lottery 1 is weighted more highly.

It seems perverse to prefer an outcome which is with certainty worse both on utilitarian and prioritarian grounds just to give B a chance to be the one who is on top.
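To check these verdicts numerically, here is a short sketch; the choice of f(x) = -x^-10 as one "sufficiently concave" function, and the representation of the lotteries, are just illustrative assumptions.

```python
# Each lottery is a list of (probability, (utility_A, utility_B)) outcomes,
# with both people starting at 100.
f = lambda x: -x ** -10.0  # a very concave weighting, chosen for illustration

lottery_1 = [(1.0, (200.0, 100.0))]
lottery_2 = [(0.4, (150.0, 100.0)), (0.4, (100.0, 150.0)), (0.2, (100.0, 100.0))]

def verdicts(lottery):
    exp = [sum(p * u[i] for p, u in lottery) for i in range(2)]   # per-person E[u]
    total = sum(exp)                                              # utilitarian value
    ex_post = sum(p * (f(u[0]) + f(u[1])) for p, u in lottery)    # E of f
    ex_ante = f(exp[0]) + f(exp[1])                               # f of E
    return total, ex_post, ex_ante

for name, lot in [("Lottery 1", lottery_1), ("Lottery 2", lottery_2)]:
    print(name, verdicts(lot))

# The utilitarian and ex post prioritarian values are higher for Lottery 1;
# the ex ante value is higher (less negative) for Lottery 2.
```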

I won't say I'm convinced by my own responses here, but I'll offer them anyway.

I think B could reasonably claim that Lottery 1 is less fair to them than Lottery 2, while A could not claim that Lottery 2 is less fair to them than Lottery 1 (it benefits them less in expectation, but this is not a matter of fairness). This seems a bit clearer with the understanding that von Neumann-Morgenstern rational agents maximize expected (ex ante) utility, so an individual's ex ante utility could matter to that individual in itself, and an ex ante view respects this. (And I think the claim that ex post prioritarianism is Pareto-suboptimal may only be meaningful in the context of vNM-rational agents; the universe doesn't give us a way to make tradeoffs between happiness and suffering (or other values) except through individual preferences. If we're hedonistic consequentialists, then we can't refer to preferences or the veil of ignorance to justify classical utilitarianism over hedonistic prioritarianism.)

Furthermore, if you imagine repeating the same lottery with the same individuals and independent probabilities over and over, then in the long run, under Lottery 1, A would benefit by 100 on average and B by 0 on average, while under Lottery 2, A and B would each benefit by 20 on average. On these grounds, a prioritarian could reasonably prefer Lottery 2 to Lottery 1. Of course, an ex post prioritarian would come to the same conclusion if they're allowed to consider the whole sequence of independent lotteries and aggregate each individual's utilities across the lotteries before aggregating over individuals.

(On the other hand, if you repeat Lottery 1, but swap the positions of A and B each time, then Lottery 1 benefits A by 50 on average and B by 50 on average, and this is better than Lottery 2. The utilitarian, ex ante prioritarian and ex post prioritarian would all agree.)

A similar problem is illustrated in "Decide As You Would With Full Information! An Argument Against Ex Ante Pareto" by Marc Fleurbaey & Alex Voorhoeve (I read parts of this after I wrote the post). You can check Table 1 on p.6 and the surrounding discussion. I'm changing the numbers here. EDIT: I suppose the examples can be used to illustrate the same thing (except the utilitarian preference for Lottery 1): Ex post you prefer Lottery 1 and would realize you'd made a mistake, and if you find out ahead of time exactly which outcome Lottery 2 would have given, you'd also prefer Lottery 1 and want to change your mind.

Suppose there are two diseases, SEVERE and MILD. An individual with SEVERE will have utility 10, while an individual with MILD will have utility 100. If SEVERE is treated, the individual will instead have utility 20, a gain of 10. If MILD is treated, the individual will instead have utility 120, a gain of 20.
Now, suppose there are two individuals, A and B. One will have SEVERE, and the other will have MILD. You can treat either SEVERE or MILD, but not both. Which should you treat?
1. If you know who will have SEVERE with certainty, then with a sufficiently prioritarian view, you should treat SEVERE. To see why, suppose you know A has SEVERE. Then, by treating SEVERE, the utilities would be (20, 100) for A and B, respectively, but by treating MILD, they would be (10, 120). (20, 100) is better than (10, 120) if you're sufficiently prioritarian. Symmetrically, if you know B has SEVERE, you get (100, 20) for treating SEVERE or (120, 10) for treating MILD, and again it's better to treat SEVERE.
2. If you think each will have SEVERE or MILD with probability 0.5 each (and one will have SEVERE and the other, MILD), then you should treat MILD. This is because the expected utility if you treat MILD is (10+120)*0.5 = 65 for each individual, while the expected utility if you treat SEVERE is (20+100)*0.5 = 60 for each individual. Treating MILD is ex ante better than treating SEVERE for each of A and B. If neither of them knows who has which, they'd both want you to treat MILD.
What's the difference from your point of view between 1 and 2? Extra information in 1. In 1, whether you find out that A will have SEVERE or B will have SEVERE, it's better to treat SEVERE. So, no matter which you learn is the case in reality, it's better to treat SEVERE. But if you don't know, it's better to treat MILD.

So, in your ignorance, you would treat MILD, but if you found out who had SEVERE and who had MILD, no matter which way it goes, you'd realize you had made a mistake. You also know that seeking out this information of who has which ahead of time, no matter which way it goes, will cause you to change your mind about which disease to treat. EDIT: I suppose both of these statements are true of your example. Ex post you prefer Lottery 1 and would realize you'd made a mistake, and if you find out ahead of time exactly which outcome Lottery 2 would have given, you'd also prefer Lottery 1.
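Here is a quick sketch of both cases with the numbers above, using sqrt as one "sufficiently prioritarian" f (that choice is an assumption for illustration):

```python
import math

f = math.sqrt  # concave enough for this example (an assumption)

# Utilities from the example: SEVERE -> 10 (20 if treated),
# MILD -> 100 (120 if treated).
def profile(a_has_severe, treat_severe):
    u_severe = 20.0 if treat_severe else 10.0
    u_mild = 100.0 if treat_severe else 120.0
    return (u_severe, u_mild) if a_has_severe else (u_mild, u_severe)

# Case 1: you know A has SEVERE (by symmetry, the same holds if B does).
for treat_severe in (True, False):
    u = profile(True, treat_severe)
    print("known, treat", "SEVERE" if treat_severe else "MILD",
          round(f(u[0]) + f(u[1]), 2))
# sqrt(20)+sqrt(100) = 14.47 beats sqrt(10)+sqrt(120) = 14.12: treat SEVERE.

# Case 2: 50/50 who has SEVERE; take each person's expectation first.
for treat_severe in (True, False):
    outs = [(0.5, profile(True, treat_severe)), (0.5, profile(False, treat_severe))]
    exp = [sum(p * u[i] for p, u in outs) for i in range(2)]
    print("unknown, treat", "SEVERE" if treat_severe else "MILD",
          round(f(exp[0]) + f(exp[1]), 2))
# Expected utilities are (60, 60) vs (65, 65), so any increasing f says treat MILD.
```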

Interesting ideas. :)

If I understand the view correctly, it would say that a world where everyone has a 49.99% chance of experiencing pain with utility of -10^1000 and a 50.01% chance of experiencing pleasure with utility of 10^1000 is fine, but as soon as anyone's probability of the pain goes above 50%, things start to become very worrisome (assuming the prioritarian weighting function cares a lot more about negative than positive values)? This is despite the fact that in terms of realized outcomes, the difference between one person having 49.99% chance of the pain vs 50.01% is pretty minimal.

What probability distribution are the expectations taken with respect to? If you were God and knew everything that would happen, there would be no uncertainty (except maybe due to quantum randomness depending on one's view about that). If there's no randomness, I think ex ante prioritarianism collapses to regular prioritarianism.

One issue is how you decide whether a given person exists in a given history or not. For example, if I had been born with a different hair color, would I be the same person? Maybe. How about a different personality? At what point do "I" stop existing and someone else starts existing? I guess similar issues bedevil the question of whether a person stays the same person over time, though there we can also use spatiotemporal continuity to help maintain personal identity.

One issue is how you decide whether a given person exists in a given history or not. For example, if I had been born with a different hair color, would I be the same person? Maybe. How about a different personality? At what point do "I" stop existing and someone else starts existing? I guess similar issues bedevil the question of whether a person stays the same person over time, though there we can also use spatiotemporal continuity to help maintain personal identity.

Here are some interesting examples I thought of. If I rearranged someone's brain cells (and maybe atoms) to basically make a (possibly) completely different brain structure with (possibly) completely different memories and personality, should we consider these different individuals? Consider the following cases:

1. What if all brain function stops, I rearrange their brain, and then brain function starts again?

2. What if all brain function stops, I rearrange their brain to have the same structure (and memories and personality), but with each atom/cell in a completely different area from where it started, and then brain function starts again?

3. What if all brain function stops, the cells and atoms move or change as they naturally would without my intervention, and then brain function starts again?

To me, 1 clearly brings about a completely different individual, and unless we're willing to say that two physically separate people with the same brain structure, memories and personality are actually one individual, I think 2 should also bring about a completely different individual. 3 differs from 1 and 2 only by degree of change, so I think it should bring about a completely different individual, too.

What this tells me is that if we're going to use some kind of continuity to track identity at all, it should also include continuity of conscious experiences. Then we have to ask:

Are there frequent (e.g. daily) discontinuities or breaks in a person's conscious experiences?

Whether there are or not, should our theory of identity even depend on this fact? If it happened to be the case that sleep involved such discontinuities/breaks and people woke up as completely different individuals, would our theory of identity be satisfactory?

Maybe a way around this is to claim that there are continuous degrees of identification between a person at different moments in their life, e.g. me now and me in a week are only 99% the same individual. I'm not sure how we could do ethical calculus with this, though.

Thought experiments like these are why I regard personal identity, and any moral theories that depend on it, as non-starters (including versions of prioritarianism that consider lifetime wellbeing collectively). I think it's best to think either in terms of empty individualism or open individualism. Empty individualism tends to favor suffering-focused views because any given moment of unbearable suffering can't be compensated by other moments of pleasure even within what we normally call the same individual, because the pleasure is actually experienced by a different individual. Open individualism tends to undercut suffering-focused intuitions by saying that torturing one person for the happiness of a billion others is no different than one person experiencing pain for later pleasure.

As others have pointed out before, it is legitimate to try to salvage some ethical concern for personal identity despite the paradoxes. By analogy, the idea of consciousness has many paradoxes, but I still try to salvage it for my ethical reasoning. Neither personal identity nor consciousness "actually exists" in any deep ontological sense, but we can still care about them. It's just that I happen not to care ethically about personal identity.

If I understand the view correctly, it would say that a world where everyone has a 49.99% chance of experiencing pain with utility of -10^1000 and a 50.01% chance of experiencing pleasure with utility of 10^1000 is fine, but as soon as anyone's probability of the pain goes above 50%, things start to become very worrisome (assuming the prioritarian weighting function cares a lot more about negative than positive values)?

Yes, although it's possible that a single individual even having a 100% probability of pain might not outweigh the pleasure of the others, if the number of other individuals is large enough and the social welfare function is sufficiently continuous and "additive", e.g. it takes the form $\sum_i f(\bar{u}_i)$ for $f$ strictly increasing everywhere.
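For instance (with illustrative numbers of my own): take $f(x) = x$ for $x \geq 0$ and $f(x) = 10x$ for $x < 0$, which is increasing and concave. One person certain of utility $-10$ contributes $f(-10) = -100$, while a thousand others each expecting $+1$ contribute $+1000$, so the total remains positive.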

What probability distribution are the expectations taken with respect to? If you were God and knew everything that would happen, there would be no uncertainty (except maybe due to quantum randomness depending on one's view about that). If there's no randomness, I think ex ante prioritarianism collapses to regular prioritarianism.

I intended for your own subjective probability distribution to be used, but what you say here leads to some more weird examples (besides collapsing to regular prioritarianism, possibly while aggregating actual utilities within each individual first before aggregating across them):

I've played a board game where the player who gets to go first is the one who has the pointiest ears. The value of this outcome would be different if you knew ahead of time who this would be than if you didn't. In particular, if there were a morally significant tradeoff between utilities, then this rule could be better or worse than a more (subjectively) random choice, depending on whether the worse-off players are expected to benefit more or less. Of course, for utilitarians too, a random selection could be better or worse than one whose actual outcome you know in advance, but there are some differences.

For ex ante prioritarianism, this is also the case before versus after you learn the outcome of the rolls of dice or coin flips; once you know the outcome of the random selection, it's no longer random, and the value of following through with it changes. In particular, if each person had the same wellbeing before the rolls of the dice and stood to gain or lose the same amount if they won (regardless of the selection process), then random selection would be optimal and better than any fixed selection whose outcome you know in advance, but once you know the outcome of the random selection process, before you apply it, it reduces to using a particular rule whose outcome you know in advance.

One issue is how you decide whether a given person exists in a given history or not. For example, if I had been born with a different hair color, would I be the same person? Maybe. How about a different personality? At what point do "I" stop existing and someone else starts existing? I guess similar issues bedevil the question of whether a person stays the same person over time, though there we can also use spatiotemporal continuity to help maintain personal identity.

Yes, I think it's basically the same issue. If we can use something like spatiotemporal continuity (I am doubtful that this can be made precise and coherent enough in a way that's very plausible), then we could start before a person is even conceived. Right before conception, the sperm cells and ova could be used to determine the identities of the potential future people. Before the sperm cell used in conception even exists, you could imagine two sperm cells with different physical (spatiotemporal) origins in different outcomes that happen to carry the same genetic information, and you might consider the outcomes in which one is used to have a different person than the outcomes in which the other is. Of course, you might have to divide up these two groups of outcomes further still. For example, you wouldn't want to treat identical twins as a single individual, even if they originated from some common group of cells.

your own subjective probability distribution to be used

Would that penalize people who hold optimistic beliefs? Their expected utilities would often be pretty high, so it'd be less important to help them. As an extreme example, someone who expects to spend eternity in heaven would already be so well off that it would be pointless to help him/her, relative to helping an atheist who expects to die at age 75. That's true even if the believer in heaven gets a terminal disease at age 20 and dies with no afterlife.

Sorry that was unclear: I meant the subjective probabilities of the person using the ethical system ("you") applied to everyone, not using their own subjective probabilities.

Allowing each individual to use their own subjective probabilities would be interesting, and have problems like you point out. It could respect individual autonomy further, especially for von Neumann-Morgenstern rational agents with vNM utility as our measure of wellbeing; we would rank choices for them (ignoring other individuals) exactly as they would rank these choices for themselves. However, I'm doubtful that this would make up for the issues, like the one you point out. Furthermore, many individuals don't have subjective probabilities about most things that would be important for ethical deliberation in practical cases, including, I suspect, most people and all nonhuman animals.

Another problematic example would be healthcare professionals (policy makers, doctors, etc.) using the subjective probabilities of patients instead of subjective probabilities informed by actual research (or even their own experience as professionals).

I meant the subjective probabilities of the person using the ethical system ("you") applied to everyone, not using their own subjective probabilities.

I see. :) It seems like we'd still have the same problem as I mentioned. For example, I might think that currently elderly people signed up for cryonics have very high expected lifetime utility relative to those who aren't signed up because of the possibility of being revived (assuming positive revival futures outweigh negative ones), so helping currently elderly people signed up for cryonics is relatively unimportant. But then suppose it turns out that cryopreserved people are never actually revived.

(This example is overly simplistic, but the point is that you can get similar scenarios as my original one while still having "reasonable" beliefs about the world.)

Tbh, I find this fairly intuitive (under the assumption that something like closed individualism is true and cryonics would preserve identity). You can think about it like decreasing marginal value of expected utility (compare the decreasing marginal value of income/wealth), so people who have higher expected utility for their lives should be given (slightly) less weight.

If they do eventually get revived, and we had spent significant resources on them, this could mean we prioritized the wrong people. We could be wrong either way.

We could be wrong either way.

Good point, but I feel like ex post prioritarianism does the allocation better, by being risk-averse (even though this is what Ord criticizes about it in the 2015 paper you cited in the OP). Imagine that someone has a 1/3^^^3 probability of 3^^^^3 utility. Ex ante prioritarianism says the expected utility is so enormous that there's no need to benefit this person at all, even if doing so would be almost costless. Suppose that with probability 1 - 1/3^^^3, this person has a painful congenital disease, grows up in poverty, is captured and tortured for years on end, and dies at age 25. Ex ante prioritarianism (say with a sqrt or log function for f) says that if we could spend $0.01 to prevent all of that suffering, we needn't bother, because other uses of the money would be more cost-effective, even though it's basically guaranteed that this person's life will be nothing but horrible. Ex post prioritarianism gets what I consider the right answer, because the reduction of torment is not buried into nothingness by the f function: the expected-value calculation weighs two different scenarios, to each of which the f function is applied separately.

I guess an ex ante supporter could say that if someone chooses the 1/3^^^3 gamble and it doesn't work out, that's the price you pay for taking the risk. But that stance feels pretty harsh.
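To see the contrast with finite stand-in numbers (the 1/3^^^3 and 3^^^^3 figures are too large to compute with, so these numbers and the signed-square-root f are my own illustrative assumptions, not anything from the comment above):

```python
import math

# A signed square root as a concave f that handles negative utilities
# (an assumed extension of sqrt, for illustration only).
f = lambda x: math.copysign(math.sqrt(abs(x)), x)

p_bliss, u_bliss = 1e-9, 1e18      # tiny chance of an astronomically good life
u_bad, u_bad_helped = -100.0, 0.0  # otherwise a terrible life, cheaply preventable

def values(helped):
    u_low = u_bad_helped if helped else u_bad
    exp_u = p_bliss * u_bliss + (1 - p_bliss) * u_low
    ex_ante = f(exp_u)                                         # f of the expectation
    ex_post = p_bliss * f(u_bliss) + (1 - p_bliss) * f(u_low)  # expectation of f
    return ex_ante, ex_post

gain_ex_ante = values(True)[0] - values(False)[0]
gain_ex_post = values(True)[1] - values(False)[1]
print(gain_ex_ante, gain_ex_post)

# The ex ante gain from helping is ~0.0016 (swamped by the huge expected
# utility), while the ex post gain is ~10, so the ex post view still
# registers preventing the near-certain terrible life.
```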

I agree that this feels too harsh. My first reaction to the extreme numbers would be to claim that expected values are actually not the right way to deal with uncertainty (without offering a better alternative). I think you could use a probability of 0.1 for an amazing life (even infinitely good), and I would arrive at the same conclusion: giving them little weight is too harsh. Because this remains true in my view no matter how great the value of the amazing life, I do think this is still a problem for expected values, or at least expected values applied directly to affective wellbeing.

I also do lean towards a preference-based account of wellbeing, which allows individuals to be risk-averse. Some people are just not that risk-averse, and (if something like closed individualism were true and their preferences never changed), giving greater weight to worse states is basically asserting that they are mistaken for not being more risk-averse. However, I also suspect most people wouldn't value anything at values 3^^^^3 (or -3^^^^3, for that matter) if they were vNM-rational, and most of them are probably risk-averse to some degree.

Maybe ex ante prioritarianism makes more sense with a preference-based account of wellbeing?

Also, FWIW, it's possible to blend ex ante and ex post views. An individual's actual utility (treated as a random variable) and their expected utility could be combined in some way (weighted average, minimum of the two, etc.) before aggregating and taking the expected value. This seems very ad hoc, though.
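As a minimal sketch of one such blend (my own construction, admittedly ad hoc, reusing the outcome representation from the earlier sketch and ignoring non-existence for simplicity):

```python
# Within each outcome, combine each person's realized utility u with their
# expected utility eu (here via min), apply f, then take the expectation
# over outcomes.
def blended_value(outcomes, probs, f, combine=min):
    eu = {}
    for outcome, p in zip(outcomes, probs):
        for person, u in outcome.items():
            eu[person] = eu.get(person, 0.0) + p * u  # E[u], everyone assumed to exist
    return sum(p * sum(f(combine(u, eu[person])) for person, u in outcome.items())
               for outcome, p in zip(outcomes, probs))
```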

Interesting. :)

giving greater weight to worse states is basically asserting that they are mistaken for not being more risk-averse.

I was thinking that it's not just a matter of risk aversion, because regular utilitarianism would also favor helping the person with a terrible life if doing so were cheap enough. The perverse behavior of the ex ante view in my example comes from the concave f function.
