This is a thought experiment designed to help clarify different aspects of one's value system relating to intertemporal decision making under risk, with large effects on different forms of inequality and fairness: ex ante inequality ("unfairness"), ex post intragenerational interpersonal inequality, and ex post intergenerational inequality. As with every thought experiment, it is highly stylized and unrealistic, so as to focus clearly on the aspects under investigation.

The thought experiment is the following: Assume you are a decision maker who faces a choice between five possible interventions (e.g., different public health policies), only one of which you can enact. Each intervention would have severe consequences for everyone on Earth in the current and the next generation. Let us assume these two generations are roughly the same size and no later generations are affected.

An intervention's effects may differ between the homogametic/XX (mostly female) and heterogametic/XY (mostly male) subpopulations, but it will affect everyone within a given subpopulation in the same way, measurable as a per-person gain or loss in WELLBYs (or, if you prefer, QALYs or DALYs).

The effects of all but one of the interventions are uncertain and depend on a "coin toss by nature" (landing heads or tails with roughly equal probability) whose outcome will only become known after the intervention is completed:

  • If intervention A "succeeds", it gives everyone 10 additional WELLBYs; if it "fails", it costs everyone 5 WELLBYs.
  • Intervention B gives everyone in one generation 10 additional WELLBYs at the cost of 5 WELLBYs for everyone in the other generation. Nature decides which generation is the winning one.
  • Similarly, C gives 10 additional WELLBYs either to every XX person in both generations or to every XY person in both generations; nature decides which. Everyone else loses 5 WELLBYs.
  • D either makes the XX subpopulation of generation 1 and the XY subpopulation of generation 2 the winners, or makes the XXs of generation 2 and the XYs of generation 1 the winners. Again, nature decides which of the two scenarios applies, and the losers again lose 5 WELLBYs.
  • Intervention E is the only one whose effects are certain: it gives each member of the current generation 10 WELLBYs and costs everyone in the next generation 5 WELLBYs.

The following table summarizes the effects of all five interventions:

coin lands:     heads                   tails
generation:     current     next        current     next
subpopulation:   XX    XY    XX    XY    XX    XY    XX    XY
intervention A  +10   +10   +10   +10    -5    -5    -5    -5
intervention B  +10   +10    -5    -5    -5    -5   +10   +10
intervention C  +10    -5   +10    -5    -5   +10    -5   +10
intervention D  +10    -5    -5   +10    -5   +10   +10    -5
intervention E  +10   +10    -5    -5   +10   +10    -5    -5

Table 1: Gains and losses in WELLBYs (or, if you prefer, QALYs or DALYs) per person of five different hypothetical interventions with uncertain effects, by generation and subpopulation.

As you can see, for all you know, all five interventions give the same expected net gain of +2.5 WELLBYs per person, averaged over both generations' total population: under A, B, C and D each person gains 10 or loses 5 with equal probability, and under E half the population gains 10 for certain while the other half loses 5 for certain, so the population-averaged expectation is 0.5 × 10 + 0.5 × (-5) = +2.5 in every case.
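For readers who prefer code to tables, here is a minimal Python sketch that recomputes the expected per-person gain of each intervention directly from Table 1, encoding each coin outcome as the per-person effects on the four equally sized subgroups:

```python
# A quick check of the "+2.5 expected WELLBYs per person" claim.
# Each tuple lists the per-person effect on (current XX, current XY,
# next XX, next XY) for the two equally likely coin outcomes.

interventions = {
    "A": {"heads": (+10, +10, +10, +10), "tails": (-5, -5, -5, -5)},
    "B": {"heads": (+10, +10, -5, -5),   "tails": (-5, -5, +10, +10)},
    "C": {"heads": (+10, -5, +10, -5),   "tails": (-5, +10, -5, +10)},
    "D": {"heads": (+10, -5, -5, +10),   "tails": (-5, +10, +10, -5)},
    "E": {"heads": (+10, +10, -5, -5),   "tails": (+10, +10, -5, -5)},
}

for name, outcomes in interventions.items():
    # Average over the four equally sized subgroups, then over the two
    # equally likely coin outcomes.
    expected = sum(sum(effects) / 4 for effects in outcomes.values()) / 2
    print(f"{name}: {expected:+.1f} WELLBYs per person in expectation")
# All five interventions print +2.5.
```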

Still, I suspect that you (like me) might find some of the interventions clearly or at least tentatively preferable to some others on the list, while between certain other pairs you might be undecided. Such preferences might relate to how the interventions differ in risk, in inter- and intragenerational inequality, in equal or unequal ex ante chances (fairness), and in the intertemporal distribution of gains and losses.

In order to clarify your values, you might want to first think about each of the ten pairs (A vs. B, A vs. C, B vs. C, ...) separately and see whether one of the two seems clearly preferable, whether both seem equally desirable, or whether neither of these holds (i.e., the pair seems incomparable). It might be helpful to do this exercise first without applying any formal aggregation formula (such as a nonlinear welfare function).

Once you have written down your pairwise preference relation (which might turn out to be anything from a full ranking to a very incomplete, perhaps even cyclic, relation), you might then want to think about how the listed WELLBY quantities could be aggregated into an overall evaluation of each intervention (a "utility function") that is consistent with your pairwise preferences.
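To make that task concrete, here is a small sketch of a few candidate aggregation rules and how they score the five interventions; the specific functional forms, and the "+20" baseline in the risk-averse rule, are merely illustrative assumptions, not something the thought experiment prescribes. Note that any rule that simply sums a fixed individual utility function over persons and takes the expectation (assuming equal baselines) scores all five interventions identically, because every individual faces the same 50/50 prospect of +10 or -5 (or, under E, half the population gets each outcome for certain); distinguishing them therefore requires some non-separable criterion like the ones below.

```python
import math

# The same `interventions` table as in the sketch above (repeated so this
# snippet runs on its own); tuples are (current XX, current XY, next XX, next XY).
interventions = {
    "A": {"heads": (+10, +10, +10, +10), "tails": (-5, -5, -5, -5)},
    "B": {"heads": (+10, +10, -5, -5),   "tails": (-5, -5, +10, +10)},
    "C": {"heads": (+10, -5, +10, -5),   "tails": (-5, +10, -5, +10)},
    "D": {"heads": (+10, -5, -5, +10),   "tails": (-5, +10, +10, -5)},
    "E": {"heads": (+10, +10, -5, -5),   "tails": (+10, +10, -5, -5)},
}

def mean(xs):
    return sum(xs) / len(xs)

def expect(score, outcomes):
    """Average a per-outcome score over the two equally likely coin results."""
    return sum(score(effects) for effects in outcomes.values()) / 2

def expected_total(outcomes):
    """Risk- and inequality-neutral expected average gain: +2.5 for all five."""
    return expect(mean, outcomes)

def risk_averse(outcomes):
    """Concave transform of the whole-population average in each outcome
    (the +20 baseline is an arbitrary assumption to keep arguments positive).
    Ranks B, C, D and E above A."""
    return expect(lambda e: math.sqrt(mean(e) + 20), outcomes)

def worst_generation(outcomes):
    """Expected average outcome of whichever generation fares worse ex post.
    Penalizes B and E."""
    return expect(lambda e: min(mean(e[:2]), mean(e[2:])), outcomes)

def worst_subpopulation(outcomes):
    """Expected average outcome of whichever subpopulation (XX vs. XY, pooled
    across both generations) fares worse ex post. Penalizes only C, and
    thereby prefers D to C."""
    return expect(lambda e: min(mean(e[0::2]), mean(e[1::2])), outcomes)

rules = (expected_total, risk_averse, worst_generation, worst_subpopulation)
for name, outcomes in interventions.items():
    print(name, [round(rule(outcomes), 2) for rule in rules])
```

These are only examples, of course; the exercise is precisely about finding out which, if any, of such rules matches your pairwise intuitions.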

(In my case, I have a hard time with the latter task, since my preference relation seems to contain two incomparable pairs: I can't decide between B and C, and I can't decide between B and D, but I clearly prefer D to C.)

As a last twist, you might also think about whether and how your preferences would change if "+10" were replaced by "lives to the age of 150" and "-5" by "dies during infancy", while "XX" and "XY" were replaced by "people with gene Z" and "people without gene Z", where Z is a hypothetical gene occurring in roughly half the population and not correlated in any obvious way with phenotype.

I'm very curious about your comments and preference disclosures!

PS: Reacting to a comment by Roman (see below), I note that the "coin toss by nature" need not be interpreted as aleatoric uncertainty that only manifests after your choice; it can of course also be interpreted as a purely subjective ("Bayesian") probability representing your limited knowledge about some relevant aspects of the laws of nature that are already fixed before your choice.

Comments

Still, I suspect that you (like me) might find some of the interventions clearly or at least tentatively preferable to some others from the list

FYI, the way the table was presented, I found it very hard to form such intuitions. E seems a bit better, maybe because of risk aversion, but worse because there is a downward trajectory? But it is hard to distinguish the others.

Thank you for this hint, @Larks. Would you be willing to suggest a different way of presenting the table that would have made it easier? For now, I will add some verbal descriptions to the text.

The "nature coin" complicates this experiment a lot. Also, it sounds like a source of inherent randomness in policy's outcomes, i.e., aleatoric uncertainty, which is perhaps rarely or never the case for actual policies and therefore evaluating such policies ethically is unnatural: the brain is not trained to do this. When people discuss and think about the ethics of policies, even the epistemic uncertainty is often assumed away, though it is actually very common that we don't know whether a potential policy will turn out good or bad.

Due to this, I would say I have a preference for intervention E, because it's the only one that doesn't actually depend on the "nature coin".

Would I be right in interpreting your choice for E as a strong form of risk aversion that is stronger than any fairness considerations? Because E is also the only option that does not give everyone the same ex ante chance of belonging to the winners.

Thanks, @Roman, for your remark regarding the interpretation of the coin toss. I have added a PS to the post in reaction.

I can't turn this into a utility function because there's too much agnosticism (and I think human utility functions are fictitious anyway). I will say that my preferences seem to be guided not only by a desire for intergenerational equality, but also for intergenerational agency.

If I'm a decision maker, I'm going to consult all the relevant parties, but I can't do that for the next generation. The next generation gets no say in the matter and yet feels the consequences just as vividly. There is no option where the next generation is (ex ante) better off than the current generation, but there is an option where they're worse off (E). E is (imo) the worst option, and if there were an opposite to E (a guaranteed -5 for this generation in exchange for a guaranteed +10 for the next), I would consider that the best option.

Notice that the intergenerational inequality is the same in both cases, but because the next generation has no agency, I actually want there to be inequality (in their favor) as a kind of compensation.
I think this extends to other moral decision processes too. Whenever a party can't consent to a decision (because they're in the future, far away, don't understand me, ...), I'm inclined to make the payoff more unequal in their favor as a rectification.

EDIT: Maybe we can construct other thought experiments to see to what degree agency has value. Clearly we value it in ourselves (e.g., people pay for it) and in others (e.g., people die for democracy), but to what extent? If I have 100% agency in a situation, I feel that is valuable enough to compensate those without it with some WELLBYs, even though I dislike inequality. What happens if we shift the parameters (e.g., more total wellbeing with less equality in wellbeing, but less total agency with more equal agency)?

I get the sense that I value equality in agency more than equality in WELLBYs. I think I also value total agency (without increasing agency inequality) more than WELLBY equality. If we add risk aversion over these parameters, it seems I am more risk-averse about creating unequal agency than about creating unequal WELLBYs. For the other payoffs it's hard to say. Do other people have the same inclination? It might be interesting to create more thought experiments.

Another hypothesis is that I don't value agency but rather want to minimize blame. But if I construct the thought experiments such that no one knows I made the decision, I still feel the same way, so it can't be blame.
Maybe it's not blame but blameworthiness, or maybe responsibility. However, responsibility and blameworthiness are entwined with agency, so this might not be a useful distinction. If you have a thought experiment that untangles them, please let me know.
