
Maximal cluelessness is a Global Priorities Institute Working Paper by Andreas Mogensen. This post is part of my sequence of GPI Working Paper summaries.

If you’d like a very brief summary, skip to “Brief summary.”

Introduction

Suppose you may choose to donate to the Against Malaria Foundation (AMF), which is estimated to save a child’s life for every ~$5,000 received, or to Make-A-Wish, which grants an ill child’s wish for ~$7,500 on average. If you care about maximizing the impartial good brought about by your donation, the decision might seem obvious. But Mogensen argues it isn’t, and that, more broadly, many EA priority rankings are in tension with reasonable assumptions about rational decision-making under uncertainty.

Cluelessness

If we choose actions based on their consequences, we have to consider all possible consequences—some of which stretch into the far future. As long as we don’t discount future value, future consequences dominate the differences in expected value between choices, meaning they determine our choices. The problem: we are clueless about our actions’ long-term consequences.

Mogensen anticipates a Naive Response to this: Cluelessness doesn’t impede our rational decision-making because we can still maximize expected value—we simply account for uncertainty by assigning probabilities.

He calls this “the Naive Response” because it doesn’t take the depth of our uncertainty seriously—our evidence often doesn’t even allow us to assign precise probabilities.

For example, consider the AMF. Our evidence suggests saving children’s lives may indirectly affect population size, but we are uncertain about how. Worse, we are uncertain about whether, at the margin, we want the population to increase or decrease. We also don’t know the political effects of saving lives through a charity instead of local health institutions. Hence, we are so deeply uncertain about the effects of donating to the AMF that we can’t create a precise probability distribution.[1]

The maximality rule

One way to adapt to such uncertainty is to assess decisions with a set of plausible probability functions instead of just one. Unfortunately, doing so is incompatible with simple expected value maximization, leaving us to find a new decision criterion.

Mogensen proposes a decision criterion, the maximality rule, that he finds plausible enough to prevent us from ruling it out. 

The maximality rule (definition)

The maximality rule: We’re allowed to choose an act if no other act has greater expected value according to every probability function in our set of plausible probability functions.

When we have two acts, and one has greater expected value according to some of the probability functions in our set but not all of them, the maximality rule doesn’t prefer either act, but neither does it treat them as exactly equal. This is a key attraction of the maximality rule: It “does not contrive a preference between incommensurable options where there is none.”[2]
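
To make the rule concrete, here is a minimal sketch in Python. The acts, payoffs, and two-member credal set are hypothetical illustrations, not from the paper:

```python
# A minimal sketch of the maximality rule over a set of plausible
# probability functions (a "credal set"). All numbers are made up.

def expected_value(payoffs, prob):
    """Expected value of an act: state payoffs weighted by probabilities."""
    return sum(p * v for p, v in zip(prob, payoffs))

def maximality_permissible(acts, credal_set):
    """An act is permissible unless some rival act has greater expected
    value under EVERY probability function in the credal set."""
    permitted = []
    for name, payoffs in acts.items():
        dominated = any(
            all(expected_value(rival, prob) > expected_value(payoffs, prob)
                for prob in credal_set)
            for rival_name, rival in acts.items() if rival_name != name
        )
        if not dominated:
            permitted.append(name)
    return permitted

# Two states of the world; payoffs are hypothetical utilities.
acts = {"donate_AMF": [10, -4], "donate_MakeAWish": [2, 2]}
# Two plausible probability functions over the states (illustrative).
credal_set = [(0.9, 0.1), (0.2, 0.8)]

print(maximality_permissible(acts, credal_set))
# ['donate_AMF', 'donate_MakeAWish'] -- AMF has higher expected value under
# the first function (8.6 vs 2) but lower under the second (-1.2 vs 2), so
# neither act dominates and both are permitted.
```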

Unfortunately, it also has its faults. First, because it considers every probability function, it can permit acts that don’t have the highest expected value according to any individual probability function in our set.

Mogensen discusses a similar alternative to the maximality rule that doesn’t have this problem, called the liberal rule, but its problems lead him to prefer the maximality rule.[3]

Mogensen also notes a fault in the maximality rule in the following case:[4]

  • A proposition p can be true or false. Your set of probability functions assigns every probability from 10% to 80% to p being true.
  • You're first offered bet A, which you can choose to take or leave. It pays $15 if p is false, but costs $10 if p is true.
  • Then you’re offered bet B, which you can also choose to take or leave. It pays $15 if p is true, but costs $10 if p is false.
  • Accepting both bets guarantees a $5 gain, regardless of whether p is true or false.

The maximality rule permits us to decline both bets, even though we are forgoing a sure $5.
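
Here is a minimal sketch of the arithmetic, checking three representative probability functions from the set (the sampled values of P(p) are illustrative; the bets are as described above):

```python
# Representative members of the set of probability functions for P(p).
# Expected values are linear in q, so the endpoints give the extremes.
credal_set = [0.10, 0.45, 0.80]

def ev_bet_A(q):  # pays $15 if p is false, costs $10 if p is true
    return 15 * (1 - q) - 10 * q

def ev_bet_B(q):  # pays $15 if p is true, costs $10 if p is false
    return 15 * q - 10 * (1 - q)

# Considered one at a time, declining each bet (expected value 0) is
# permissible under maximality, because neither bet beats declining
# under EVERY probability function:
print([ev_bet_A(q) for q in credal_set])  # [12.5, 3.75, -5.0]: negative at q = 0.8
print([ev_bet_B(q) for q in credal_set])  # [-7.5, 1.25, 10.0]: negative at q = 0.1
# Yet taking both bets pays 15 - 10 = $5 whether p is true or false:
print([ev_bet_A(q) + ev_bet_B(q) for q in credal_set])  # [5.0, 5.0, 5.0]
```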

Another similar alternative to the maximality rule, the Γ-maximin rule, doesn’t have this problem, but, as before, its drawbacks lead Mogensen to prefer the maximality rule.[5]
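
For contrast, here is a sketch (reusing the illustrative setup above, made self-contained) of how Γ-maximin, evaluated over the four combined strategies in Elga’s case, picks out the sure gain:

```python
# Γ-maximin (see footnote 5): act as if each option's WORST expected
# value across the credal set were correct, and choose the option whose
# worst case is best. Numbers follow the betting case above.
credal_set = [0.10, 0.45, 0.80]  # expected values are linear in q, so the
                                 # endpoints capture the true worst cases

def ev_A(q): return 15 * (1 - q) - 10 * q   # bet A alone
def ev_B(q): return 15 * q - 10 * (1 - q)   # bet B alone

strategies = {
    "decline both": lambda q: 0.0,
    "take A only": ev_A,
    "take B only": ev_B,
    "take both": lambda q: ev_A(q) + ev_B(q),
}
worst = {name: min(f(q) for q in credal_set) for name, f in strategies.items()}
print(worst)  # {'decline both': 0.0, 'take A only': -5.0,
              #  'take B only': -7.5, 'take both': 5.0}
print(max(worst, key=worst.get))  # 'take both': Γ-maximin locks in the sure $5
```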

So, although some decision criteria are better than the maximality rule in some cases, Mogensen doesn’t think any of them are better overall; we can’t rule out the maximality rule and ought to avoid conclusions that aren’t consistent with it.

Implications

Donating to the AMF has numerous potential indirect effects, such as changing population size. Some of these affect the long-term future, including slightly changing the probability of existential catastrophe.[1] Because minute changes to the far future carry enormous significance, they dominate the differences in expected value between our options. The problem: we’re so uncertain about these long-term consequences that we can’t specify a precise probability function with which to define expected values, giving us a reason to consult the maximality rule. Under such uncertainty, Mogensen argues that, in a decision between only the AMF and Make-A-Wish, the maximality rule permits us to donate to either.[6]
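
To see why minute far-future changes can dominate, consider a back-of-the-envelope sketch; every number below is hypothetical, chosen only to illustrate the orders of magnitude:

```python
# Hypothetical magnitudes, not estimates from the paper.
future_lives = 1e16   # assumed number of future lives at stake
delta = 1e-12         # assumed shift in extinction probability from a donation
direct_effect = 1     # lives saved directly, in expectation

print(delta * future_lives)  # 10000.0 expected future lives: four orders of
                             # magnitude larger than the direct effect, even
                             # though the probability shift is minuscule
```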

While this conclusion doesn’t necessarily hold in reality, as we can donate to many organizations (not just these two), Mogensen thinks it applies to many highly ranked EA global well-being charities, as we don’t know the sign or magnitude of their long-term effects (including their effect on the probability of extinction).

As he puts it:

If our evidence cannot rule out that the chance of extinction is ever so slightly higher given a choice to donate to one of GiveWell’s top charities as opposed to an organization like Make-A-Wish Foundation, [people abiding by the maximality rule] arguably need not prefer the former.

Unfortunately, interventions aiming to improve the long-term future or reduce existential risks also lack sufficient evidence of their effects, forcing us to rely on intuitive conjectures with a demonstrably poor track record.[7]

Mogensen feels we need more research before we’ll know what the maximality rule would suggest about long-term focused interventions, but he thinks we should be skeptical that it would say we must choose them.

Conclusion

He concludes:

I do not insist that the maximality rule is correct. I merely claim that it is sufficiently plausible that we cannot rule it out. For all we know, orthodox effective altruist conclusions about cause prioritization are all true. In fact, I am inclined to believe they are. The problem is that I do not know how to set out and argue for a decision theory that is consistent with a long-termist perspective and supports these conclusions without downplaying the depth of our uncertainty. Then again, as a philosopher, I know that I am inclined to believe a great many things for which I lack an adequate response to certain apparently compelling sceptical challenges. Some may share my conviction that this is just one of those cases. But those who are already sceptical of effective altruist conclusions undoubtedly will not.

Brief summary

Mogensen argues:

  1. When it comes to our actions’ broad effects, we are clueless—we’re so uncertain we can’t assign precise probabilities.
  2. When we’re clueless, we can’t rule out using the maximality rule and shouldn’t draw conclusions contrary to it.
  3. Using the maximality rule, we don’t have to donate to the AMF over Make-A-Wish, which probably applies to most other global well-being interventions.
  4. We need more research to know what the maximality rule would suggest about long-term-focused interventions, but we should be skeptical that it would say we must choose them.
  1. ^

    See pages 13 to 16 for a detailed and concrete discussion (especially if you are skeptical of this notion).

  2. ^
  3. ^

    See pages 10 and 11 for the definition and drawbacks of the liberal rule.

  4. ^

    Elga (2010) identified this problem.

  5. ^

    The Γ-maximin rule requires us to act as if the worst possible expected utility of each act is correct (see page 12 for the proper definition of the Γ-maximin rule). Mogensen thinks this is extremely restrictive.
    The Γ-maximin rule also violates a plausible principle called Restricted Conglomerability (see pages 12 and 13 for discussion).

  6. ^

    Mogensen states he cannot prove this conclusion, though, because of numerous barriers. See the bottom of page 16 and all of page 17 for his argument.

  7. ^

    See Hurford (2013).

Comments

This is a fun paper. But it rests a lot on an unsupported intuition about what's required in order to "take the depth of our uncertainty seriously" (i.e., that this requires imprecise credences with a very wide range of imprecision).  Since this intuition leads to the (surely false) conclusion that a rational beneficent agent might just as well support the For Malaria Foundation as the Against Malaria Foundation, it seems to me that we have very good reason to reject that theoretical intuition.

I'm a bit surprised that this is getting downvoted, rather than just disagree-voted. It's fine to reach a different verdict and all, but y'all really think the methodological point I'm making here shouldn't even be said?  Weird.

Hi Richard,

Since this intuition leads to the (surely false) conclusion that a rational beneficent agent might just as well support the For Malaria Foundation as the Against Malaria Foundation, it seems to me that we have very good reason to reject that theoretical intuition.

Is this a fair comparison? For readers' context, Andreas compares the Against Malaria Foundation (AMF) with Make-A-Wish Foundation:

In comparing Make-A-Wish Foundation unfavourably to Against Malaria Foundation, Singer (2015) observes that “saving a life is better than making a wish come true” (6). Arguably, there is a qualifier missing from this statement: ‘all else being equal.’ Saving a child's life need not be better than fulfilling a child's wish if the indirect effects of saving the child's life are worse than those of fulfilling the wish. We have already touched on some of the potential negative indirect effects associated with the mass distribution of insecticide-treated anti-malarial bed-nets in section 2.2, but they are worth revisiting in order to make clear the depth of our uncertainty.

Firstly, there are potential effects on population. When people survive childhood in greater numbers, it is natural to expect the population to grow. The explosion in global population observed since the 17th century is arguably attributable principally to declining mortality (McKeown 1976). However, we must also account for the impact of reduced childhood mortality on family planning. When childhood mortality declines, parents in developing countries need not have as many children in order to ensure that they can be supported in old age. As a result, averting child deaths may cause the rate of population growth to decline (Heer and Smith 1968). It is the position of the Gates Foundation that averting child deaths at the current margin will reduce population size (Gates and Gates 2014). Many studies confirm that the effect of reduced childhood mortality on population size is offset by reduced fertility (Schultz 1997; Conley, McCord, and Sachs 2007; Lorentzen, McMillan, and Wacziarg 2008; Murtin 2013). Others find that the reduction in births is less than one-to-one with respect to averted child deaths (Bhalotra and van Soest 2008; Herzer, Strulik, and Vollmer 2012; Bhalotra, Hollywood, and Venkataramani 2012). Unfortunately, the studies just noted are of different kinds (cross-country comparisons, panel studies, quasi-experiments, large-sample micro-studies), with different strengths and weaknesses, making it difficult to draw firm conclusions.

I agree increasing malaria is surely worse than decreasing malaria, but I would not say Make-A-Wish Foundation is surely worse than AMF. Given this distinction, I (lightly) downvoted your comment.

Thanks for explaining!

It is a fair comparison. Andreas' relevant claim is that it isn't clear what the sign of the effect from AMF is. If AMF is negative, then its opposite--FMF--would presumably be positive.

Thanks for following up!

If AMF is negative, then its opposite--FMF--would presumably be positive.

I am not sure about this. I think Andreas' claim is that AMF may be negative due to indirect effects. So, conditional on AMF being negative, one should expect the indirect effects to dominate the direct ones. This means a good candidate for "Minus AMF", an organisation whose value is symmetric to that of AMF, would have both direct and indirect effects symmetric to those of AMF.

The name For Malaria Foundation (FMF) suggested to me an organisation whose interventions have direct effects with similar magnitude, but opposite sign of those of AMF. However, the negative indirect effects of intentionally increasing malaria deaths seem worse than the negative of the positive indirect effects of decreasing malaria deaths[1]. So, AMF being negative would imply FMF having positive direct effects, but in this case I would expect FMF's indirect effects to be sufficiently negative for it to be overall net negative.

  1. ^

    I am a utilitarian, but recognise that saving a life and abstaining from saving a life can have different indirect consequences.

If you're worried that a real-life FMF would not be truly symmetrical to AMF in its effects, just mentally replace it with "Minus AMF" in my original comment. (Or imagine stipulating away any such differences.)  It doesn't affect the essential point.

I didn't downvote, but if I had, it would be because I don't think it's surely false that "that a rational beneficent agent might just as well support the For Malaria Foundation as the Against Malaria Foundation", and that claim seems overconfident. (Or, rather, AMF could be no better than burning money or the Make a Wish Foundation, even if all are better than FMF, in case there is asymmetry between AMF and FMF.)

I specifically worry that AMF could be bad if and because it hurts farmed animals more than it helps people, considering also that descendants of beneficiaries will likely consume more factory-farmed animal products, as animal product consumption and the intensification of animal farming tend to increase with economic development. Wild animal (invertebrate) effects could again go either way. If you're an expectational total utilitarian or otherwise very risk-neutral wrt aggregate welfare, then you may as well ignore the near-term benefits and harms and focus on the indirect effects on the far future, e.g. through how it affects the EA community and x-risks. (Probably FMF would have very bad community effects, worse than AMF's are good relative to more direct near-term effects, unless FMF quietly acts to convince people to stop donating to AMF.)

And I say this as a recurring small donor to malaria charities including AMF. I think AMF can still be a worthwhile part of a portfolio of interventions, even if it turns out to not look robustly good on its own (it could be that few things do). See my post Hedging against deep and moral uncertainty for illustration.

Thanks, Richard! In some sense, I think I agree; as I say in the conclusion, I'm most inclined to think this is one of those cases where we've got a philosophical argument we don't immediately know how to refute for a conclusion that we should nonetheless reject, and so we ought to infer that one of the premises must be false. 

On the other hand, I think I'm most inclined to say that the problem lies in the fact that standard models using imprecise credences and their associated decision rules have or exploit too little structure in terms of how they model our epistemic predicament, while thinking that it is nonetheless the case that our evidence fails to rule out probability functions that put sufficient probability mass on potential bad downstream effects and thereby make AMF come out worse in terms of maximizing expected value relative to that kind of probability function. I'm more inclined to identify the problem as being that the maximality rule gives probability functions of that kind too much of a say when it comes to determining permissibility. Other standard decision rules for imprecise credences arguably suffer from similar issues. David Thorstad and I look a bit more in depth at decision rules that draw inspiration from voting theory and rely on some kind of measure on the set of admissible probability functions in our paper 'Tough enough? Robust satisficing as a decision norm for long-term policy analysis' but we weren't especially sold on them.

Thanks, yeah, I remember liking that paper. Though I'm inclined to think you should assign (precise) higher-order probabilities to the various "admissible probability functions", from which you can derive a kind of higher-order expected value verdict, which helpfully seems to avoid the problems afaict?

General lesson: if we don't have any good way of dealing with imprecise credences, we probably shouldn't regard them as rationally mandatory. Especially since the case for thinking that we must have imprecise credences (i.e., that any kind of precision is necessarily irrational) seems kind of weak.

General lesson: if we don't have any good way of dealing with imprecise credences, we probably shouldn't regard them as rationally mandatory.

I worry that this is motivated reasoning. Should what we can justifiably believe will happen as a consequence of our actions depend on whether it results in satisfactory moral consequences (e.g. avoiding paralysis)?

I'm more inclined to identify the problem as being that the maximality rule gives probability functions of that kind too much of a say when it comes to determining permissibility.

Another response could be to just look for more structure in our credences we've failed to capture. Say we have a bunch of probability functions according to which AMF is bad and a bunch according to which AMF is good, but we nonetheless think AMF is good. Why would we think AMF is good anyway? If we're epistemically rational, it would presumably be because we doubt the functions according to which it is bad more than we do the ones according to which it is good. So, we've actually failed to adequately capture our credences and their structure with these probability functions as they are.

One way to represent this is to have another probability function to mix all of those probability functions ("(precise) higher-order probabilities to the various "admissible probability functions"), reducing to precise credences, in such a way that AMF turns out to look good, like @Richard Y Chappell suggests in reply here. Another, still permitting imprecise credences, is to have multiple such mixing functions of probability functions, but such that AMF still looks good on each mixing function. If you're sympathetic to imprecise credences in the first place (like I am), the latter seems like a pretty good solution.
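
Here's a minimal sketch of these two options (all numbers are hypothetical, and each first-order probability function is summarized just by the expected value it assigns to AMF):

```python
# Expected value of donating to AMF under each admissible first-order
# probability function (made-up numbers; one function says AMF is bad).
first_order_evs = [-3.0, 1.0, 4.0]

def mixture_ev(weights, evs):
    """Mix the first-order functions with higher-order weights and
    reduce to a single expected value."""
    return sum(w * v for w, v in zip(weights, evs))

# Option 1: a single precise higher-order function, as suggested above.
print(mixture_ev([0.1, 0.3, 0.6], first_order_evs))  # 2.4 > 0: AMF looks good

# Option 2: keep imprecision at the higher order, but note that every
# plausible mixing function agrees on the sign.
mixing_functions = [[0.1, 0.3, 0.6], [0.2, 0.4, 0.4], [0.15, 0.25, 0.6]]
print([mixture_ev(w, first_order_evs) for w in mixing_functions])
# [2.4, 1.4, 2.2]: AMF comes out positive under each mixing function.
```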

Of course, an alternative explanation could be that we aren't actually justified in thinking AMF is good. We should be careful in how we pick these higher-order probabilities to avoid motivated reasoning, and remain open to the possibility that AMF is not actually robustly good.

Thanks for the summary, Nicholas. For reference, the paper was discussed on EA Forum here.

One move we can make to reduce paralysis with the maximality rule is to consider whole portfolios/sequences of actions, rather than acts in isolation. We can make up for the potential downsides of some acts with the upsides of others. See my post Hedging against deep and moral uncertainty.

Hi Andreas! I'm worried that the maximality rule will overgeneralize, implying that little is rationally required of us. Consider the decision whether to have children. There are obvious arguments both for and against from a self-interested point of view, and it isn't clear exactly how to weigh them against each other. So, plausibly, having children will max EU according to at least one probability function in our representor, whereas not having children will max EU according to at least one other probability function in our representor. Result via maximality rule: either choice is rationally permissible. Or consider some interesting public policy problem from the perspective of a benevolent social planner. Given the murkiness of social science research, it seems likely that, if we've gone in for the imprecise credence picture, no one policy will maximize EU relative to every credence function in the representor, in which case, many policy choices will be rationally permissible. I wonder if you have thoughts on this?
