*__Maximal cluelessness__ is a Global Priorities Institute Working Paper by Andreas Mogensen. This post is part of my sequence of GPI Working Paper summaries.*

*If you’d like a very brief summary, skip to “Brief summary.”*

# Introduction

Suppose you may choose to donate to the Against Malaria Foundation (AMF), which is __estimated__ to save a child’s life for every ~$5,000 received, or donate to Make-A-Wish, which grants an ill child’s wish for ~$7,500 on average. If you care about maximizing the impartial good brought by your donation, the decision might seem obvious. But Mogensen argues it isn’t, and, more broadly, many EA priority rankings seem to be in tension with reasonable assumptions you might make about rational decision-making under uncertainty.

# Cluelessness

If we choose actions based on their consequences, we have to consider *all* possible consequences—some of which stretch into the far future. As long as we don’t discount future value, future consequences dominate the differences in expected value between choices, meaning they determine our choices. The problem: we are clueless about our actions’ long-term consequences.

Mogensen anticipates a *Naive Response* to this: Cluelessness doesn’t impede our rational decision-making because we can still maximize expected value—we simply account for uncertainty by assigning probabilities.

He calls this “the Naive Response” because it doesn’t take the depth of our uncertainty seriously—our evidence often doesn’t even allow us to assign precise probabilities.

For example, consider the AMF. Our evidence suggests saving children’s lives may indirectly affect population size, but we are uncertain about how. Worse, we are uncertain about whether, at the margin, we want the population to increase or decrease. We also don’t know the political effects of saving lives through a charity instead of local health institutions. Hence, we are so deeply uncertain about the effects of donating to the AMF that we can’t create a precise probability distribution.^{[1]}

# The maximality rule

One way to adapt to such uncertainty is to assess decisions with a set of plausible probability functions instead of just one. Unfortunately, doing so is incompatible with simple expected value maximization, leaving us to find a new decision criterion.

Mogensen proposes a decision criterion, *the maximality rule*, that he finds plausible enough to prevent us from ruling it out.

### The maximality rule (definition)

**The maximality rule:** We’re allowed to choose an act if no other act has greater expected value according to *every* probability function in our set of plausible probability functions.

When we have two acts, and one has greater expected value in *some* of our set’s probability functions but not all of them, the maximality rule doesn’t prefer that act over the other, but it also doesn’t think they are exactly equal. This is a key attraction of the maximality rule: It “does not contrive a preference between incommensurable options where there is none.”^{[2]}
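The rule as defined above can be made concrete with a short sketch. Everything here is my own illustrative encoding — the states, payoffs, and probability range are invented, not taken from the paper:

```python
# A minimal sketch of the maximality rule (my own illustrative encoding,
# not the paper's formalism). Each act maps states to utilities; each
# probability function in our set maps states to probabilities.

def expected_value(act, prob):
    """Expected utility of an act under one probability function."""
    return sum(prob[state] * utility for state, utility in act.items())

def permitted(acts, prob_set):
    """Acts the maximality rule allows: an act is ruled out only if some
    rival has strictly greater expected value under *every* function."""
    allowed = []
    for name, act in acts.items():
        dominated = any(
            all(expected_value(rival, p) > expected_value(act, p)
                for p in prob_set)
            for rival_name, rival in acts.items() if rival_name != name
        )
        if not dominated:
            allowed.append(name)
    return allowed

# Two states; suppose our evidence only pins the probability of "high"
# down to somewhere between 0.2 and 0.7 (three representative functions).
prob_set = [{"high": q, "low": 1 - q} for q in (0.2, 0.45, 0.7)]
acts = {
    "A": {"high": 10, "low": -5},    # better when "high" is likely
    "B": {"high": -5, "low": 10},    # better when "low" is likely
    "C": {"high": -20, "low": -20},  # worse than A under every function
}
print(permitted(acts, prob_set))  # → ['A', 'B']
```

Neither A nor B beats the other under every probability function, so both are permitted without being ranked equal; C is ruled out because A dominates it everywhere.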

Unfortunately, it also has its faults. First, because it considers every probability function, it can permit acts that don't have the highest expected value in any of our set's *individual* probability functions.

Mogensen discusses a similar alternative to the maximality rule that doesn’t have this problem, called *the liberal rule*, but its problems lead him to prefer the maximality rule.^{[3]}

Mogensen also notes a fault in the maximality rule in the following case:^{[4]}

- *p* can be true or false. Your set of probability functions gives all probabilities from 10% to 80% to *p* being true.
- You're first offered bet *A*, which you can choose to take or leave. It pays $15 if *p* is false, but costs $10 if *p* is true.
- Then you’re offered bet *B*, which you can also choose to take or leave. It pays $15 if *p* is true, but costs $10 if *p* is false.
- Accepting both bets guarantees a $5 gain, regardless of whether *p* is true or false.

The maximality rule permits us to decline both bets, even though we are foregoing $5 for sure.
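A quick arithmetic check of the betting case above (the payoffs are the ones from the summary; `q` is the probability that *p* is true):

```python
# Expected values of Elga's bets at representative points of the
# admissible 10%–80% range for q = P(p is true).
qs = [0.10, 0.30, 0.50, 0.80]

for q in qs:
    ev_a = -10 * q + 15 * (1 - q)  # bet A: -$10 if p, +$15 if not-p
    ev_b = 15 * q - 10 * (1 - q)   # bet B: +$15 if p, -$10 if not-p
    print(f"q={q:.2f}  EV(A)={ev_a:+6.2f}  EV(B)={ev_b:+6.2f}  "
          f"EV(A and B)={ev_a + ev_b:+.2f}")
```

At the low end of the range, A looks good and B bad; at the high end, the reverse. So neither taking nor declining dominates for either bet considered alone, and the rule permits declining each in turn — despite EV(A and B) being +$5 under every admissible probability.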

Another similar alternative to the maximality rule, *the Γ-maximin rule*, doesn’t have this problem, but, as before, its drawbacks lead Mogensen to prefer the maximality rule.^{[5]}
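For contrast, the Γ-maximin rule — glossed in footnote 5 as acting as if the worst possible expected utility of each act is correct — can be sketched on the same betting case. The encoding of "plans" as take/leave pairs is my own:

```python
# Sketch of Γ-maximin on Elga's betting case: score each plan by its
# worst expected value across the admissible probabilities, then pick
# the plan whose worst case is best. (Illustrative, not the paper's.)

qs = [i / 100 for i in range(10, 81)]  # probability of p: 10%–80%

def ev(plan, q):
    take_a, take_b = plan
    total = 0.0
    if take_a:
        total += -10 * q + 15 * (1 - q)  # bet A's payoff profile
    if take_b:
        total += 15 * q - 10 * (1 - q)   # bet B's payoff profile
    return total

plans = {
    "decline both": (False, False),
    "A only": (True, False),
    "B only": (False, True),
    "take both": (True, True),
}

worst_case = {name: min(ev(plan, q) for q in qs) for name, plan in plans.items()}
print(max(worst_case, key=worst_case.get))  # → take both
```

"Take both" has a worst case of +$5, beating "decline both" at $0, so Γ-maximin locks in the sure gain that maximality merely permits us to forgo.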

So, although some decision criteria are better than the maximality rule in some cases, Mogensen doesn’t think any of them are better overall; we can’t rule out the maximality rule and ought to avoid conclusions that aren’t consistent with it.

# Implications

Donating to the AMF has numerous potential indirect effects, such as changes in population size. Some of these affect the long-term future, including slightly changing the probability of existential risks.^{[1]} Because minute changes to the far future have great significance, they dominate the differences in expected value between our options. The problem: we’re so uncertain about these long-term consequences that we can’t create a precise probability function to define expected values, giving us a reason to consult the maximality rule. Under such uncertainty, Mogensen argues that, in a decision between only the AMF and Make-A-Wish, the maximality rule permits us to donate to either.^{[6]}

While this conclusion doesn’t necessarily hold in reality, as we can donate to many organizations (not just these two), Mogensen thinks it applies to many highly ranked EA global well-being charities, as we don’t know the sign or magnitude of their long-term effects (including effects on the probability of extinction).

As he puts it:

If our evidence cannot rule out that the chance of extinction is ever so slightly higher given a choice to donate to one of GiveWell’s top charities as opposed to an organization like Make-A-Wish Foundation, [people abiding by the maximality rule] arguably need not prefer the former.

Unfortunately, interventions aiming to improve the long-term future or reduce existential risks also lack sufficient evidence of their effects, forcing us to rely on intuitive conjectures with a demonstrably poor track record.^{[7]}

Mogensen feels we need more research before we’ll know what the maximality rule would suggest about long-term focused interventions, but he thinks we should be skeptical that it would say we must choose them.

# Conclusion

He concludes:

I do not insist that the maximality rule is correct. I merely claim that it is sufficiently plausible that we cannot rule it out. For all we know, orthodox effective altruist conclusions about cause prioritization are all true. In fact, I am inclined to believe they are. The problem is that I do not know how to set out and argue for a decision theory that is consistent with a long-termist perspective and supports these conclusions without downplaying the depth of our uncertainty. Then again, as a philosopher, I know that I am inclined to believe a great many things for which I lack an adequate response to certain apparently compelling sceptical challenges. Some may share my conviction that this is just one of those cases. But those who are already sceptical of effective altruist conclusions undoubtedly will not.

# Brief summary

Mogensen argues:

- When it comes to our actions’ broad effects, we are clueless—we’re so uncertain we can’t assign precise probabilities.
- When we’re clueless, we can’t rule out using the maximality rule and shouldn’t draw conclusions contrary to it.
- Using the maximality rule, we don’t have to donate to the AMF over Make-A-Wish, which probably applies to most other global well-being interventions.
- We need more research to know what the maximality rule would suggest about long-term-focused interventions, but we should be skeptical that it would say we must choose them.

^{^}See pages 13 to 16 for a detailed and concrete discussion (especially if you are skeptical of this notion).

^{^}Quote from Bradley and Steele (2015).

^{^}See pages 10 and 11 for the definition and drawbacks of the liberal rule.

^{^}Elga (2010) identified this problem.

^{^}The *Γ-maximin rule* requires us to act as if the worst possible expected utility of each act is correct (see page 12 for the proper definition of the *Γ-maximin rule*). Mogensen thinks this is extremely restrictive. The *Γ-maximin rule* also violates a plausible principle called Restricted Conglomerability (see pages 12 and 13 for discussion).

^{^}Mogensen states he cannot prove this conclusion, though, because of numerous barriers. See the bottom of page 16 and all of page 17 for his argument.

^{^}See Hurford (2013).

This is a fun paper. But it rests a *lot* on an unsupported intuition about what's required in order to "take the depth of our uncertainty seriously" (i.e., that this requires imprecise credences with a very wide range of imprecision). Since this intuition leads to the (surely false) conclusion that a rational beneficent agent might just as well support the *For Malaria Foundation* as the *Against Malaria Foundation*, it seems to me that we have very good reason to reject that theoretical intuition.

I'm a bit surprised that this is getting downvoted, rather than just disagree-voted. It's fine to reach a different verdict and all, but y'all really think the methodological point I'm making here shouldn't even be said? Weird.

I didn't downvote, but if I had, it would be because I don't think it's *surely false* "that a rational beneficent agent might just as well support the *For Malaria Foundation* as the *Against Malaria Foundation*", and that claim seems overconfident. (Or, rather, AMF could be no better than burning money or the Make-A-Wish Foundation, even if all are better than FMF, in case there is asymmetry between AMF and FMF.)

I specifically worry that AMF could be bad if and because it hurts farmed animals more than it helps people, considering also that descendants of beneficiaries will likely consume more factory-farmed animal products, with increasing animal product consumption and intensification with economic development. Wild animal (invertebrate) effects could again go either way. If you're an expectational total utilitarian or otherwise very risk-neutral wrt aggregate welfare, then you may as well ignore the near-term benefits and harms and focus on the indirect effects on the far future, e.g. through how it affects the EA community and x-risks. (Probably FMF would have very bad community effects, worse than AMF's are good relative to more direct near-term effects, unless FMF *quietly* acts to convince people to stop donating to AMF.)

And I say this as a recurring small donor to malaria charities including AMF. I think AMF can still be a worthwhile part of a portfolio of interventions, even if it turns out to not look robustly good on its own (it could be that few things do). See my post Hedging against deep and moral uncertainty for illustration.

Hi Richard,

Is this a fair comparison? For readers' context, Andreas compares the Against Malaria Foundation (AMF) with Make-A-Wish Foundation.

I agree increasing malaria is surely worse than decreasing malaria, but I would not say Make-A-Wish Foundation is surely worse than AMF. Given this distinction, I (lightly) downvoted your comment.

Thanks for explaining!

It is a fair comparison. Andreas' relevant claim is that it isn't clear what the sign of the effect from AMF is. If AMF is negative, then its opposite--FMF--would presumably be positive.

Thanks for following up!

I am not sure about this. I think Andreas' claim is that AMF may be negative due to indirect effects. So, conditional on AMF being negative, one should expect the indirect effects would dominate the direct ones. This means a good candidate for "Minus AMF", an organisation whose value is symmetric to that of AMF, would have both direct and indirect effects symmetric to those of AMF.

The name For Malaria Foundation (FMF) suggested to me an organisation whose interventions have direct effects with similar magnitude, but opposite sign, to those of AMF. However, the negative indirect effects of intentionally increasing malaria deaths seem worse than the negative of the positive indirect effects of decreasing malaria deaths.^{[1]} So, AMF being negative would imply FMF having positive direct effects, but in this case I would expect FMF's indirect effects to be sufficiently negative for it to be overall net negative.

^{^}I am utilitarian, but recognise that saving a life and abstaining from saving a life can have different indirect consequences.

If you're worried that a real-life FMF would not be truly symmetrical to AMF in its effects, just mentally replace it with "Minus AMF" in my original comment. (Or imagine stipulating away any such differences.) It doesn't affect the essential point.

Thanks, Richard! In some sense, I think I agree; as I say in the conclusion, I'm most inclined to think this is one of those cases where we've got a philosophical argument we don't immediately know how to refute for a conclusion that we should nonetheless reject, and so we ought to infer that one of the premises must be false.

On the other hand, I think I'm most inclined to say that the problem lies in the fact that standard models using imprecise credences and their associated decision rules have or exploit too little structure in terms of how they model our epistemic predicament, while thinking that it is nonetheless the case that our evidence fails to rule out probability functions that put sufficient probability mass on potential bad downstream effects and thereby make AMF come out worse in terms of maximizing expected value relative to that kind of probability function. I'm more inclined to identify the problem as being that the maximality rule gives probability functions of that kind too much of a say when it comes to determining permissibility. Other standard decision rules for imprecise credences arguably suffer from similar issues. David Thorstad and I look a bit more in depth at decision rules that draw inspiration from voting theory and rely on some kind of measure on the set of admissible probability functions in our paper 'Tough enough? Robust satisficing as a decision norm for long-term policy analysis' but we weren't especially sold on them.

Thanks, yeah, I remember liking that paper. Though I'm inclined to think you should assign (precise) higher-order probabilities to the various "admissible probability functions", from which you can derive a kind of higher-order expected value verdict, which helpfully seems to avoid the problems afaict?

General lesson: if we don't have any good way of dealing with imprecise credences, we probably shouldn't regard them as rationally mandatory. Especially since the case for thinking that we *must* have imprecise credences (i.e., that any kind of precision is *necessarily* irrational) seems kind of weak.

I worry that this is motivated reasoning. Should what we can justifiably believe will happen as a consequence of our actions depend on whether it results in satisfactory moral consequences (e.g. avoiding paralysis)?

Another response could be to just look for more structure in our credences we've failed to capture. Say we have a bunch of probability functions according to which AMF is bad and a bunch according to which AMF is good, but we nonetheless think AMF is good. Why would we think AMF is good anyway? If we're epistemically rational, it would presumably be because we doubt the functions according to which it is bad more than we do the ones according to which it is good. So, we've actually failed to adequately capture our credences and their structure with these probability functions as they are.

One way to represent this is to have another probability function to mix all of those probability functions ("(precise) higher-order probabilities to the various 'admissible probability functions'"), reducing to precise credences, in such a way that AMF turns out to look good, like @Richard Y Chappell suggests in reply here. Another, still permitting imprecise credences, is to have *multiple* such mixing functions of probability functions, but such that AMF still looks good on *each* mixing function. If you're sympathetic to imprecise credences in the first place (like I am), the latter seems like a pretty good solution.

Of course, an alternative explanation could be that we aren't actually justified in thinking AMF is good. We should be careful in how we pick these higher-order probabilities to avoid motivated reasoning. In picking these higher-order probabilities, we should remain open to the possibility that AMF is not actually robustly good.
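The first ("mixing") move can be sketched concretely. The weights and credences below are invented purely for illustration:

```python
# Sketch of precise higher-order probabilities: weight each admissible
# probability function by how much we trust it, then collapse the set
# into a single credence function. All numbers are invented.

def mix(prob_fns, weights):
    """Mix probability functions using precise higher-order weights
    (the weights must sum to 1)."""
    assert abs(sum(weights) - 1.0) < 1e-9
    states = prob_fns[0].keys()
    return {s: sum(w * p[s] for w, p in zip(weights, prob_fns)) for s in states}

# Two rival views about whether a donation is net good for the far future:
optimist = {"good": 0.9, "bad": 0.1}
pessimist = {"good": 0.3, "bad": 0.7}

# We doubt the pessimistic function more, so it gets less weight.
mixed = mix([optimist, pessimist], [0.8, 0.2])
print({s: round(v, 4) for s, v in mixed.items()})  # → {'good': 0.78, 'bad': 0.22}
```

The second move — multiple mixing functions — would repeat this with several weight vectors and check that the verdict agrees under each.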

This leaves me deeply confused, because I would have thought a single (if complicated) probability function is better than a set of functions, since a set of functions doesn't (by default) include a weighting amongst its members.

It seems to me that you need to weight the probability functions in your set according to some intuitive measure of their plausibility, according to your own priors.

If you do that, then you can combine them into a joint probability distribution, and then make a decision based on what that distribution says about the outcomes. You could go for EV based on that distribution, or you could make other choices that are more risk averse. But whatever you do, you're back to using a single probability function. I think that's probably what you should do. But that sounds to me indistinguishable from the naive response.

The idea of a "precise probability function" is in general flawed. The whole point of a probability function is you don't have precision. A probability function of a real event is (in my view) just a mathematical formulation modeling my own subjective uncertainty. There is no precision to it. That's the Bayesian perspective on probability, which seems like the right interpretation of probability, in this context.

The concern motivating the use of imprecise probabilities is that you don't always have a unique prior you're justified in using to compare the plausibility of these distributions. In some cases you'll find that any choice of unique prior, or unique higher-order distribution for aggregating priors, involves an arbitrary choice. (E.g., arbitrary weights assigned to conflicting intuitions about plausibility.)

You can just widen the variance in your prior until it is appropriately imprecise, so that the variance on your prior reflects the amount of uncertainty you have.

For instance, perhaps a particular disagreement comes down to the increase in p(doom) deriving from an extra 0.1 C in global warming.

We might have no idea whether 0.1 C of warming causes an increase of 0.1% or 0.01% in p(doom), but be confident it isn't 10% or more.

You could model the distribution of your uncertainty with, say, a beta distribution of Beta(a=0.0001,b=100).
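As a rough sanity check on what Beta(a=0.0001, b=100) encodes, here is a standard-library sketch (the parameters are just the illustrative ones from this comment):

```python
import random

# Sanity-check the Beta(0.0001, 100) prior on the p(doom) increase.
# Parameters are the illustrative ones from this comment.
random.seed(0)
a, b = 0.0001, 100

mean = a / (a + b)  # closed-form mean of a Beta distribution

# Monte Carlo estimate of the chance the increase is 10% or more,
# using the standard library's Beta sampler.
n = 50_000
tail = sum(random.betavariate(a, b) >= 0.10 for _ in range(n)) / n

print(f"mean increase ~ {mean:.2e}")      # → mean increase ~ 1.00e-06
print(f"P(increase >= 10%) ~ {tail:.4f}")  # essentially zero under this prior
```

So this prior puts the expected increase around one in a million and negligible mass above 10%, matching the stated confidence.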

You might wonder, why b=100 and not b=200, or 101? It's an arbitrary choice, right?

To which I have two responses:

Thanks for the summary, Nicholas. For reference, the paper was discussed on EA Forum here.

One move we can make to reduce paralysis with the maximality rule is to consider whole portfolios/sequences of actions, rather than acts in isolation. We can make up for the potential downsides of some acts with the upsides of others. See my post Hedging against deep and moral uncertainty.

Hi Andreas! I'm worried that the maximality rule will overgeneralize, implying that little is rationally required of us. Consider the decision whether to have children. There are obvious arguments both for and against from a self-interested point of view, and it isn't clear exactly how to weigh them against each other. So, plausibly, having children will max EU according to at least one probability function in our representor, whereas not having children will max EU according to at least one other probability function in our representor. Result via maximality rule: either choice is rationally permissible. Or consider some interesting public policy problem from the perspective of a benevolent social planner. Given the murkiness of social science research, it seems likely that, if we've gone in for the imprecise credence picture, no one policy will maximize EU relative to every credence function in the representor, in which case, many policy choices will be rationally permissible. I wonder if you have thoughts on this?