I outline two problems with risk aversion with respect to the difference one makes. I argue that only certain psychological factors (warm-glow altruism and motivational constraints) might justify difference-making risk aversion. The argument does not apply to risk aversion over states of the world, which is a very different attitude. I also argue that these problems should be seen as a general problem with the "difference-making" framing of effective altruism.

I wrote this as a shortform (to be able to refer to it in discussions) and was then encouraged to turn it into a normal post. About half of the discussion is based on, or inspired by, a draft paper by Greaves et al., "On the Desire to Make a Difference" (of which I have seen an early presentation). I got approval to write a more accessible version, but that does not mean the authors necessarily agree with my arguments here.

Introduction

Most members of the EA community are much less risk-averse in their efforts to do good (i.e. "difference-making") than most of society. However, even within effective altruism, risk aversion in difference-making is sometimes cited as an argument for AMF over the Schistosomiasis Control Initiative, for funding The Humane League rather than the Good Food Institute, for poverty alleviation over AI alignment research, and so on.

Distinguishing two ways you can be risk-averse: over the difference you make vs. over how good the world is

Notice that there is a (somewhat subtle) difference in risk aversion between the following two preferences:

  1. preferring to save 1 life for certain rather than 2 lives with a 50% probability
  2. preferring a world in which everyone has a life of value 1 over a world in which everyone has an equal chance of having a life of value 2 or 0

If you have the first preference, you are risk-averse about your own efforts to do good in the world. Call this risk aversion about difference-making. 

If you have the second preference, you are risk-averse about the overall state the world is in. Call this risk aversion over states of the world.

  • E.g. you prefer a world in which everyone has a life of value 2 over a world in which people have an equal chance of having a life of value 4 or 0.
  • If you have this form of risk aversion, you think it is particularly important to avoid very bad outcomes, e.g. a totalitarian regime in which most people get tortured, or something like that. This kind of risk aversion is similar in spirit to prioritarianism.

You can be risk-averse over states of the world and not about difference-making, or vice versa. Importantly, this post is arguing against risk aversion about difference-making.

I refer to difference-making when someone aims to “make the biggest/a difference in the world”, in contrast to “making the world as good as possible”. In many cases these two aims do not come apart, but once we add risk aversion they do: if you are risk-averse over states of the world, you want to make the worst states less likely; if you want to do some good for certain, you will work in the areas where impact is measurable and more certain.

I show some problems with risk-averse difference-making, and with difference-making as a framework more generally.

An adjacent topic that I do not explore here is loss aversion with respect to one's difference-making.

Toy Scenario

Consider the following toy scenario:

You have five actions to choose from. There are four states of the world (A-D), each being 25% likely.

|          | A            | B            | C            | D            |
| -------- | ------------ | ------------ | ------------ | ------------ |
| Option 1 | Save 1 life  | Save 1 life  | Save 1 life  | Save 1 life  |
| Option 2 | Save 5 lives | 0 lives      | 0 lives      | 0 lives      |
| Option 3 | 0 lives      | Save 5 lives | 0 lives      | 0 lives      |
| Option 4 | 0 lives      | 0 lives      | Save 5 lives | 0 lives      |
| Option 5 | 0 lives      | 0 lives      | 0 lives      | Save 5 lives |

In other words, if you take option/action 1, you save 1 life in all possible states of the world. If you take action 2, 3, 4 or 5, you save 5 lives in 25% of states, and 0 lives in 75% of states.

You can think about "world A" as for instance: "AGI comes before 2030" or "Deworming works", etc. Which action does a risk-averse agent take?

  1. For simplicity, we assume the following form of risk aversion (about difference-making): value = the expected square root of the number of lives you save, V = E[√(lives saved)]; the example will work for any sufficiently risk-averse agent:
    1. For option 1, V = √1 = 1
    2. For options 2 to 5, V = 0.25 · √5 ≈ 0.56

So option 1 is preferred.
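To make the calculation concrete, here is a minimal sketch in Python (the square-root utility is the illustrative assumption used above; any sufficiently concave utility gives the same ranking):

```python
import math

# States A-D are equally likely.
p_state = 0.25

# Lives saved by each option in states A, B, C, D.
options = {
    1: [1, 1, 1, 1],
    2: [5, 0, 0, 0],
    3: [0, 5, 0, 0],
    4: [0, 0, 5, 0],
    5: [0, 0, 0, 5],
}

def difference_making_value(lives_by_state, utility=math.sqrt):
    """Risk-averse value of an option: expected utility of the lives *you* save."""
    return sum(p_state * utility(lives) for lives in lives_by_state)

for option, lives in options.items():
    print(f"Option {option}: V = {difference_making_value(lives):.2f}")

# Option 1: V = 1.00; options 2-5: V ≈ 0.56.
# The risk-averse difference-maker picks option 1, even though options 2-5
# each save 1.25 lives in expectation.
```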

Consequences of Risk Aversion about Difference-Making

I describe two problems with risk aversion about difference-making.

  1. The preferences of a risk-averse agent are time-inconsistent and depend on what you perceive to be the decision unit.
    1. Suppose you face the above decision every day. Then you want to commit to choosing among options 2-5 each day: across many independent days the total number of lives saved concentrates around 1.25 per day, so even a risk-averse evaluation of the whole sequence favours the risky options (see the sketch after this list). However, on the last day of your life, you will want to choose option 1 instead.
      1. Why? Once you have reached the last day, you can make a new decision, and now you know you are only maximising over this single day. Hence, you will choose option 1.
  2. Many risk-averse agents might choose Pareto-dominated outcomes.
    1. When we talk about risk-averse difference-making we normally refer to an individual.
      1. However, suppose you and three other people each face the choice above. If you are risk-averse together, you can coordinate and choose options 2, 3, 4 and 5 respectively. In that case, we save 5 people for certain, and each of us has effectively contributed 1.25 lives. If each of us is risk-averse alone, we all choose option 1 and only 4 people are saved.
      2. This shows that we sometimes want to be risk-averse as a group of friends rather than as individuals. But why stop at this friend group and not also include strangers? If we included everyone, we would just end up with risk aversion over states of the world rather than risk-averse difference-making.
      3. Why should your life be the decision unit, rather than the impact of your family, or the impact made in this decade? You have to choose a decision unit, and there does not seem to be a way to choose one non-arbitrarily.
        1. Technically speaking, this is an argument against difference-making, not against risk aversion. However, risk aversion makes it clear that the unit you choose leads to different choices, and that the choice of unit is hard to justify.
        2. If difference-making (rather than making the world as good as possible) is a strange objective, then risk aversion about difference-making is liable to be strange too.
    2. In addition, even if we all had the same level of risk aversion, advice and cooperation could be (actively) misleading: people cannot effectively cooperate while respecting each other's values.
      1. Consider a risk-averse organisation that can advise many people and “wants to do at least some good”. It will give advice that its risk-averse advisees actually do not want, because it carries too much risk for any one of them. From the organisation's perspective, however, the uncertainty washes out if the advice given to each individual leads to good outcomes in different states of the world, i.e. the outcomes are relatively uncorrelated across advisees.
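Both problems trace back to the fact that the same risk-averse evaluation flips once the decision unit changes. Here is a minimal sketch of that, reusing the square-root utility from above (the horizons of 1, 10 and 100 days are illustrative assumptions, not from the post):

```python
import math
from math import comb

def binom_pmf(n, k, p):
    """Probability of exactly k successes in n independent trials."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def value_risky_policy(n_days, p=0.25, lives=5, utility=math.sqrt):
    """Risk-averse value of the *total* lives saved over n_days if you pick a
    risky option (5 lives with probability 0.25, independently) every day."""
    return sum(binom_pmf(n_days, k, p) * utility(k * lives) for k in range(n_days + 1))

def value_safe_policy(n_days, utility=math.sqrt):
    """Risk-averse value of saving 1 life for certain every day."""
    return utility(n_days)

for n in (1, 10, 100):
    print(f"{n:>3} days: risky ≈ {value_risky_policy(n):.2f}, safe = {value_safe_policy(n):.2f}")

# Roughly: 1 day:    risky ≈ 0.56 < safe = 1.00 -> option 1 wins as a one-off.
#          10 days:  risky ≈ 3.33 > safe ≈ 3.16 -> the risky policy already wins.
#          100 days: risky ≈ 11.1 > safe = 10.0
# Which choice looks best depends on whether the "unit" is one day or a whole
# career - hence the time inconsistency on the last day.
# The same arithmetic covers the four-person case: coordinating on options 2-5
# gives the group sqrt(5) ≈ 2.24, versus sqrt(4) = 2.00 if everyone plays safe.
```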

One might think that all of this is not super problematic: risk-averse difference-making is hard to justify from an altruistic perspective only in certain situations, and one could simply exercise caution as a risk-averse difference-maker, in particular making sure one is not in one of the three traps (time inconsistency, decision-unit dependence, and choosing Pareto-dominated outcomes in multi-agent scenarios).

I think this is not the right takeaway. First, it is actually very hard, perhaps impossible, to avoid these strange conclusions. Second, I think it is also the wrong kind of response: if one thinks that the repugnant conclusion is repugnant, it does not make sense to keep the total view and simply avoid creating many people whose lives are barely worth living.

In Practice

As a risk-neutral actor, you are allowed to behave risk-aversely to do more good

  1. Consider the following argument from Hayden Wilkinson.
    1. You consider giving away either 10% or 90%. If you give away 90%, it is very likely that you will stop giving altogether within a few years; with 10%, you expect to keep giving for your whole life. If so, it is better in expectation to give away 10%.
  2. Similarly, suppose you think that if you see no ex-post impact for 5 years, you will stop trying to do good. Then choosing your altruistic actions in a very risk-averse way is the right altruistic thing to do (as an expected value maximiser).
  3. You might not want to act risk-aversely at all, but instead maximise separately for fuzzies and utilons: you give a bit to something less risky (e.g. GiveDirectly) and optimise the rest for expected value.

You don't have to be a pure altruist. I think that risk aversion about difference-making is a personal preference, like many others, and you are allowed to have selfish preferences [You can have more than one goal].

Thanks to Sam Clarke and Hayden Wilkinson for feedback.

[1]  Risk aversion refers to the tendency of an agent to strictly prefer certainty to uncertainty, e.g. you strictly disprefer a mean-preserving spread of the value of outcomes (for instance, you strictly prefer saving 1 life for certain to a 50/50 gamble between saving 2 lives and saving none, which has the same mean).


Comments

Great post, thanks for writing this!

 

I think the alternatives also have important problems that are worth pointing out.

Suppose instead we're maximizing expected utility for a utility function over states of the world.

If it's unbounded, then

  1. At least in principle (I'd guess not in practice), we also need to check cases and make careful commitments, or else we could violate the sure-thing principle or be vulnerable to Dutch books or money pumps. See here for an example. Some therefore take unbounded utility functions to be irrational.
  2. It's fanatical, and so you need to deal with Pascal's wager, Pascal's mugging, and tiny probabilities of infinite value.

On the other hand, if it's bounded, then 

  1. It can't be stochastically separable, so what you should do could depend on things you can't predictably change (even acausally), like the welfare of ancient Egyptians or of those in causally separated parts of the universe (and of agents making decisions independently of your own), AND
  2. There's a good chance it will be far too egoistic in practice*. The most natural forms** will tend to promote weighing your own interests more than anyone else's in practice, and possibly far more, because (i) you're more sure of your own existence than of others' due to the possibility of solipsism (that only you exist), (ii) differences in highly populated universes whose value approaches either bound will tend to matter far less than differences in universes where only you exist, and (iii) it would be surprising for the value to be close to 0 in a highly populated universe (a small numerical sketch of (ii) follows after the footnotes below). For further illustration and explanation, see:
    1. This thread by Derek Shiller.
    2. The average utilitarian’s solipsism wager by Caspar Oesterheld
    3. Average Utilitarianism Implies Solipsistic Egoism by Christian Tarsney (also covers rank-discounted utilitarianism and variable value theories, depending on the marginal returns to additional population).

* Or else the bound will need to be set based on your beliefs about how many moral patients there are, which seems like motivated reasoning; and if you come to believe sufficiently many more exist, then you could be stuck with the egoistic conclusion again.

**  E.g. a sigmoid function like arctan applied to the total utilitarian sum of welfares; average utilitarianism and other variable value theories; or other functions symmetric around the empty universe, "convex" to the left and "concave" to the right.
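To illustrate (ii) with a toy calculation: below is a small sketch using arctan of the total welfare, i.e. the first example of a bounded function from footnote **. The welfare totals are made-up numbers purely for illustration.

```python
import math

def bounded_value(total_welfare):
    # One example of a bounded utility function over states of the world:
    # arctan applied to the total sum of welfares (as in footnote ** above).
    return math.atan(total_welfare)

# Marginal value of one extra unit of welfare (e.g. one more good life),
# starting from worlds with increasing total welfare.
for total in (0, 10, 1_000, 100_000):
    gain = bounded_value(total + 1) - bounded_value(total)
    print(f"total welfare {total:>7}: value of +1 ≈ {gain:.1e}")

# Roughly: 7.9e-01 at total 0, 9.0e-03 at 10, 1.0e-06 at 1,000, 1.0e-10 at 100,000.
# The same one-unit difference matters far less in a highly populated world whose
# value is already near the bound than in a nearly empty (e.g. solipsistic) one.
```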

 

Stochastic dominance as a decision rule seems to fare better, although it may leave multiple options permissible, and the options we actually choose may suffer from the kinds of problems above anyway or otherwise violate some other requirement of rationality.  Selecting uniformly at random among available permissible options (including policies over future actions) could at least reduce egoistic biases, but I wouldn't be surprised if it had other serious problems.