I outline two problems with risk aversion about the difference one makes. I argue that only certain psychological constraints (warm-glow altruism and motivational limits) might justify difference-making risk aversion. This argument does not apply to risk aversion over states of the world, which is a different attitude. I also argue that it should be seen as a general problem with the "difference-making" framing of effective altruism.
I wrote this as a shortform (to be able to refer to it in discussions) and was then encouraged to turn it into a normal post. Roughly half of the discussion is based on, or inspired by, a draft paper by Greaves et al., "On the Desire to Make a Difference", of which I have seen an early presentation. I got approval to write this more accessible version (which does not mean the authors necessarily agree with my arguments here).
Introduction
Most members of the EA community are much less risk-averse in their efforts to do good (i.e. in their "difference-making") than most of society. However, even within effective altruism, risk aversion about difference-making is sometimes cited as an argument for AMF over the Schistosomiasis Control Initiative, for funding The Humane League rather than the Good Food Institute, for poverty alleviation over AI alignment research, and so on.
Distinguishing two ways you can be risk-averse: over the difference you make vs. over how good the world is
Notice that there is a (somewhat subtle) difference in risk aversion between the following two preferences:
- preferring to save 1 life for certain rather than 2 lives with a 50% probability
- preferring a world in which everyone has a life of value 1 over a world in which everyone has an equal chance of having a life of value 2 or 0
If you have the first preference, you are risk-averse about your own efforts to do good in the world. Call this risk aversion about difference-making.
If you have the second preference, you are risk-averse about the overall state the world is in. Call this risk aversion over states of the world.
- E.g. you prefer a world in which everyone has a life of value 2 over a world in which people have an equal chance of having a life of value 4 or 0.
- If you have this form of risk aversion, you think it is particularly important to avoid very bad outcomes, e.g. a totalitarian regime in which most people are tortured. This kind of risk aversion is similar in spirit to prioritarianism.
You can be risk-averse over states of the world and not about difference-making, or vice versa. Importantly, this post is arguing against risk aversion about difference-making.
I speak of difference-making when someone aims to "make the biggest/a difference in the world", in contrast to "making the world as good as possible". In many cases these two aims do not come apart, but once we add risk aversion they do: if you are risk-averse over states of the world, you want to make the bad states less likely; if you want to do some good for certain, you will work in areas where impact is measurable and more certain.
I show some problems with risk-averse difference-making, and with difference-making as a framework more generally.
An adjacent topic not explored here is loss aversion with respect to one's difference-making.
Toy Scenario
Consider the following toy scenario:
You have five actions to choose from. There are four states of the world (A-D), each being 25% likely.
| | A | B | C | D |
|---|---|---|---|---|
| Option 1 | Save 1 life | Save 1 life | Save 1 life | Save 1 life |
| Option 2 | Save 5 lives | 0 lives | 0 lives | 0 lives |
| Option 3 | 0 lives | Save 5 lives | 0 lives | 0 lives |
| Option 4 | 0 lives | 0 lives | Save 5 lives | 0 lives |
| Option 5 | 0 lives | 0 lives | 0 lives | Save 5 lives |
In other words, if you take option/action 1, you save 1 life in all possible states of the world. If you take action 2, 3, 4 or 5, you save 5 lives in 25% of states, and 0 lives in 75% of states.
You can think about "world A" as for instance: "AGI comes before 2030" or "Deworming works", etc. Which action does a risk-averse agent take?
- For simplicity, we assume the following form of risk aversion (about difference-making): value $= \sqrt{\text{lives saved}}$ (but the example will work for any sufficiently risk-averse agent):
  - For option 1, $V = \sqrt{1} = 1$
  - For options 2 to 5, $V = 0.25 \cdot \sqrt{5} + 0.75 \cdot \sqrt{0} \approx 0.56$

So option 1 is preferred.
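For readers who want to check the numbers, here is a minimal Python sketch (my own addition, using the square-root utility assumed above, which is just one example of a sufficiently concave function):

```python
import math

# Lives saved by each option in the four equally likely states A-D
options = {
    1: [1, 1, 1, 1],
    2: [5, 0, 0, 0],
    3: [0, 5, 0, 0],
    4: [0, 0, 5, 0],
    5: [0, 0, 0, 5],
}

def u(lives):
    # sqrt is one concave (risk-averse) utility; any sufficiently
    # concave function yields the same ranking here
    return math.sqrt(lives)

for name, outcomes in options.items():
    v = sum(u(x) for x in outcomes) / len(outcomes)  # states equally likely
    print(f"option {name}: V = {v:.2f}")
# option 1: V = 1.00; options 2-5: V = 0.56 -> option 1 is preferred
```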
Consequences of Risk Aversion about Difference-Making
I describe two problems with risk aversion about difference-making:
- The preferences of a risk-averse agent are time-inconsistent and depend on what you perceive to be the decision unit.
  - Suppose you make the above decision each day. Then you want to commit to spreading your choices across options 2-5: whichever state obtains, you then save 5 lives on a quarter of your days, i.e. 1.25 lives per day for certain. However, on the last day you will live, you want to choose option 1 instead.
  - Why? Once you reach the last day of your life, you can make a new decision, and now you know you are maximising over this one day only. Hence, you will choose option 1 (see the sketch after this list).
- Many risk-averse agents might choose Pareto-dominated outcomes.
- When we talk about risk-averse difference-making we normally refer to an individual.
  - However, suppose you and three other people face the choice above. If you are risk-averse together, you can choose options 2, 3, 4, and 5 respectively. In that case the group saves 5 people for certain, and each of you has effectively saved 1.25 people with certainty. If each of you is risk-averse alone, you all choose option 1, saving only 4 people.
  - This shows that we sometimes want to be risk-averse among friends rather than as individuals. But why stop at this friend group and not include strangers too? If we included everyone, we would simply end up with risk aversion over states of the world rather than risk-averse difference-making.
  - Why should your life be the decision unit, rather than the impact of your family, or the impact made in this decade? You have to choose a decision unit, and there does not seem to be a non-arbitrary way to choose one.
    - Technically speaking, this is an argument against difference-making, not against risk aversion. However, risk aversion makes the problem vivid: the unit you choose leads to different choices, and it is hard to justify any particular unit.
    - If difference-making (rather than making the world as good as possible) is weird, then in particular risk aversion about difference-making might also be weird.
  - In addition, even if we all had the same level of risk aversion, advice and cooperation would be (actively) misleading - people could not effectively cooperate while respecting each other's values.
    - Consider a risk-averse organisation that advises many people and "wants to do at least some good". It will give advice that its risk-averse advisees actually don't want, because it contains too much risk for each of them. From the organisation's perspective, however, the uncertainty washes out, since the advice to different individuals leads to good outcomes in different states of the world - the outcomes are relatively uncorrelated across advisees.
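Both problems can be made concrete in a few lines of Python. This is my own illustration, assuming the square-root utility from the toy scenario; the numbers, not the code, are what matter:

```python
import math

def u(lives):
    # sqrt as a stand-in for any sufficiently concave (risk-averse) utility
    return math.sqrt(lives)

# --- One-shot decision: the safe option wins for a risk-averse agent ---
v_safe  = u(1)                          # option 1: save 1 life for certain
v_risky = 0.25 * u(5) + 0.75 * u(0)     # options 2-5: 5 lives in one state
print(f"one day:   safe={v_safe:.2f}  risky={v_risky:.2f}")   # 1.00 vs 0.56

# --- Whole-life decision over n days ---
# If you rotate through options 2-5, then whichever state is actual you
# succeed on a quarter of your days: 1.25 lives/day for certain.
n = 364
v_all_safe = u(n * 1)        # 364 lives for certain
v_rotation = u(n * 1.25)     # 455 lives for certain
print(f"{n} days: all-safe={v_all_safe:.2f}  rotate 2-5={v_rotation:.2f}")
# 19.08 vs 21.33: ex ante you commit to the rotation, but on any single
# day taken as the decision unit (e.g. the last day), the safe option
# looks better -- time inconsistency / decision-unit dependence.

# --- Four agents instead of four days: the same trick ---
# Agents taking options 2-5 save 5 lives for certain, 1.25 per agent.
print(f"per agent: alone={u(1):.2f}  coordinated={u(5/4):.2f}")  # 1.00 vs 1.12
# Four individually risk-averse agents each choosing option 1 save only
# 4 lives; coordination Pareto-dominates, but only if each agent counts
# the group's difference rather than their own.
```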
One might think that all of this is not too problematic: risk-averse difference-making is hard to justify from an altruistic perspective only in certain situations, so one could simply exercise caution as a risk-averse difference-maker, making sure in particular not to fall into one of the three traps (time inconsistency, decision-unit dependence, and choosing Pareto-dominated outcomes in multi-agent scenarios).
I think this is not the right takeaway. First, it is actually very hard, perhaps impossible, to avoid these weird conclusions. Second, it is also the wrong kind of response: if one thinks the repugnant conclusion is repugnant, it does not make sense to keep the total view and simply avoid creating many people whose lives are barely worth living.
In Practice
As a risk-neutral actor, you are allowed to behave risk-aversely in order to do more good
- Consider the following argument from Hayden Wilkinson.
  - You are considering giving away either 10% or 90% of your income. If you give away 90%, it is very likely that you will stop giving within a few years; with 10%, you expect to continue for your whole life. If so, it is better to give away 10% (see the illustrative numbers after this list).
- Similarly, suppose you think that if you see no ex-post impact for 5 years, you will stop trying to do good altogether. Then choosing altruistic actions in a fairly risk-averse way is the right altruistic thing to do (even as an expected-value maximiser).
- You might not want to act risk-aversely at all, but instead maximise separately for fuzzies and utilons: give a bit to something less risky (e.g. GiveDirectly) and optimise the rest for expected value.
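With illustrative numbers (mine, not Wilkinson's): giving 10% of a constant income for 40 years donates 0.1 × 40 = 4 income-years, whereas giving 90% for 3 years before stopping donates only 0.9 × 3 = 2.7 income-years. The less ambitious but sustainable commitment does more good in total.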
You don't have to be a pure altruist. I think that risk aversion about difference-making is a personal preference, like many others, and you are allowed to have selfish preferences [You can have more than one goal].
Thanks to Sam Clarke and Hayden Wilkinson for feedback.
[1] Risk aversion refers to an agent's tendency to strictly prefer certainty to uncertainty: you strictly disprefer a mean-preserving spread of the value of outcomes. E.g. saving 1 life for certain and a 50/50 gamble between saving 0 and 2 lives have the same mean, but the gamble is a mean-preserving spread, so a risk-averse agent strictly prefers the certain option.
Great post, thanks for writing this!
I think the alternatives also have important problems that are worth pointing out.
Suppose instead we're maximizing expected utility for a utility function over states of the world.
If it's unbounded, then you run into fanaticism and St. Petersburg-style problems.
On the other hand, if it's bounded,** then the function can saturate: if enough value is already at stake in the world, the difference any action makes to expected utility becomes negligible, which pushes towards the egoistic conclusion.*
* Or else the bound will need to be set based on your beliefs about how many moral patients there are, which seems like motivated reasoning, and if you come to believe sufficiently more exist, then you could be stuck with the egoistic conclusion again.
** E.g. a sigmoid function like arctan applied to the total utilitarian sum of welfares, average utilitarianism and other variable-value theories, or other functions symmetric around the empty universe, "convex" to the left and "concave" to the right.
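A quick sketch (my own, using the arctan example from the footnote; welfare units are arbitrary) of the saturation behind the egoistic-conclusion worry:

```python
import math

def u(total_welfare):
    # a bounded social utility: arctan of total welfare, one of the
    # sigmoid-like examples mentioned in the footnote above
    return math.atan(total_welfare)

for background in (10, 1_000, 100_000):
    gain = u(background + 100) - u(background)  # add 100 units of welfare
    print(f"background {background:>7}: utility gain from +100 = {gain:.1e}")
# 9.1e-02, 9.1e-05, 1.0e-08: the more welfare you believe already exists,
# the less the same altruistic contribution moves a bounded utility,
# so small personal stakes can come to dominate.
```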
Stochastic dominance as a decision rule seems to fare better, although it may leave multiple options permissible, and the options we actually choose may suffer from the kinds of problems above anyway or otherwise violate some other requirement of rationality. Selecting uniformly at random among available permissible options (including policies over future actions) could at least reduce egoistic biases, but I wouldn't be surprised if it had other serious problems.
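And a minimal sketch (again my own) of stochastic dominance as a decision rule, applied to the post's toy scenario:

```python
def cdf(lottery, x):
    # probability of doing no better than x under the lottery
    return sum(p for outcome, p in lottery.items() if outcome <= x)

def dominates(a, b):
    # first-order stochastic dominance: a is at least as likely as b
    # to exceed every threshold, and strictly more likely for some
    xs = sorted(set(a) | set(b))
    return (all(cdf(a, x) <= cdf(b, x) for x in xs)
            and any(cdf(a, x) < cdf(b, x) for x in xs))

safe  = {1: 1.0}              # option 1: save 1 life for certain
risky = {0: 0.75, 5: 0.25}    # options 2-5: save 5 lives with p = 0.25
print(dominates(safe, risky), dominates(risky, safe))  # False False
# Neither option dominates the other, so a stochastic-dominance rule
# leaves both permissible -- the "multiple options permissible" point.
```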