The drowning child argument has persuaded many people to join the effective altruism movement. It highlights one of EA’s central concepts: opportunity costs. The money spent on expensive clothes could instead be used to donate to cost-effective charities overseas, where it can save a person's life. Of course, the force of the argument does not stop there: If you're left with more money, or if you could work for a few extra hours to earn more, you can donate more to help additional people. Every decision we make has opportunity costs – this realization can feel overwhelming.
Several critics consider the ideas behind effective altruism flawed or impractical because they appear to demand too much from us. In his widely cited essay A Critique of Utilitarianism, Bernard Williams argues that the idea of always trying to bring about the best outcome1 places too high a burden on a person by taking away their choice in what they want to do in life:
It is to alienate him in a real sense from his actions and the source of his action in his own convictions. It is to make him into a channel between the input of everyone’s projects, including his own, and an output of optimific decision; but this is to neglect the extent to which his actions and his decisions have to be seen as the actions and decisions which flow from the projects and attitudes with which he is most closely identified. It is thus, in the most literal sense, an attack on his integrity.
When I first read Williams’ critique a few years ago, I already considered myself an effective altruist and, as such, I found the arguments surprisingly unimpressive. I felt that obviously, if someone’s goal is to make the world a better place, this is the person’s decision, her chosen life-project. However, as I’m now more aware of than before, people tend to overestimate how similar others are to themselves. I was already so immersed in EA thinking that I forgot how things may feel for others. Williams’ critique highlights an important point. People can be altruistically motivated but have other goals in life besides doing the most good, or they may have pre-existing commitments that contribute to their identity and happiness. If so, they might – consciously or unconsciously – view the all-encompassing interpretations of effective altruism as something that threatens what they value. This can manifest itself either in rationalizations against the idea of EA, or – if the person is more introspective – in a genuine conflict of internal motivations, which often results in unhappiness. The situation becomes especially difficult for such a person if they are being pressured, either externally (by other people’s expectations) or internally (e.g. by comparing themselves to a very high moral standard or to a person who gave up everything else for effective altruism). Needless to say, such an outcome is very unfortunate, both for the people themselves and for the movement, which loses their potential involvement.
A healthier framing
Perhaps it cannot be entirely avoided that some people are going to have an aversive reaction, at least to some extent, when they learn more about effective altruism. There is truth to the saying “ignorance is bliss”: Some ideas, like the drowning child argument, irreversibly change the way we view life. Nevertheless, I believe that the sense of being overwhelmed discussed above is unwarranted.
In this article I want to present a way to see or frame effective altruism that I consider both philosophically correct and useful from a motivational point of view. I prefer to think of EA as a choice, rather than some sort of external moral obligation. In reply to people who are concerned that effective altruism is overwhelming or overly demanding, I want to point out two important considerations:
- If you view EA as a possible goal to have, there is nothing contradictory about having other goals in addition.
- Even if EA becomes your only goal, it does not necessarily mean that you should spend the majority of your time thinking about it, or change your life in drastic ways. (More on this below.)
What are goals?
Imagine you could shape yourself and the world any way you like, unconstrained by the limits of what is considered feasible. What would you do? Which changes would you make? Your answers to these questions describe your ideal world. To guide our actions in practice, we also have to specify how important various good things are compared to other good things. So, in a second step, imagine that you had the same powers to shape the world as you wish, but this time, they are limited. You cannot make every change you had in mind, so you need to prioritize some changes over others. Which changes would be most important to you? The outcome of this thought experiment approximates your goals2.
1) Having other goals besides EA
It is perfectly possible to give a lot of weight to one's personal well-being or one's favored life-projects, while still choosing to dedicate some amount of time and money to effective altruism. There is nothing contradictory about having multiple goals – it just means that one is willing to make tradeoffs in the face of resources being limited. Some people may think that it is somehow wrong or “inelegant” to have several goals, but as long as the person herself is fine with it, that’s all that matters.
2) EA as a goal does not necessarily imply sacrificing all other commitments
Giving up all personal commitments is bad on any account of rational goal-achievement, if this would make a person psychologically unable to continue working productively toward their goals. Compared to a perfect utilitarian robot, humans have many shortcomings. This applies to everyone, but the degrees vary from person to person. It makes just as little sense to compare oneself to a perfect utilitarian robot as to compare oneself to a person who is, for whatever reason, in an unreachably different starting position from one's own.
When people think of effective altruists, what sort of person are they typically picturing? Most likely, we first think of the people who went into investment banking, working excessive hours and night-shifts, to donate 50% of their income; or the people who quit promising career paths in order to work full-time for EA organizations; or the famous philosopher who churns out paper after paper. Of course, these sorts of people are outliers, high-achievers who don’t represent the typical population. It takes specific personality traits and skills to be motivated and able to do EA almost as an extreme sport. A more typical EA would be someone with a more “normal” job who donates, say, 10% of their income, or everything above a limit that is enough to live comfortably while having some financial security in the future.
The “high-achievers” mentioned first are arguably the EAs who make the most difference individually, yet they only represent a small minority of EAs, even more so as EA becomes increasingly mainstream. Are these people more motivated than other EAs? Are they the only ones who “take EA seriously”? I don't think so. These people aren’t more motivated to be EAs; rather, they are more motivated and/or more suited to do the things that are most effective from an EA standpoint. This distinction is important. Outliers don’t necessarily care more, but their personalities and skills make them better positioned to contribute effectively3.
For the typical person, therefore, being an EA does not imply trying to do all the things the highest-achieving EAs are doing. One could be tempted to view this as a watered-down version of EA, as “EA light”, but this would be getting it wrong. If you’re doing the best you can, there’s nothing watered down about your goals, your moral ideals. Comparing yourself to the most skilled and hardworking EAs would be making the same kind of mistake, on a lower level, as comparing yourself to a perfect utilitarian robot. Even the most hardworking EAs need breaks sometimes, and in comparison with a perfect robot, they too fall short. Rationality is about making the best out of what you have. Holding yourself to impossible standards is silly and counterproductive. The better approach is to find smaller but sustainable ways to contribute.
Personalities are different
People don’t only differ in regard to skills, they also differ in regard to what they’re interested in. In my case, becoming an effective altruist came easily to me. I discovered all this information out there, on LessWrong and the now less active Felicifia, and I couldn’t stop reading and having discussions with others. It wasn’t like I had to force myself to do any of that. I think about EA-related topics most of the time because this is what I enjoy doing; if I found it strenuous (as some people will), I would be doing it less.
Personality differences also reflect what sort of things people are (de)motivated by. Some people are attracted to weird (i.e. non-mainstream) ideas because they enjoy discussions. Others might dislike having to talk about or defend their positions constantly on the internet or in social settings, and if that's the case, a lot of EA-related activities become much harder.
Finally, personality differences also affect the way people prefer their life to go as a whole. Having balance in life is important for everyone, but some need a lot of it and others are more okay with a life that is optimized obsessively towards a single goal. People might have life-projects or strong commitments that would make them miserable if they had to be abandoned: wanting children, for example, or a specific job they really like. These things are compatible with EA, because EA doesn’t only work if done as an extreme sport.
It is important to take into account that people differ from each other in many respects, including prior commitments, personalities and skills. Being rational about achieving your goals means, among other things, understanding what is within your reach and what isn’t, and not holding yourself responsible for being unable to do the impossible. I have the impression that some personality types, especially people who are very altruistic and caring, sometimes suffer from holding themselves to too high a standard, and from being unable to allow themselves to relax while all the world’s suffering weighs on their shoulders. Ben Kuhn’s blogpost To stressed-out altruists contains a great insight that is worth quoting at length:
I think the culprit of stress for many EAs is a lack of compartmentalization. Now, to really understand the ideas of effective altruism, you need not to compartmentalize too much—for example, I have a roommate who buys the idea of altruism in the abstract, but doesn’t do anything about it because he separates his brain’s “abstract morality” module and its “decide what to do” module. Because of things like this, compartmentalization has an often-deserved poor reputation for letting people evade cognitive dissonance without really coming to terms with their conflicting ideas. But compartmentalization isn’t always maladaptive: if you do it too little, then whatever you care about completely consumes you to the point of non-functional misery.
Effective altruism requires less compartmentalization than the average person has, so standard effective altruism discourse, which is calibrated against the average person, tries to break down compartmentalization. But you probably aren’t the average person. If you’re stressed out about effective altruism, ignore the standard EA discourse and compartmentalize more!
Most of the advice I’d give to people struggling with EA is along the lines Ben talks about. Below, I have collected a list of things I use myself or would recommend to others trying to learn to compartmentalize more. Of course, not everyone will find these applicable or equally helpful.
- Avoiding daily dilemmas: If you find yourself struggling internally every time you go to the grocery store, wondering whether you should buy expensive products or rather save the money for donations, it might make sense to set up clear heuristics for yourself, e.g. a yearly budget for charity. Those heuristics then take care of recurring EA-related decisions. Choosing to donate a percentage of your income every month instead of “trying to see how much you can save” each day allows you to stop ruminating about maybe saving the additional fifty cents (= deworming one more child!) every time you spend money on something. This way, you can free your mind from looming opportunity costs and focus on your personal needs, while still having the altruistic impact through your regular donations.
- Putting emotions into perspective: I find that a basic understanding of evolutionary biology can be helpful in some situations. To a large degree (the rest is upbringing, experiences, randomness), your emotional reactions to things are the way they are because these reactions proved beneficial, in terms of gene-copying success, in the environment of your ancestors. Your personal goals are not tied to gene-copying success, and the modern environment is very different from the evolutionary one. Therefore, we should expect that our emotions are not calibrated to fulfill EA-related goals in a modern environment. When you’re feeling really bad about mistakes you may have made or about not doing enough, this does not necessarily reflect that what you did or didn’t do is indeed really bad. From a pragmatic perspective, feeling bad only makes sense as a learning mechanism: it is useful only if there’s something you can do differently next time. Often people blame themselves for things that couldn’t be anticipated or changed. And even if a genuine mistake happened, once the lesson is learned, it is important to try to take a forward-looking perspective and not fret about the past.
Note: This next example works very well for me personally, but I can imagine that the competitive aspect of “trying to score as many points as possible” is bad for some people.
- Viewing EA as a game with varying levels of difficulty: When one looks at life from a consequentialist point of view where opportunity costs are always looming in the background, the vast majority of things to do will be “wrong” in the sense that a perfect robot with one's exact goals would do them differently. However, that need not concern us. Instead of looking at things as “right vs. wrong”, it's a lot more helpful to think of things like a score system in video games, where you can gather points open-endedly. Forgoing a few points here and there will be fine as long as we keep track of the important decisions. Because life isn’t fair, the level of difficulty of the video game will sometimes be “hard” or even “insane”, depending on the situation you’re in. The robot, on the other hand, would be playing on “easy”, because it would never encounter a lack of willpower, skills or thinking capacity. So don’t worry about not being able to score many points in the absolute sense, and focus instead on how many points are reachable within the difficulty level that you’re playing on.
- Separating conflicting motivations: If you find it hard to donate to organizations recommended by other EAs because you have a commitment to other charities, e.g. because you’ve been a donor there for a long time, have visited the charity, or have found that their approach strongly resonates with you, then consider splitting your charitable budget into two parts, separating what you feel good about from what you consider has the best effect in terms of helping people. See this blogpost for a better explanation. An additional benefit is that, if this splitting is always an option, it prevents you from rationalizing that your previously favored charity is also the one that just happens to be most effective from an EA-point-of-view.
- Talking to other EAs: If you’re feeling bad about something or have a problem with some aspect of EA, it is likely that you’re not the only person to whom this ever happened. Talking to others who may be in a good position to help out or give advice might be a good thing to try.
2This question is of course a very difficult one, and what someone says after thinking about it for five minutes might be quite different from what someone would choose if she had heard all the ethical arguments in the world and thought about the matter for a long time. If you care about making decisions for good/informed reasons, you might want to refrain from committing too much to specific answers and instead give weight to what a better informed version of yourself would say after longer reflection.
3Of course, the matter is not black-and-white. Caring/commitment does matter to a significant extent, i.e. there will be people who would be suited to do the extreme-EA thing but don’t try enough.
4This also goes the other way (cf. “scope insensitivity”), but standard EA discourse talks a lot about this already.