Room for Other Things: How to adjust if EA seems overwhelming

by Lukas_Gloor, 26th Mar 2015, 15 comments



Overwhelming obligations

The drowning child argument, which has persuaded many people to join the effective altruism movement, highlights one of EA’s central concepts: opportunity costs. The money spent on expensive clothes or shoes could instead be donated to cost-effective charities, where it can save the life of a person in Africa. Of course, the force of the argument does not stop there: if you have more money left, or if you could work a few extra hours to earn more, that too can be used to help additional people. Every decision we make has opportunity costs – and this realization can feel overwhelming.

Several critics consider the ideas behind effective altruism flawed or impractical because they appear to demand too much from people. In his widely cited essay A Critique of Utilitarianism, Bernard Williams argues that the idea of always trying to bring about the best outcome[1] places too high a burden on a person by taking away their choice in what they want to do in life: 

 

It is to alienate him in a real sense from his actions and the source of his action in his own convictions. It is to make him into a channel between the input of everyone’s projects, including his own, and an output of optimific decision; but this is to neglect the extent to which his actions and his decisions have to be seen as the actions and decisions which flow from the projects and attitudes with which he is most closely identified. It is thus, in the most literal sense, an attack on his integrity.

 

The same of course also applies to women.

When I first read Williams’ critique a few years ago, I already considered myself an effective altruist, and as such I found the arguments surprisingly unimpressive. It seemed obvious to me that if someone’s goal is to make the world a better place, this is that person’s own decision and chosen life-project. But, as I’m now more aware than I was then, people tend to overestimate how similar other people are to themselves. I was already so immersed in EA-thinking that I had forgotten how I felt before. Williams’ critique highlights an important point: if people are altruistically motivated but have other goals in life besides doing the most good, or other commitments that contribute to their self-identity and happiness, then they might – consciously or unconsciously – view effective altruism as a threat to the things they consider valuable. This might manifest either in rationalizations against the concept of EA, or – if the person is more rational – in a genuine conflict of internal motivations, which may result in unhappiness. The situation becomes especially difficult if the person is also being pressured, either externally (by other people’s expectations) or internally (e.g. by comparing themselves to a very high moral standard, or to a person who gave up everything else for effective altruism). Needless to say, this outcome is very unfortunate: bad for the people themselves, and bad because it keeps them from getting involved.


A healthier framing

Perhaps it cannot be avoided completely that some people are going to have this reaction, at least to some extent, when they learn more about effective altruism. There is truth to the saying “ignorance is bliss”: some ideas, like the drowning child argument, irreversibly change the way we view things. Nevertheless, I believe that the feeling of “overwhelmingness” discussed above is not called for.

In this article I want to present a way to see or frame effective altruism that I consider both philosophically correct and useful from a motivational point of view. I prefer to think of EA as a choice, rather than some sort of external moral obligation. In reply to people who are concerned that effective altruism is overwhelming or overly demanding, I want to point out two important considerations: 

  1. If you view EA as a possible goal to have, there is nothing contradictory about having other goals in addition. 
  2. Even if EA becomes your only goal, it does not mean that you should necessarily spend the majority of your time thinking about it, or change your life in drastic ways. (More on this below.)

What are goals?

Imagine you could shape yourself and the world any way you like, unconstrained by what is considered feasible: what would you do? Which changes would you make? The result describes your ideal world – everything that is at all important to you. However, it does not yet describe how important these things are relative to one another. So imagine you had the same superpowers, but this time limited: you cannot make every change you had in mind and need to prioritize some changes over others. Which changes would be most important to you? The outcome of this thought experiment approximates your goals.[2]

1) Having other goals besides EA

This part is trivial. Someone could give more weight to personal well-being or favored life-projects, while still spending some amount of time and money on effective altruism. There is nothing contradictory about having multiple goals – it just means that one has to make tradeoffs because resources are limited. Some people may think that it is somehow wrong or “inelegant” to have several goals, but as long as the person herself is fine with it, that’s all that matters. 

2) EA as a goal does not necessarily imply sacrificing all other commitments

Giving up all your personal commitments is bad if it would make you psychologically unable to keep working productively on EA-projects. Compared to a perfect utilitarian robot, humans have many shortcomings; this is true for all humans, but the degree varies from person to person. It makes just as little sense to compare yourself to a perfect utilitarian robot as to compare yourself to a person who is, for whatever reason, in an unreachably different starting position from yours. 

When people think of effective altruists, what sort of EA are they typically thinking of? Perhaps they first think of the people who went into investment banking, working excessive hours and night-shifts, to donate 50% of their income; or the people who quit promising career paths in order to work full time for EA organizations; or the famous philosopher who churns out paper after paper. Of course, people who do something like that are outliers – high-achievers who don’t represent the typical (university) population. It takes specific personality traits and skills to be motivated and able to do EA almost like an extreme sport. A more typical EA would be someone with a more “normal” job who donates, say, 10% of their income, or everything above a limit that is enough to live comfortably while keeping some financial security for the future. 

The “high-achievers” mentioned first are arguably the EAs who make the most difference individually, yet they represent only a small minority of EAs – all the more so as EA becomes more mainstream. Are these people more motivated than other EAs? Are they the only ones who “take EA seriously”? This seems highly implausible. These people aren’t more motivated to be EAs; rather, they are more motivated and/or better suited to do the things that are most effective from an EA standpoint. This distinction is important: it means they don’t necessarily care more, but their personalities and skills put them in a better position to contribute effectively.[3]

For the typical person, therefore, being an EA does not imply trying to do all the things those high-achiever EAs are doing. One could be tempted to view this as a watered-down version of EA, as “EA light”, but this would be getting it wrong: If you’re doing the best you can, there’s nothing watered down about your goals. Comparing yourself to the most skilled and hardworking EAs would be making the same kind of mistake, on a lower level, as comparing yourself to a perfect utilitarian robot. Even the most hardworking EAs need breaks sometimes, and in comparison with a perfect robot, they too fall short. Rationality is about making the best out of what you have. Holding yourself to impossible standards is silly and counterproductive; instead, the better approach is to find smaller but sustainable ways to contribute.

Personalities are different

People don’t only differ in their skills; they also differ in what they’re interested in. In my case, becoming an effective altruist was easy. I discovered all this information out there – LessWrong and the now less active Felicifia – and I couldn’t stop reading and having discussions with people; it wasn’t as if I had to force myself to do any of it. I think about EA-related topics most of the time because this is what I enjoy doing; if I found it strenuous (as some people will), I would be doing it less.

Personality differences also reflect what sort of things people are (de)motivated by. Some people are attracted to weird (i.e. non-mainstream) ideas because they enjoy discussions. Others might dislike having to talk about or defend their positions constantly on the internet or in social settings, and if that's the case, a lot of EA-related activities become much harder. 

Finally, personality differences also affect how people prefer their life to go as a whole. Balance in life is important for everyone, but some people need a lot of it, while others are more okay with a life optimized obsessively towards one single thing. People may have life-projects or strong commitments that would make them miserable if abandoned – wanting children, for example, or a specific job one loves. These things are compatible with EA, because EA does not only work when done as an extreme sport.


Some advice

It is important to take into account that people differ from each other in many respects, including previous commitments, personality and skills. Being rational about achieving your goals means, among other things, understanding what is within your reach and what isn’t, and not holding yourself responsible for being unable to do the impossible. I have the impression that some personality types – especially very altruistic and caring people – sometimes suffer from holding themselves to too high a standard, from being unable to allow themselves to relax while all the world’s suffering rests on their shoulders. Ben Kuhn’s blogpost To stressed-out altruists contains a great insight that is worth quoting at length:

I think the culprit of stress for many EAs is a lack of compartmentalization. Now, to really understand the ideas of effective altruism, you need not to compartmentalize too much—for example, I have a roommate who buys the idea of altruism in the abstract, but doesn’t do anything about it because he separates his brain’s “abstract morality” module and its “decide what to do” module. Because of things like this, compartmentalization has an often-deserved poor reputation for letting people evade cognitive dissonance without really coming to terms with their conflicting ideas. But compartmentalization isn’t always maladaptive: if you do it too little, then whatever you care about completely consumes you to the point of non-functional misery.

 

Effective altruism requires less compartmentalization than the average person has, so standard effective altruism discourse, which is calibrated against the average person, tries to break down compartmentalization. But you probably aren’t the average person. If you’re stressed out about effective altruism, ignore the standard EA discourse and compartmentalize more.


Most of the advice I’d give to people struggling with EA is along the lines of what Ben talks about. I have collected a list of things I use myself or would recommend to others trying to learn to compartmentalize more – of course, not everyone will find all of these applicable or equally helpful. 

  • Avoiding daily dilemmas: If you find yourself having internal struggles every time you go to the grocery store, wondering whether you should buy expensive products or rather save the money for donations, it might make sense to set up clear heuristics for yourself, e.g. a yearly budget for charity, that take care of recurring EA-related decisions. Choosing to donate a percentage of your income every month instead of “trying to see how much you can save” each day allows you to not have to worry about saving the additional fifty cents (= deworming one more child!) every time you spend money on something. This way, you can free your mind from opportunity costs and focus on your personal needs, while still having the altruistic impact through your regular donations.
     
  • Putting emotions into perspective: I find that a basic understanding of evolutionary biology can be helpful in some situations. To a large extent (the rest is upbringing, experiences, randomness), your emotional reactions are the way they are because these reactions proved beneficial, in terms of gene-copying success, in the environment of your ancestors. Your personal goals are not tied to gene-copying success, and the modern environment is very different from the evolutionary one. We should therefore expect that emotions are not well calibrated to EA-related goals in the modern world. When you’re feeling bad about a mistake you made, or about not doing something, the intensity of the feeling does not necessarily track how bad the thing you did (or didn’t do) actually was. From a pragmatic perspective, feeling bad only makes sense as a learning mechanism, i.e. if there is something you can learn to do differently next time. Often people blame themselves for things that couldn’t have been anticipated or changed. And even when a genuine mistake happened, once the lesson is learned it is important to take a forward-looking perspective and not fret about the past. 


Note: This next example works very well for me personally, but I can imagine that the competitive aspect of “trying to score as many points as possible” is bad for some people.

  • Viewing EA as a game with varying levels of difficulty: When you look at life from a consequentialist point of view where opportunity costs are always looming in the background, the vast majority of things you do will be “wrong” in the sense that a perfect robot with your exact goals would do them differently. However, that’s beside the point. Instead of looking at things as “right vs. wrong”, it fits better to think of a score system in video games, where you can gather points open-endedly. Foregoing a few points here and there is fine as long as you keep track of the important decisions. Because life isn’t fair, the difficulty level of the game will sometimes be “hard” or even “insane”, depending on the situation you’re in. The robot, on the other hand, would be playing on “easy”, because it would never encounter a lack of willpower, skills or thinking capacity. So don’t worry about not being able to score very many points in the absolute sense; focus instead on how many points are reachable within the difficulty level that you’re playing on.
     
  • Separating conflicting motivations: If you find it hard to donate to organizations recommended by other EAs because you have a commitment to other charities, e.g. because you’ve been a donor there for a long time, have visited the charity, or have found that their approach strongly resonates with you, then consider splitting your charitable budget in two parts, separating what you feel good about from what you consider has the best effect in terms of helping people. See this blogpost for a better explanation.  An additional benefit is that, if this splitting is always an option, it prevents you from rationalizing that your previously favored charity is also the one that just happens to be most effective from an EA-point-of-view. 

  • Talking to other EAs: If you’re feeling bad about something or have a problem with some aspect of EA, it is likely that you’re not the only person to whom this ever happened. Talking to others who may be in a good position to help out or give advice might be a good thing to try.

[1] Williams is talking about utilitarianism as a moral theory, not about effective altruism as an idea/movement. I realize that the two are distinct, e.g. one can be an EA without subscribing to utilitarianism. Nevertheless, large parts of Williams’ critique, and especially the passage I quoted, apply well to EA. 

[2] This question is of course a very difficult one, and what someone says after thinking about it for five minutes might be quite different from what she would choose if she had heard all the ethical arguments in the world and thought about the matter for a very long time. If you care about making decisions for good/informed reasons, you might want to refrain from committing too much to specific answers and instead give weight to what a better informed version of yourself would say after longer reflection. 

[3] Of course, the matter is not black-and-white. Caring/commitment does matter to a significant extent, i.e. there will be people who would be suited to do the extreme-EA thing but don’t try hard enough. 

[4] This also goes the other way (cf. “scope insensitivity”), but standard EA discourse talks a lot about this already.