
For some time now I’ve been rather hesitant about pitching Effective Altruism to people, because I wasn’t really sure how to summarise EA. It’s become a fairly diverse movement by now, in terms of causes and interventions that people prioritise. Is there a way to link all of these areas under one banner, with that banner still making substantive and novel claims? Here I’ve drawn on previous discussion by Will MacAskill and Ben Todd, attempted to improve the claims they make, and then discussed some extensions to their arguments which seem important.

Note: this post is in two halves. If you only read one half, read the second one; it’s more important, and more novel.

What is Effective Altruism?

Here’s Ben:

The claim: If you want to contribute to the common good, it’s a mistake not to pursue the project of effective altruism.

The project of effective altruism: is defined as the search for the actions that do the most to contribute to the common good (relative to their cost). It can be broken into (i) an intellectual project – a research field aimed at identifying these actions and, (ii) a practical project to put these findings into practice and have an impact.

I define the ‘common good’ in the same way Will MacAskill defines the good in “The definition of effective altruism”, as what most increases welfare from an impartial perspective. This is only intended as a tentative and approximate definition, which might be revised.

I find it odd that this definition is non-normative - that is, it doesn’t say what people morally should do. In particular, it doesn’t actually defend being impartial or welfarist, or even moral at all. Yet in practice, a shared belief in the importance of morality is a defining characteristic of EA, and I’m not sure what we gain by excluding it from the definition. Movements can fail by being too demanding, but they can also fail by not being demanding enough to foster a strong sense of purpose - especially for people who aren’t very motivated by altruism by default. I suspect that the intuitions of movement leaders might not represent the latter group very well.

Will’s definition is also non-normative; in justifying this choice, Will says:

There are two ways in which the definition of effective altruism could have made normative claims. First, it could have made claims about how much one is required to sacrifice: for example, it could have stated that everyone is required to use as much of their resources as possible in whatever way will do the most good; or it could have stated some more limited obligation to sacrifice, such as that everyone is required to use at least 10% of their time or money in whatever way will do the most good.

But these two options seem very far from exhausting the space of possibilities. For one thing, normative claims don’t need to be as specific as the ones Will mentions. For another, they don’t need to be phrased in terms of moral obligations. So I’d propose to split Ben’s claim above into two:

  • If you could contribute much more to the common good without making major personal sacrifices, then it’s morally important to do so.
  • If you want to contribute much more to the common good, it’s a mistake not to pursue the project of effective altruism.

Here we’re not specifying whether morally important actions are obligatory or good but not required (in technical terms, supererogatory). I expect that it’ll be useful, when advocating for EA, to highlight that some people choose to interpret it either way. And similarly for what we mean by “contribute much more” and “major personal sacrifices”. This is a little watered-down, but I think that it’s good enough for almost all purposes - individuals are free to adopt strong definitions, but it’s not necessary for the movement as a whole to stand for any of them in particular.

One other feature of my definition is that the two claims I’ve made don’t contain any maximalist language. By contrast, Ben’s definition implies that not contributing to the common good as efficiently as possible is a mistake (as highlighted in the comments on his original post). And Will also talks about doing “as much good as possible” with given resources. But I’ve personally never found such phrases compelling, for a few reasons.

  1. I think that ethics is not well-defined enough for “the most good” to be a coherent concept (for roughly the same reasons that many other concepts tend to break down when we push them to extremes).
  2. In the face of radical uncertainty about the future, it seems hard to ever justifiably claim that one course of action is the “best thing to do”, rather than just a very good thing to do.
  3. Almost everyone chooses altruistic actions based partly on non-altruistic goals - for example, by factoring in their personal preferences about which charity to donate to. Yet if those choices are still driven primarily by the aim of doing a lot of good, the fact that they’re not technically maximising the good shouldn’t make a difference.

I think that emphasising the moral importance of doing a lot more good still captures the core idea here, without the additional commitments entailed by saying that people should be maximalist (for reasons I describe here). However, I’m still happy to talk about the project of effective altruism as the search for the actions that do the most to contribute to the common good (given limited resources) [0], since it’s such a convenient phrase - as long as we understand that it’s only an approximation.

What are the arguments for Effective Altruism?

Ben again:

The three main premises supporting the claim of EA are:

  • Spread: there are big differences in how much different actions (with similar costs) contribute to the common good.
  • Identifiability: We can find some of the high-impact actions with reasonable effort.
  • Novelty: The high-impact actions we can find are not the same as what people who want to contribute to the common good typically do.

Since I’ve added a moral claim to his original formulation, we presumably need moral premises to support it. Welfarism and impartiality seem like natural candidates for two of these, and I’d add a third about how individuals should relate to morality, in order to support the normative claim I made previously. However, I won’t dig into the details of these now; I’m more interested in discussing the three empirical premises.

I think these three premises do a good job of summarising the core argument for EA. However, I think that they give a misleading impression of EA unless we acknowledge that different people interpret the scope of these claims very differently, and cite very different evidence in favour of them. For example, the implicit definition of “big differences” used when discussing donating to AMF versus the Make-A-Wish Foundation is very far from the one used when discussing astronomical waste. I think that attempting to convey the core ideas in EA without explicitly addressing this tension may create confusion. We might also unintentionally commit a motte-and-bailey fallacy by defending weaker versions of the arguments, and then acting on stronger ones. So below I identify three domains in which we can apply these arguments, roughly corresponding to different views on what epistemic standards EA should apply. Different people can then explicitly distinguish which versions of the premises they’re defending. Note that these domains overlap considerably, but I think a rough attempt to disambiguate them is better than none.

EA as social science

First is the domain of standard academic research in the social sciences: randomised controlled trials, statistical analysis of data, peer review, and so on. One interpretation of our premises is that using these types of analysis to judge the impacts of interventions allows us to identify interventions that are several orders of magnitude more impactful than usual. Let’s call this the “social sciences” version of EA. Under this I’d also include bringing in ideas about charity evaluation from the business world - for example, not penalising charities for high overheads or staff costs.

EA as hits-based altruism

It turns out, however, that there are a lot of domains in which reaching a solid academic consensus is very hard, and yet the impacts of good work can be large - for example, political advocacy for morally important policies. What does EA add to existing thinking about these domains? I’d identify two core claims: that we can significantly increase our impact by

  • Using careful consequentialist reasoning which incorporates quantitative considerations (but isn’t necessarily as rigorous as academic research is meant to be); and by
  • Being less risk-averse, and generally thinking more like entrepreneurs and venture capitalists.

The arguments that entrepreneurs make about why they’ll succeed are a very long way from being academically rigorous; indeed, they’re often barely enough to convince venture capitalists who actively embrace crazy ideas. But nevertheless, those entrepreneurs succeed often enough in business to make it valuable to back them; we might hope that the same is true for altruists with similarly ambitious plans. I’ll call this domain “EA as hits-based altruism”.

To be clear, this perspective on EA isn’t just about starting new organisations, but more generally about finding powerful yet neglected levers to influence the world. I consider Norman Borlaug launching the Green Revolution to be one of the best examples. I hope that clean meat will be a comparable success story in a few decades; and the same for projects to improve institutional decision-making. Another type of “hit” is the discovery of a new moral truth - for example, that wild animal suffering matters. Note that a great altruistic idea doesn’t need to be as counterintuitive as a great startup idea, because the altruism space is much less competitive. Most of OpenPhil's donations in policy-oriented philanthropy and scientific research funding seem to be working within this domain.

EA as trajectory change

Thirdly, we can try to predict humanity’s overall trajectory over the timeframe of centuries or longer, and how to shift it (which I’ll call “EA as trajectory change”). Compared with hits-based altruism, this depends on much more speculative reasoning about much bigger-picture worldviews, and receives much less empirical feedback. Previous events which plausibly qualify as successful trajectory changes include abolitionism; feminism; the foundation of democracy in America; the Enlightenment; the scientific revolution; the industrial revolution; the Allied victory in World War 2; and the fight against global warming. These tended to involve influencing people’s moral values, or changing the way progress in general occurs; looking forward, they might also involve reducing existential risk. I think there’s a pretty clear case that such changes can do a huge amount of good; the more pressing question is whether we’re able to identify them and influence them to a non-negligible extent.

For each of the domains I’ve just discussed, we can make the case for EA by arguing that the three original premises are all applicable to it. In doing so, we’ll need to make claims about the general properties of the types of interventions in each domain. I hope that the categories are natural enough to make this tenable, but I expect it to nevertheless be a difficult endeavour. Note that some interventions might be supported by different versions of the EA premises in different ways - for example, we might think that the goal of reducing existential risk is tractable because of arguments about EA as trajectory change, but then also endorse unusual ways to go about it because of arguments about EA as hits-based altruism. As another example, the cause area of improving institutional decision-making draws on a bunch of academic research, making it an example of EA as social science. However, the justifications for why better institutional decision-making will lead to large benefits tend to rely on arguments in one of the other domains.

I give some more specific thoughts on how defensible the EA premises are in each of these domains in a follow-up post, My Evaluations of Different Domains of Effective Altruism.

 

[0] Note that I prefer “given limited resources” over “relative to their cost”, for reasons described here.

Comments

Thanks for the post! I think disambiguating "EA as trajectory change" and "EA as hits-based giving" is particularly valuable for me.

In the face of radical uncertainty about the future, it seems hard to ever justifiably claim that one course of action is the “best thing to do”, rather than just a very good thing to do.

I'm confused by this. I assume that the "best thing to do" phrase is used ex ante rather than ex post. Perhaps you're using the word "justifiably" to mean something more technical/philosophical than its common-language meaning?

No, I'm using the common language meaning. Put it this way: there are seven billion people in the world, and only one of them is the best person to fund (ex ante). If you pick one person, and say "I believe that this is the BEST person to fund, given the information available in 2021", then there's a very high chance that you're wrong, and so this claim isn't justified. Whereas you can justifiably claim that this person is a very good person to fund.

I guess when I say "best action to do", the normative part of the claim is about the local map rather than the territory or the global map. I think this has two parts:

1) When I say "X is the best bet" I mean that my subjective probability P(X is best) > P(any specific other reference member). I'm not actually betting it against "the field" or claiming P(X is best) > 0.5! (See the toy numbers after this list.)

2) If I believe that X is the best bet in the sense of highest probability, then of course if I were smarter and/or had more information my assigned probabilities would likely change.
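
To make sense (1) concrete, here is a toy calculation; the credences below are invented purely for illustration:

$$N = 7\times10^{9}, \qquad P(X \text{ is best}) = 10^{-3}, \qquad P(Y \text{ is best}) \approx \frac{1 - 10^{-3}}{N - 1} \approx 1.4\times10^{-10} \ \text{ for each specific } Y \neq X.$$

Here X is the best bet by nearly seven orders of magnitude, yet the claim "X is best" is wrong with probability 0.999.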

I think the main problem with your definition is that it doesn't allow you to be wrong. If you say "X is the best bet", then how can I disagree if you're accurately reporting information about your subjective credences? Of course, I could respond by saying "Y is the best bet", but that's just me reporting my credences back to you. And maybe we'll change our credences, but at no point in time was either of us wrong, because we weren't actually talking about the world itself.

Which seems odd, and out of line with how we use this type of language in other contexts. If I say "Mathematics is the best field to study, ex ante" then it seems like I'm making a claim not just about my own beliefs, but also about what can be reasonably inferred from other knowledge that's available; a claim which might be wrong. In order to use this interpretation, we do need some sort of implicit notion of what knowledge is available, and what can be reasonably inferred from it, but that saves us from making claims that are only about our own beliefs. (In other words, not the local map, nor the territory, but some sort of intermediate "things that we should be able to infer from human knowledge" map.)

Thanks for writing this! I don't have fully formed thoughts right now, but I have a very small meta request: it would be great if you had subheadings for the 3 domains (EA as social science, EA as hits-based altruism, EA as trajectory change) for easy linking/referencing :)

Thanks; done.

I really like this post! I'm sympathetic to the point about normativity. I particularly think the point that movements may suffer from not being demanding enough is a potentially really good one, and not something I've thought about before. I wonder if there are examples?

For what it's worth, since the antecedent "if you want to contribute to the common good" is so minimal, Ben's definition feels kind of near-normative to me -- like it gets someone on the normative hook with "mistake" unless they say "well I just don't care about the common good", and then common sense morality tells them they're doing something wrong... so it's kind of like we don't have to make the normativity explicit?

Also, I think I disagree about the maximising point. Basically I read your proposed definition as near-maximising, because when you iterate on 'contributing much more' over and over again you get a maximum or a near-maximum. And then it's like... does that really get you out of the cited worries with maximising? It still means that "doing a lot of good" will not be good enough a lot of the time (as long as there's still something else you could do that would do much more good), which I think could still run into at least the 2nd and 3rd worries you cite with having maximising in there?

Thanks for the kind words and feedback! Some responses:

I wonder if there are examples?

The sort of examples which come to mind are things like new religions, startups, or cults - all of which make heavy demands on early participants, and thereby foster the strong group bonds and sense of shared identity which allow them greater long-term success.

since the antecedent "if you want to contribute to the common good" is so minimal, Ben's definition feels kind of near-normative to me

Consider someone who only cares about the lives of people in their own town. Do they want to contribute to the common good? In one sense yes, because the good of the town is a part of the common good. But in another sense no; they care about something different from the common good, which just happens to partially overlap with it.

Using the first definition, "if you want to contribute to the common good" is too minimal to imply that not pursuing effective altruism is a mistake.

Using the second definition, "if you want to contribute to the common good" is too demanding - because many people care about individual components of the common good (e.g. human flourishing) without being totally on board with "welfare from an impartial perspective".

I think I disagree about the maximising point. Basically I read your proposed definition as near-maximising, because when you iterate on 'contributing much more' over and over again you get a maximum or a near-maximum.

Yeah, I agree that it's tricky to dodge maximalism. I give some more intuitions for what I'm trying to do in this post. On the 2nd worry: I think we're much more radically uncertain about the (ex ante) best option available to us out of the space of all possible actions, than we are radically uncertain about a direct comparison between current options vs a new proposed option which might do "much more" good. On the 3rd worry: we should still encourage people not to let their personal preferences stand in the way of doing much more good. But this is consistent with (for example) people spending 20% of their charity budget in less effective ways. (I'm implicitly thinking of "much more" in relative terms, not absolute - so a 25% increase is not "much more" good.)

I think you're saying that my word choice here is out of line with commonsense intuitions, but I don't think it is? Tennis is an unusually objective field, with clear metrics and a well-defined competitive system.

When somebody says "I think Barack Obama (or your preferred presidential candidate) is the best man to be president", I highly doubt that they literally mean there's a >50% chance that, of all living American-born citizens over 35 years of age, this person will be better at governing the US than everybody else.

Similarly, when somebody says "X is the best fiction author," I doubt they are expressing >50% credence that of all humans who have ever told a story, X told the best fiction stories. 

The reference class is the same as the field. Sorry, I was unclear. But like you said, there are >7 billion people, so "specific reference member" means something very different from "field overall."

For future reference, Linch's comment was in response to a comment of mine which I deleted before Linch replied, in which I used the example of saying "Federer is the best tennis player". Sorry about that! I replaced it with a comment that tried to point at the heart of the objection; but since I disagree with the things in your reply, I'll respond here too.

I think I just disagree with your intuitions here. When someone says Obama is the best person to be president, they are presumably taking into account factors like existing political support and desire to lead, which make it plausible that Obama actually is the best person.

And when people say "X is the best fiction author ever", I think they do mean to make a claim about the literal probability that this person is, out of all the authors who ever wrote fiction, the best one. In that context, the threshold at which I'd call something a "belief" is much lower than in most contexts, but nevertheless I think that when (for example) a Shakespeare fan says it, they are talking about the proposition that nobody was greater than Shakespeare. And this is not an implausible claim, given how much more we study Shakespeare than anyone else.

(By contrast, if they said: nobody had as much latent talent as Shakespeare, then this would be clearly false).

Anyway, it seems to me that judging the best charitable intervention is much harder than judging the best author, because for the latter you only need to look at books that have already been written, whereas in the former you need to evaluate the space of all interventions, including ones that nobody has proposed yet.
