One of the central/foundational claims of EA as seen in the wild is that some ways of doing good are much better than others.
I think this claim isn’t obvious. Actually, I think:
- It’s a contingent claim about the world as it is today
- While there are theoretical reasons to expect the distribution of opportunities to start out spread over several orders of magnitude, there are also theoretical reasons to expect the best opportunities to be taken up systematically in a more-or-less efficient altruistic market
- In fact, EA is aiming for a world where we do have an efficient altruistic market, so if EA does very well, the claim will become false!
- It’s pretty reasonable to be sceptical of the claim
- One of the most natural reference-class claims to consider is “some companies are much better buys than others”. While this is true ex post, it’s unclear how true it is ex ante, since prices tend to adjust until expected returns look similar; why shouldn’t we expect something similar for ways of doing good?
So why is it so widely believed in EA? I think a lot of the answer is that we can look at concrete domains like global health, where there are good metrics for how much interventions help, and the claim seems empirically to be true there! But this is within a single cause area (we presumably expect some extra variation between cause areas), and good metrics should make it easier for the altruistic market to be efficient. So the appropriate conclusion is something like “well, if it’s true even there, where we can measure carefully, it’s probably even more true in the general case”.
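To make the shape of this claim concrete, here’s a minimal sketch in Python. It assumes, purely for illustration, that cost-effectiveness across interventions is lognormally distributed; the spread parameter is an assumption chosen to produce a few orders of magnitude of variation, not an estimate from any real dataset.

```python
import numpy as np

# Purely illustrative: assume cost-effectiveness across interventions
# is lognormally distributed. sigma=2.0 is an assumed spread, chosen to
# give a few orders of magnitude of variation; it is not an empirical
# estimate from real intervention data.
rng = np.random.default_rng(seed=0)
effectiveness = rng.lognormal(mean=0.0, sigma=2.0, size=100_000)

median = np.median(effectiveness)
top_1pct = np.percentile(effectiveness, 99)

print(f"top 1% vs median: {top_1pct / median:.0f}x")
# With these assumed parameters the 99th-percentile intervention is
# roughly 100x as effective as the median one, so "some ways of doing
# good are much better than others" falls out of a heavy-tailed
# distribution almost automatically.
```

The sketch only shows that heavy-tailed distributions make the claim unsurprising; whether the real distribution of opportunities actually looks like this is exactly the contingent, empirical question that the global health data helps to answer.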
Another foundational claim which is somewhat contingent on the world is “it’s possible to do a lot of good with a relatively small expenditure of resources”. Again, it seems pretty reasonable to be sceptical of the claim. Again, the concrete examples in global health make a particularly good test case, and I think they are valuable in informing many people's intuitions about the general situation.
I think this is an important reason why concrete areas like global health should be prominently featured in introductory EA materials, even if we’re coming from a position that thinks they’re not the most effective causes (e.g. because of a longtermist perspective). I think that we should avoid making this (or being seen to make this) a bait-and-switch by being clear that they’re being used as illustrative examples, not because we think they’re the most important areas. Of course many people in EA do think that global health is the most important cause area, and I don’t want to ignore that or pretend it isn’t the case. Perhaps it’s best to introduce global health examples by explaining that some people think it’s an especially important area, while many others think there are more important areas but still regard it as a particularly good example for understanding some of the fundamentals of how impact is distributed.
Why not just use a more abstract/toy domain for making these points? If I illustrate them with a metaphor about looking for gold, nobody will mistakenly think I'm claiming that literally searching for gold is the best way to do good. I think this is a great tactic for conveying complex points which are ultimately grounded in theory. However, I don’t think it works for claims which are importantly contingent on the world we find ourselves in. For these, I think we want easy-to-assess domains where we can measure and understand what’s going on. And the closer the domain is to the domains where we ultimately want to apply the inferences, the more likely it is that importing them will be valid. Global health, centred on helping people, with a great deal of effort from many parties going into securing good outcomes, and with good metrics available to see how things are doing, is ideally positioned for examining these claims.
Perhaps related:
It took until 2010 or so for effective altruism to become a movement, even though there's not a lot of distance between EA and Peter Singer's early writings. I believe that GiveWell was a crucial ingredient here.
On my model, people being able to rally around concrete charities wasn't just a forceful example to use against common objections. Perhaps more importantly, GiveWell made doing good tractable enough for young university students that they could achieve easy successes, which probably reinforced the self-image of those aspiring altruists. Absent some tractable ways to do good, people might lose the motivation to devote their careers to EA causes or, more generally, remain passive consumers of discussion material, never becoming personally "activated."
Absolutely. I'm exactly in that boat: I became convinced of some basic EA principles after reading Singer's work in first-year uni last year, but I don't think I would have committed to donating a large chunk of my salary (and stuck to it) if GWWC didn't exist, for instance. I wouldn't be here if the community hadn't made it so tractable. I was also initially skeptical of the longtermist perspective; had EA been presented to me in terms other than the power-law distribution of global health charity effectiveness, it's much less likely I'd be here (I'm now a longtermist :P)