One of the central/foundational claims of EA as seen in the wild is that some ways of doing good are much better than others.
I think this claim isn’t obvious. Actually I think:
- It’s a contingent claim about the world as it is today
- While there are theoretical reasons to expect the underlying distribution of opportunities to span several orders of magnitude, there are also theoretical reasons to expect the best opportunities to be systematically taken up in a more-or-less efficient altruistic market
- In fact, EA is aiming for a world where we do have an efficient altruistic market, so if EA does very well, the claim will become false!
- It’s pretty reasonable to be sceptical of the claim
- One of the most natural reference class claims to consider is “some companies are much better buys than others” … while this is true ex post, it’s unclear how true it is ex ante; why shouldn’t we expect something similar for ways of doing good?
So why is it so widely believed in EA? I think a lot of the answer is that we can look at concrete domains like global health where there are good metrics for how much interventions help — and the claim seems empirically to be true there! But this is within a single cause area (we presumably expect some extra variation between cause areas), and good metrics should make it easier for the altruistic market to be efficient. So the appropriate conclusion is something like “well if it’s true even there where we can measure carefully, it’s probably more true in the general case”.
Another foundational claim which is somewhat contingent about the world is “it’s possible to do a lot of good with a relatively small expenditure of resources”. Again, it seems pretty reasonable to be sceptical of the claim. Again, the concrete examples in global health make a particularly good test case, and I think are valuable in informing many people's intuitions about the general situation.
I think this is an important reason why concrete areas like global health should be prominently featured in introductory EA materials, even if we’re coming from a position that thinks they’re not the most effective causes (e.g. because of a longtermist perspective). I think we should avoid making this (or being seen to make this) a bait-and-switch by being clear that they’re being used as illustrative examples, not because we think they’re the most important areas. Of course many people in EA do think that global health is the most important cause area, and I don’t want to ignore that or pretend it isn’t the case. Perhaps it’s best to introduce global health examples by explaining that some people think it’s an especially important area, while many others think there are more important areas but still regard it as a particularly good example for understanding some of the fundamentals of how impact is distributed.
Why not just use a more abstract/toy domain for making these points? If I illustrate them with a metaphor about looking for gold, nobody will mistakenly think I'm claiming that literally searching for gold is the best way to do good. I think this is a great tactic for conveying complex points which are ultimately grounded in theory. However, I don’t think it works for claims which are importantly contingent on the world we find ourselves in. For these, I think we want easy-to-assess domains where we can measure and understand what’s going on. And the closer the domain is to the domains where we ultimately want to apply the inferences, the more likely it is to be valid to import them. Global health — centred on helping people, with a great deal of effort from many parties going into securing good outcomes, and with good metrics available to see how things are doing — is ideally positioned for examining these claims.
The claim that some ways of doing good are much better than others does not contradict an efficient altruistic market, if you define an efficient altruistic market by the maximization of people's philanthropic spending on the causes they care about. But perhaps you mean efficiency in impact maximization from an impartially welfarist point of view (or one focused on long and healthy lives for all entities, which some could actually perceive negatively).
The metrics can still be imperfect. For example, the health of extremely poor people may affect their wellbeing only to a limited extent. Cost-effectiveness can also vary across contexts. For example, informing people about the returns to education proved (in an MIT RCT) to be even better than deworming at increasing schooling (43 years per $100), but was found (in a J-PAL RCT) to be about 187x worse (0.23 years per $100) in a different context (Dominican Republic, 2001-2005). Cost-effectiveness comparisons can also be misleading: if you compare a normal number (e.g. 1 additional year of schooling per $100, the cost-effectiveness of a boarding school subsidy) to a very small number (e.g. something that happened to be run poorly in the study context and so showed almost no impact), the normal number appears large. Single metrics also neglect systemic change (for instance, education is only effective if it leads to real income increases) and the marginal nature of interventions (e.g. deworming may only be valuable if teachers are trained and textbooks are present). I understand that you may be using this for the sake of argument.
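For clarity, the "187x" figure appears to follow simply from the ratio of the two per-$100 estimates quoted above: 43 ÷ 0.23 ≈ 187.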
I disagree that people introduced to EA should continue to be 'tricked by big selected numbers' in global health or any other realm. I do agree that the possibility of "doing a lot of good with [relatively little]” can be demonstrated. However, it should motivate creative thinking about a variety of opportunities (for example, I am currently discussing with an NGO in Nigeria cost-effective ways of addressing severe malnutrition in food-insecure communities: recognizing cases within the community and addressing them through local solidarity before therapy is needed) rather than repeating specific numbers.
Have you considered that global health (along with human, animal, and sentient AI welfare; AI safety; catastrophic risk mitigation; and EA improvement and scale-up) may be foundational to longtermism because of institutional perpetuation? If robust institutions that dynamically mitigate risks and increase cooperation on improving everyone's wellbeing are set up today, then there should be high levels of welfare across the future. Introducing possible objectives could be a good reason to mention these topics in an 'intro to EA with a longtermist focus.' Mentioning currently popular global health interventions (ones set up to keep absorbing large sums), such as bednet distribution, can motivate participants to address current issues rather than focus on institutional development. So, current global health content can be detrimental in intro materials focusing on longtermism.
Yes, it should be sincerely stated that people can choose whichever impact path they prefer, even if they happen to dislike EA and go on supporting a local art gallery. Usually this does not happen, and people instead engage in an open dialogue about systemic change: which programs should be prioritized, and when, to achieve the best outcomes. I would not include statements that could be interpreted as expectations.
Regarding introducing general principles that are broadly applicable to a range of causes and applying them to global health: yes, this can be the best approach. I would include a representative set of examples (from global health and other domains) that together inspire one to localize, scale up, and innovate on existing thinking and solutions in a way that demonstrates understanding of these principles. (I was not planning to advertise this here, but there is a fellowship curriculum that attempts to do this, though it is biased toward topics relevant to sub-Saharan Africa.)