One of the central/foundational claims of EA as seen in the wild is that some ways of doing good are much better than others.

I think this claim isn’t obvious. Actually I think:

  • It’s a contingent claim about the world as it is today
    • While there are theoretical reasons to expect the distribution of opportunities to be spread over several orders of magnitude, there are also theoretical reasons to expect the best opportunities to be taken up systematically in a more-or-less efficient altruistic market
    • In fact, EA is aiming for a world where we do have an efficient altruistic market, so if EA does very well, the claim will become false!
  • It’s pretty reasonable to be sceptical of the claim
    • One of the most natural reference class claims to consider is “some companies are much better buys than others” … while this is true ex post, it’s unclear how true it is ex ante; why shouldn’t we expect something similar for ways of doing good?

So why is it so widely believed in EA? I think a lot of the answer is that we can look at concrete domains like global health where there are good metrics for how much interventions help — and the claim seems empirically to be true there! But this is within a single cause area (we presumably expect some extra variation between cause areas), and good metrics should make it easier for the altruistic market to be efficient. So the appropriate conclusion is something like “well if it’s true even there where we can measure carefully, it’s probably more true in the general case”.

Another foundational claim which is somewhat contingent about the world is “it’s possible to do a lot of good with a relatively small expenditure of resources”. Again, it seems pretty reasonable to be sceptical of the claim. Again, the concrete examples in global health make a particularly good test case, and I think are valuable in informing many people's intuitions about the general situation.

I think this is an important reason why concrete areas like global health should be prominently featured in introductory EA materials, even if we’re coming from a position that thinks they’re not the most effective causes (e.g. because of a longtermist perspective). I think that we should avoid making this (or being seen to make this) a bait-and-switch by being clear that they’re being used as illustrative examples, not because we think they’re the most important areas. Of course many people in EA do think that global health is the most important cause area, and I don’t want to ignore that or pretend it isn’t the case. Perhaps it’s best to introduce global health examples by explaining that some people think it’s an especially important area, while many others think there are more important areas but still consider it a particularly good example for understanding some of the fundamentals of how impact is distributed.

Why not just use a more abstract/toy domain for making these points? If I illustrate them with a metaphor about looking for gold, nobody will mistakenly think I'm claiming that literally searching for gold is the best way to do good. I think this is a great tactic for conveying complex points which are ultimately grounded in theory. However, I don’t think it works for claims which are importantly contingent on the world we find ourselves in. For these, I think we want easy-to-assess domains where we can measure and understand what’s going on. And the closer the domain is to the domains where we ultimately want to apply the inferences, the more likely it is to be valid to import them. Global health — centred around helping people, and with a great deal of effort from many parties going into securing good outcomes, with good metrics available to see how things are doing — is ideally positioned for examining these claims.


Perhaps related:

It took until 2010 or so for effective altruism to become a movement, even though there's not a lot of distance between EA and Peter Singer's early writings. I believe that GiveWell was a crucial ingredient here.

On my model, people being able to rally around concrete charities wasn't just a forceful example to use against common objections. Perhaps more importantly, GiveWell made doing good tractable enough for young university students that they could achieve easy successes, which probably reinforced the self-image of those aspiring altruists. Absent some tractable ways to do good, people might lose the motivation to devote their careers to EA causes or, more generally, stay passive consumers of discussion material, never becoming personally "activated".


Absolutely. I'm exactly in that boat: I became convinced of some basic EA principles after reading Singer's work in first-year uni last year, but I don't think I would have committed to donating a large chunk of my salary, and stuck to it, if GWWC didn't exist. I wouldn't be here if the community hadn't made it so tractable. I was also initially sceptical of the longtermist perspective; had EA been presented to me in terms other than the power-law distribution of global health charity effectiveness, it's much less likely I'd be here (I'm now a longtermist :P)

I agree with this, but have a meta comment. I’m somewhat uncomfortable with how much of the defence of EA funding global health is phrased in indirect terms rather than defending the object-level value itself:


  • it’s a good testing ground, has good feedback loops
  • has good optics
  • keeps us grounded


FWIW I don't think this really constitutes a defence of global health spending. It's a defence of talking about global health when explaining what EA is.


Might be worth making this distinction more prominent in the post! I didn't notice it on first (brief) read either.

FWIW my own opinion is that funding global health is defensible and has a strong case (vis-à-vis existential risk and broad longtermism)...

at least for people who don't have a "total population" ethic, who weight suffering heavily, or who are morally uncertain about this.

I want to see it considered/defended on its own grounds; otherwise the case for it cannot be sustained.

Ultimately (and in other situations) people will say ‘well, we can signal to ourselves and others and get good feedback loops in other ways; we don't need to support global health interventions'

This is great and I’m glad you wrote it. For what it’s worth, the evidence from global health does not appear to me strong enough to justify high credence (>90%) in the claim “some ways of doing good are much better than others” (maybe operationalized as "the top 1% of charities are >50x more cost-effective than the median", but I made up these numbers).
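As a quick sketch of what that (made-up) operationalization would require, here's what a lognormal model of cost-effectiveness implies. The lognormal shape and the sigma values are assumptions for illustration, not estimates from any data set:

```python
import math
from statistics import NormalDist

# Under a lognormal(mu, sigma) distribution, the median is exp(mu) and the
# 99th percentile (the cutoff for the top 1%) is exp(mu + sigma * z99),
# so the ratio of the top-1% cutoff to the median is exp(sigma * z99),
# independent of mu.
z99 = NormalDist().inv_cdf(0.99)  # ~2.326

def top_1pct_over_median(sigma):
    """Ratio of the top-1% cost-effectiveness cutoff to the median."""
    return math.exp(sigma * z99)

print(round(top_1pct_over_median(1.0)))  # ~10x: misses the 50x bar
print(round(top_1pct_over_median(2.0)))  # ~105x: clears it
```

So under this toy model, whether the claim holds turns entirely on how many orders of magnitude the underlying spread covers.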

The DCP2 (2006) data (cited by Ord, 2013) gives the distribution of the cost-effectiveness of global health interventions. This is not the distribution of the cost-effectiveness of possible donations you can make. The data tells us that treatment of Kaposi sarcoma is much less cost-effective than antiretroviral therapy in terms of avoiding HIV-related DALYs, but it tells us nothing about the distribution of charities, and therefore does not actually answer the relevant question: of the options available to me, how much better are the best than the others?

If there is one charity focused on each of the health interventions in the DCP2 (and they are roughly equally good at turning money into the interventions) – and therefore one action corresponding to each intervention – then it is true that the very best ways of doing good available to me are much better than average.

The other extreme is that the most cost-effective interventions were funded first (or people only set up charities to do the most cost-effective interventions), and therefore the best opportunities still available are very close to average cost-effectiveness. I expect we live somewhere between these two extremes, with more charities set up for antiretroviral therapy than for Kaposi sarcoma.
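The two extremes can be sketched with a toy simulation. The lognormal spread and the "top half already funded" cutoff are illustrative assumptions, not DCP2 data:

```python
import random
import statistics

random.seed(0)

# Hypothetical lognormal spread of intervention cost-effectiveness
# (illustrative parameters only).
interventions = sorted(random.lognormvariate(0.0, 2.0) for _ in range(1000))
median = statistics.median(interventions)

# Extreme 1: one charity per intervention, so donation opportunities
# mirror the intervention distribution and the best far exceeds the median.
best_if_unfunded = interventions[-1] / median

# Extreme 2: the most cost-effective half is already fully funded, so the
# best *remaining* opportunity sits right at the middle of the distribution.
remaining = interventions[: len(interventions) // 2]
best_if_funded = remaining[-1] / median

print(f"nothing funded: best is {best_if_unfunded:.0f}x the median")
print(f"top half funded: best remaining is {best_if_funded:.2f}x the median")
```

Anywhere between the extremes, the quantity that matters is how far the best still-unfunded opportunity sits above the median, not how wide the raw intervention distribution is.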

The evidence that would change my mind would be somebody publicly analyzing the cost-effectiveness of all (or many) charities focused on global health interventions. I have been meaning to look into this, but haven’t yet gotten around to it. It’s a great opportunity for the Red Teaming Contest, and others should try to do this before me. My sense is that GiveWell has done some of this but only publishes the analysis for their recommended charities; and they probably already look only at charities they expect to be better than average, so they wouldn’t have a representative data set.

Yeah I think this is a really good question and would be excited to see that kind of analysis. Maybe I'd make the numerator be "# of charitable $ spent" rather than "# of charities" to avoid having the results be swamped by which areas have the most very small charities.

It might also be pretty interesting to do some similar analysis of how good interventions in different broad areas look on longtermist grounds (although this would necessarily involve a lot more subjective judgement).

even if we’re coming from a position that thinks they’re not the most effective causes 

How do you interpret "most effective cause"? Is it "most effective given the current funding landscape"?

This seems mostly right, but it still doesn't seem like the main reason that we ought to talk about global health.

There are lots of investors visibly trying to do things that we ought to expect will make the stock market more efficient. There are still big differences between companies in returns on R&D or returns on capital expenditures. Those returns go mainly to people who can found a Moderna or Tesla, not to ordinary investors.

There are not (yet?) many philanthropists who try to make the altruistic market more efficient. But even if there were, there'd be big differences in who can accomplish what kinds of philanthropy.

Introductory EA materials ought to reflect that: instead of one strategy being optimal for everyone who wants to be an EA, the average person ought to focus on easy-to-evaluate philanthropy such as global health. A much smaller fraction of the population with unusual skills ought to focus on existential risks, much as a small fraction of the population ought to focus on founding companies like Moderna and Tesla.

Thank you for contributing this. I enjoyed reading it and thought that it made some people’s tendency in EA (which I might be imagining) to "look at other cause-areas with Global Health goggles" more explicit.

Here are some notes I’ve taken to try to put everything you’ve said together. Please update me if what I’ve written here omits certain things, or presents things inadequately. I’ve also included additional remarks to some of these things.

  • Central Idea: [EA’s claim that some pathways to good are much better than others] is not obvious, but widely believed (why?).
    • Idea Support 1: The expected goodness of available actions in the altruistic market differs (across several orders of magnitude) based on the state of the world, which changes over time.
      • If the altruistic market were made efficient (which EA might achieve), then the available actions with the highest expected goodness, which change with the changing state of the world, would routinely be identified and taken in any world state. Some things don't generalize.
    • Idea Support 2: Hindsight bias routinely warps our understanding of which investments, decisions, or beliefs were best made at the time, by having us believe that the best actions were more predictable than they were in actuality. It is plausible that this generalizes to altruism. As such, we run the risk of being overconfident that, despite the changing state of the world, the actions with the highest expected goodness presently will still be the actions with the highest expected goodness in the future, be that the long-term one or the near-term one.
    • (why?): The cause-area of global health has well-defined metrics of goodness, i.e. the subset of the altruistic market that deals with altruism in global health is likely close to being efficient.
      • Idea Support 3: There is little cause to suspect that since altruism within global health is likely close to being efficient, altruism within other cause-areas are close to efficient or can even be made efficient, given their domain-specific uncertainties.
    • Idea Support 4: How well “it’s possible to do a lot of good with a relatively small expenditure of resources” generalizes beyond global health is unclear, and should likely not be a standard belief for other cause-areas. The expected goodness of actions in global health is contingent upon the present world state, which will change (as altruism in global health progresses and becomes more efficient, there will be diminishing returns in the expected goodness of the actions we take today to further global health)
    • Action Update 1: Given the altruistic efficiency and clarity within global health, and given people’s support for it, it makes sense to introduce EA’s altruist market in global health to newcomers; however, we should not “trick” them into thinking EA is solely or mostly about altruism in global health - rather, we should frame EA’s altruist market in global health as an example of what a market likely close to being efficient can look like.

I think the main thing this seems to be missing is that I'm not saying global health has an efficient altruistic market -- I'm saying that if anything does you should expect to see it there. But actually we don't even see it there ... reasonable-looking health interventions vary by ~four orders of magnitude in cost-effectiveness, and the most cost-effective are not fully funded.

Thanks for writing this! I'm glad it's been written: it highlights a premise in EA ("some ways of doing good are much better than others") that a lot of people (myself included) accept without very careful consideration.

Having said that, I am not sure that I believe this more generally because of the reasoning that you give:  “well if it’s true even there [in global health] where we can measure carefully, it’s probably more true in the general case”. I think this is part of my belief, but the other part is that just directly comparing the naive expected value of interventions in different cause areas makes this seem true. 

For example, under some views of comparing animal welfare to humans, it seems far more impactful to donate to cage-free hen corporate outreach campaigns, which, per dollar, affect between 9 and 120 years of chicken life, compared to AMF. Further, my impression is that considering the expected value of longtermist interventions would also represent quite a large difference.

This is partially why I try to advocate for members of my group to develop their own cause-prioritization. 

Another advantage of global poverty and health projects is that they demonstrate clearly the multiplier effect of donations. The base case is a cash transfer to a person with one hundredth of the donor’s income, which should give a one-hundred-times boost to welfare. From this compelling starting point we can then proceed to argue why, in expectation, other projects may do even better. We can picture a range of projects, from those with a good evidence base but returns only a modest multiple above cash transfers (bed nets) to projects which could produce higher returns but have limited evidence (charity start-ups). Donors may want to fund along this continuum.
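The "one hundred times" figure in that base case falls out of a standard assumption (not stated in the comment, but commonly used in these arguments): logarithmic utility of consumption, under which the marginal value of a dollar is inversely proportional to income. A minimal sketch with hypothetical incomes:

```python
# Assumed model: u(c) = log(c), so marginal utility u'(c) = 1/c.
# A marginal dollar is then worth 1/c "utils", so transferring it to
# someone with one hundredth of the income multiplies its welfare value
# by the income ratio. The incomes below are hypothetical.
donor_income = 60_000      # hypothetical donor income
recipient_income = 600     # one hundredth of the donor's income

marginal_value_donor = 1 / donor_income
marginal_value_recipient = 1 / recipient_income

multiplier = marginal_value_recipient / marginal_value_donor
print(round(multiplier))  # 100
```

Under this assumed model the multiplier is simply the income ratio, which is why interventions that beat cash transfers by even a modest further multiple still inherit that large baseline.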

The definition of health here should include mental and socioemotional health, since they affect how people reason and relate to each other, respectively.

the best opportunities ... taken systematically in a[n] efficient altruistic market

does not contradict that

some ways of doing good are much better than others

if you define an efficient altruistic market as one that maximizes persons' philanthropic spending on the causes they care about. But perhaps you mean efficiency in impact maximization from an impartially welfarist point of view (or one focusing on long and healthy lives of all entities, which could actually be perceived negatively by some).

global health where there are good metrics for how much interventions help

The metrics can still be imperfect. For example, the health of extremely poor persons may affect their wellbeing only in a limited way. Cost-effectiveness can also vary across contexts: informing people about the returns to education proved (in an MIT RCT) even better than deworming at increasing schooling (43 years per $100), but was found (in a J-PAL RCT) to be about 187x worse (0.23 years per $100) in a different context (Dominican Republic, 2001-2005). Cost-effectiveness comparisons can also be misleading: if you compare a normal number (e.g. 1 additional year of schooling per $100, which is the cost of a boarding school subsidy) to a very small number (e.g. something that happened to be run poorly in the study context and so showed almost no impact), the normal number appears large. Single metrics also neglect systemic change (for instance, education is only effective if it leads to real income increases) and the marginality of interventions (e.g. deworming may only be valuable if teachers are trained and textbooks are present). I understand that you may be using this for the sake of argument.

areas like global health should be prominently featured in introductory EA materials, even if we’re coming from a position that thinks they’re not the most effective causes

I disagree that people introduced to EA should continue to be 'tricked by big selected numbers' in global health or any other realm. I do agree that the possibility of "doing a lot of good with [relatively little]” can be demonstrated. However, it should motivate creative thinking about various opportunities (for example, I am currently discussing with an NGO in Nigeria cost-effective ways of addressing severe malnutrition in food-insecure communities: recognising cases in the community and addressing them through local solidarity before therapy is needed) rather than repeating specific numbers.

Have you thought that global health (and human, animal, and sentient AI welfare; AI safety; catastrophic risk mitigation; and EA improvement and scale-up) are fundamentals of longtermism due to institutional perpetuation? If robust institutions that dynamically mitigate risks and increase cooperation on improving everyone's wellbeing are set up today, then there should be high levels of welfare across the future. Introducing possible objectives could be a good reason to mention these topics in an 'intro to EA with a longtermist focus.' Mentioning currently popular global health interventions (that are set up to be able to continue absorbing large sums), such as bednet distribution, can motivate participants to address current issues rather than to focus on institutional development. So, current global health intro content can be detrimental in intro materials focusing on longtermism.

Perhaps it’s best to introduce global health examples by explaining that some people think it’s an especially important area, but many others think there are more important areas, but still think it’s a particularly good example for understanding some of the fundamentals of how impact is distributed.

Yes, it should be sincerely stated that one can choose whichever impact path they prefer, including if they happen to dislike EA and go on supporting their local art gallery. Usually this does not happen and people lead an open dialogue about systemic change: which programs should be prioritized, and when, to achieve the best outcome. I would not include statements that could be interpreted as expectations.

Regarding introducing general principles which can be broadly applicable to a range of causes and applying them to global health: yes, this can be the best way. I would include a representative set of examples (from global health and other domains) that together inspire one to localize, scale up, and innovate on existing thinking and solutions in a manner which demonstrates understanding of these principles. (I was not planning to advertise this here, but there is a fellowship curriculum that attempts to do this, though it is biased toward some sub-Saharan-Africa-relevant topics.)