Click the link above to see the full article and charts. Here is a summary I wrote for the latest edition of the 80,000 Hours newsletter, or see the Twitter version.
Is it really true that some ways of solving social problems achieve hundreds of times more, given the same amount of effort?
Back in 2013, Toby Ord pointed out some striking data about global health. He found that the best interventions were:
- 10,000x better at creating years of healthy life than the worst interventions.
- 50x better than the median intervention.
He argued this could have radical implications for people who want to do good, namely that a focus on cost-effectiveness is vital.
For instance, it could suggest that by focusing on the best interventions, you might be able to have 50 times more impact than a typical person in the field.
This argument was one of the original inspirations for our work and effective altruism in general.
Now, ten years later, we decided to check how well the pattern in the data holds up and see whether it still applies – especially when extended beyond global health.
We gathered all the datasets we could find to test the hypothesis. We found data covering health in rich and poor countries, education, US social interventions, and climate policy.
If you want to get the full picture on the data and its implications, read the full article (with lots of charts!).

The bottom line is that the pattern Toby found holds up surprisingly well.
This huge variation suggests that once you’ve built some career capital and chosen some problem areas, it’s valuable to think hard about which solutions to any problem you’re working on are most effective and to focus your efforts on those.
The difficult question, however, is how important this is. I think people interested in effective altruism have sometimes been too quick to conclude that it’s possible to have, say, 1,000 times the impact by using data to compare the best solutions.
First, I think a fairer point of comparison isn’t between best and worst but rather between the best measurable intervention and picking randomly. And if you pick randomly, you expect to get the mean effectiveness (rather than the worst or the median).
Our data shows the best interventions are only about 10 times better than the mean, rather than 100 or 1,000 times better.
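To see why comparing against the mean rather than the median changes the picture, here’s a toy simulation with purely hypothetical numbers, assuming a heavy-tailed lognormal spread of intervention effectiveness (a common modelling assumption for this kind of data, not the article’s actual datasets):

```python
import random
import statistics

# Toy illustration (hypothetical numbers, not the article's datasets):
# draw 100 intervention cost-effectiveness values from a heavy-tailed
# lognormal distribution.
random.seed(0)
effects = [random.lognormvariate(mu=0.0, sigma=1.5) for _ in range(100)]

best = max(effects)
print(f"best / median: {best / statistics.median(effects):.0f}x")
print(f"best / mean:   {best / statistics.mean(effects):.0f}x")
```

Because the same right tail that contains the best interventions also pulls the mean up, the best-vs-mean ratio comes out far smaller than the best-vs-median ratio.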
Second, these studies will typically overstate the differences between the best and average measurable interventions due to regression to the mean: if you think a solution seems unusually good, that might be because it is actually good, or because you made an error in its favour.
The better something seems, the greater the chance of error. So typically the solutions that seem best are actually closer to the mean. This effect can be large.
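This selection effect can be sketched in a toy simulation (all numbers hypothetical): each intervention has a true effect, but we only observe a noisy estimate, and we then pick the one whose estimate looks best.

```python
import random
import statistics

def apparent_vs_true_best(seed, n=200):
    """One toy trial (hypothetical numbers): each intervention has a true
    effect, but we only observe a noisy estimate of it. Return the
    estimated and true effect of the intervention that *looks* best."""
    rng = random.Random(seed)
    true_effects = [rng.lognormvariate(0.0, 1.0) for _ in range(n)]
    # Multiplicative measurement error: estimate = truth * noise.
    estimates = [t * rng.lognormvariate(0.0, 0.5) for t in true_effects]
    best = max(range(n), key=lambda i: estimates[i])
    return estimates[best], true_effects[best]

est, act = apparent_vs_true_best(seed=0)
print(f"apparent best, estimated effect: {est:.1f}")
print(f"apparent best, true effect:      {act:.1f}")
```

Averaged over many trials, the apparent best’s estimate overstates its true effect, because extreme estimates tend to combine a genuinely good intervention with a lucky error in the same direction.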
Another important downside of a data-driven approach is that it excludes many non-measurable interventions. The history of philanthropy suggests the most effective solutions historically have been things like R&D and advocacy, which can’t be measured ahead of time in randomised trials. This means that restricting yourself to measurable solutions could mean excluding the very best ones.
And since our data shows the very best solutions are far more effective than average, it’s very bad for your impact to exclude them.
In practice, I’m most keen on the “hits-based approach” to choosing solutions. I think it’s possible to find rules of thumb that make a solution more likely to be among the very most effective, such as “does this solution have the chance of solving a lot of the problem?”, “does it offer leverage?”, “does it work at all?”, and “is it neglected?”
Hypothetically, if we could restrict ourselves to solutions that are among the top half and then pick randomly from what remains, we could expect a cost-effectiveness that’s about twice the mean. And I think it’s probably possible to do better than that. Read more in our article on choosing solutions.
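The “about twice the mean” figure can be checked with a quick toy simulation, again assuming a hypothetical lognormal spread of effectiveness rather than any real dataset:

```python
import random
import statistics

# Toy check of the "top half" claim, assuming a hypothetical lognormal
# spread of solution effectiveness (not the article's actual data).
random.seed(0)
effects = sorted(random.lognormvariate(0.0, 1.5) for _ in range(10_000))

overall_mean = statistics.mean(effects)
top_half_mean = statistics.mean(effects[len(effects) // 2:])
print(f"top-half mean / overall mean: {top_half_mean / overall_mean:.1f}x")
```

Under this particular assumed spread the ratio lands close to 2x, though the exact multiple depends on how heavy-tailed the true distribution is.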
So, suppose you use a hits-based approach to carefully pick solutions within an area. How much more impact can you have?
My overall take is something like 10 times more. I feel pretty uncertain, though, so my range is perhaps 3-100 times.
A 10-times increase in impact given the same amount of effort is a big deal. It’s probably underrated by the world at large, though it may be overrated by fans of effective altruism.
A final thought: I think you can increase your impact by significantly more than 10 times by carefully choosing which problem area to focus on in the first place. This is a big reason why we emphasise problem selection in career choice so much at 80,000 Hours. Overall, we’d say to focus on exploring and building career capital first, then start to target some problem areas, and only later focus on choosing solutions.
I'm not sure if this is fair if you're trying to communicate the amount of value that could be created by getting more people to switch strategies.
Let's say everyone picks their strategy randomly. Then they read some information that suggests that some strategies are far more effective than others. Those who are already executing top-10% interventions conclude that they should stick with their current strategies, while some fraction of the other 90% are persuaded to switch. If everyone who switches strategies comes from that bottom-90% group, then the average change in value will look closer to 100x than 10x, because excluding the positive outliers pulls the baseline mean down much closer to the median.
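This point can be illustrated with a toy simulation (hypothetical lognormal numbers, not real data):

```python
import random
import statistics

# Toy check of the claim above (hypothetical lognormal numbers): if
# everyone who switches comes from the bottom 90%, the relevant
# baseline is the bottom-90% mean, which sits much closer to the
# median than the overall mean does.
random.seed(0)
effects = sorted(random.lognormvariate(0.0, 1.5) for _ in range(10_000))

cutoff = int(len(effects) * 0.9)
overall_mean = statistics.mean(effects)
bottom_90_mean = statistics.mean(effects[:cutoff])
median = statistics.median(effects)

print(f"overall mean:    {overall_mean:.2f}")
print(f"bottom-90% mean: {bottom_90_mean:.2f}")
print(f"median:          {median:.2f}")
```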
If you're trying to suggest that choosing the correct cause area is more important than choosing the correct strategy, because there's "only" a 10x value difference in choosing the correct strategy, I think you'd need to show why this mean-over-median approach is correct to apply to strategy selection but incorrect to apply to cause area selection. Couldn't you equally argue that regression to the mean indicates we'll make errors in thinking some cause areas are 1000x more important or neglected than others?
I agree different comparisons are relevant in different situations.
A comparison with the median is also helpful, since, for example, it tells us the gain that the people currently doing the bottom 50% of interventions could get if they switched.
Though I think the comparison to the mean is very relevant (and hasn't had enough attention) since it's the effectiveness of what the average person donates to, supposing we don't know anything about them. Or alternatively it's the effectiveness you end up with if you pick without using data.