TL;DR: The explore-exploit tradeoff for causes is impossible to navigate if you don't know how far exploration could take you - how good the best causes may be.
Recently, I found out that the Centre for Exploratory Altruism Research (CEARCH) estimates, with a high confidence level, that advocacy for top sodium reduction policies is around 100x as cost-effective as top GiveWell charities in terms of DALYs. This made me feel like a sucker for having donated to GiveWell.
You see, when I'm donating, I think of myself as buying a product - good done for others. The hundreds of dollars I donated to GiveWell could probably have been replaced with a couple of dollars to this more effective cause. That means I wasted hundreds of dollars that could have done much more good.
But let's use milk as an analogy: donating to GiveWell is the equivalent of buying a liter of milk for $200. If that happened with a product I wanted for myself, I would probably feel scammed. A literal lifetime of donations to GiveWell might be matched by about 6 months of donating to this cause. I'm not saying I got scammed, but thinking about it from the perspective of buying good makes the intuition vivid. I don't donate to GiveWell these days anyway, but it still stings.
This - being a sucker who pays too much for doing good - is really bad. It's exactly what we try to avoid in EA. It can decrease our impact by orders of magnitude.
And that's not even the end. CEARCH also estimates (although with a low level of certainty) that nuclear arsenal limitation could be 5,000x as cost-effective as top GiveWell charities. Even if they're off by an order of magnitude, that's still 500x - 5x better than even the sodium-reduction work. Now donating to GiveWell is like buying a liter of milk for $1,000. And who's to say there's nothing 50x more effective than the nuclear cause?
At some point, you might consider pausing your milk purchases and shopping around for the cheapest price. You probably won't find the absolute cheapest, but you might be able to form a reasonable estimate of it and buy at a price close to that.
So this situation made me think - can we put a reasonable upper bound on the maximum cost-effectiveness of our money and time? Within a 50% CI? An 80% CI? A 99% CI? And can we reasonably estimate when that maximum will be reached?
For someone extremely risk-averse it's probably easy, as there are only so many RCTs in the world, and it seems likely that GiveWell is within close range of the best of them. But for anyone who's risk-neutral, I can't think of a good way. So I've come to ask you wonderful people - what do you think?
The end result I'm thinking of is something like:
"The maximum cost-effectiveness we can expect is 500x-50,000x that of GiveWell (95% CI). The year we expect to find a cause within an order of magnitude of that cost-effectiveness is 2035-2050 (50% CI)".
But of course any input will be interesting.
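As a starting point, here's a toy Monte Carlo sketch of what producing such a CI could even look like. Every parameter below (the number of findable causes, the lognormal shape) is a placeholder assumption of mine, not anyone's actual estimate - the point is only the shape of the exercise:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder assumptions, not real estimates:
n_sims = 100_000
n_causes = 200        # assumed number of findable causes
mu, sigma = 0.0, 2.0  # assumed lognormal for cost-effectiveness,
                      # in multiples of top GiveWell charities (median 1x)

# Simulate many "worlds", each containing n_causes candidate causes,
# and look at the best cause in each world.
samples = rng.lognormal(mu, sigma, size=(n_sims, n_causes))
best = samples.max(axis=1)

lo, hi = np.percentile(best, [2.5, 97.5])
print(f"95% CI for the best cause: {lo:.0f}x-{hi:.0f}x GiveWell")
```

The hard part, of course, is that the answer is extremely sensitive to the assumed tail (sigma) and to how many causes are actually out there - which is exactly the information I don't know how to get.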
Things that could potentially limit cost-effectiveness, off the top of my head:
- A weak form of the 'efficient market hypothesis' for doing good.
- Hedonic adaptation - people adapt to better circumstances, so the amount of good we can induce in anyone's life is limited.
- Caps on the number of humans & other beings that are likely to ever live.
Note: I'm emotionally content with my past donations to GiveWell, don't worry. Also, this is not a diss on GiveWell - they're doing a great job given their goals.
Thanks for the reply!
If I understand your main arguments correctly, you're basically saying that high cost-effectiveness options are rare and uncertain, and have relatively small funding gaps that are likely to be closed anyway; that new charities are likely to fail, or to be less effective; and that smart EAs won't waste their money.
Uncertainty and rarity: Assume that CEARCH is on average 5x too optimistic in their high-confidence reports and 20x too optimistic in their low-confidence ones (that's A LOT). Still, out of the 17 causes they've researched, 4 would remain over 10x as effective as top GiveWell charities - almost a quarter. They were probably lucky: Rethink Priorities (RP), Charity Entrepreneurship (CE) and the like don't have such a high hit rate (that would be an interesting topic for analysis). But still, their budgets are minuscule. RP has spent around $14m in its entire lifetime, CEARCH consists of only 2 full-time workers and was founded less than 2 years ago, and CE had a total income of £775k in 2022. The cost of running these operations is tiny compared to the amount we spend on direct work.
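To spell out that discounting on the two CEARCH numbers quoted earlier (the multipliers are theirs; the discount factors are just my pessimistic assumptions):

```python
# Multipliers are CEARCH's estimates quoted earlier; discounts are my
# pessimistic assumptions (5x for high confidence, 20x for low confidence).
estimates = {
    "sodium reduction advocacy (high confidence)": (100, 5),
    "nuclear arsenal limitation (low confidence)": (5000, 20),
}
for cause, (multiplier, discount) in estimates.items():
    print(f"{cause}: {multiplier / discount:.0f}x GiveWell after discounting")
# -> 20x and 250x respectively: both still clear the 10x bar.
```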
Small funding gaps, likely to be closed anyway: Let's say that, on average, finding such a cause requires $5m (which already seems overblown given the figures above), that these causes are on average 20x as effective as top GiveWell charities, and that each funding gap is indeed small - only $10m on average. That's $15m that would do as much good as $200m given to GiveWell. So by finding and funding 2.6 such causes a year, we could equal the impact GiveWell had in all of 2021. And those funding gaps aren't that likely to be closed otherwise - it took more than 10 years after the inception of EA for CEARCH to find these causes. In the stock market, a 2% mispricing may be closed within a few hours; in altruism, a 500% misallocation may never be closed without a deliberate effort.
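Here's the same back-of-envelope in code form. Every input is one of my assumptions from the paragraph above, and the GiveWell 2021 figure is the one my 2.6-causes-per-year claim implies, not an audited number:

```python
# All inputs are assumptions from the paragraph above, not measured figures.
search_cost   = 5e6    # assumed cost of finding one such cause
funding_gap   = 10e6   # assumed funding gap per cause
multiplier    = 20     # assumed cost-effectiveness vs. top GiveWell charities
givewell_2021 = 520e6  # GiveWell's 2021 impact as implied by the 2.6/y claim

total_per_cause     = search_cost + funding_gap            # $15m per cause
givewell_equivalent = funding_gap * multiplier             # worth $200m to GiveWell
causes_per_year     = givewell_2021 / givewell_equivalent  # 2.6 causes/year
print(total_per_cause, givewell_equivalent, causes_per_year)
```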
And these causes seem pretty easy to find. CEARCH started in 2022 and has already found 4 causes that are 10x GiveWell even under my pessimistic assumptions above; CE and RP have found more. There are big funding gaps because there are many causes like this - and many large governments to lobby. We should aim to close these funding gaps as soon as possible, because that would help more people.
New charities are likely to fail and be less effective: CE's great work suggests that might not be true - a substantial number of their incubated charities report significant success. I'd also assume failure rates are already accounted for in exploratory research. Even if failure cut the expected impact by 50%, it wouldn't change the overall picture.
EAs won't waste their money on bad donations: If that were true, all EAs seeking to maximize expected value would roughly agree on where to donate. Instead, we see the community split into 4 main areas (global health & poverty, animals, existential risk, meta), and some people in EA simply don't and won't donate to some of them. This shows that at least part of the community might be donating to worse charities.
Imagine you have 2 investments that will return your money only in 10 years: a safe one with a modest guaranteed return, and a start-up with a small chance of a huge payoff but a higher expected value.
What would you choose? I'd bet on the start-up. With altruism there's no reason to be loss-averse, so the logic is even more solid.
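With made-up numbers, purely for illustration:

```python
# Hypothetical payoffs, purely for illustration - not from any real data.
safe_return = 1.5                     # guaranteed 1.5x after 10 years
p_success, startup_return = 0.10, 50  # start-up: 10% chance of 50x, else 0

ev_safe = safe_return                    # 1.5
ev_startup = p_success * startup_return  # 0.10 * 50 = 5.0

# A risk-neutral donor maximizes expected value, so the start-up wins
# even though it returns nothing 90% of the time.
print(ev_safe, ev_startup)
```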
I guess my conclusions are that we should spend more on cause prioritization and on supporting new charities (akin to CE). But then - how do we know when we've found a good-enough cause? The exploration-exploitation trade-off is impossible to navigate if you don't know how far exploration will take you.
EA is the smartest, most open community I know. I'm sure it will explore this.