In a comment on Benjamin Todd's article in favor of small donors, NunoSempere writes:
This article is kind of too "feel good" for my tastes. I'd also like to see a more angsty post that tries to come to grips with the fact that most of the impact is most likely not going to come from the individual people, and tries to see if this has any new implications, rather than justifying that all is good.
I am naturally an angsty person, and I don't carry much reputational risk, so this seemed like a natural fit.
I agree with NunoSempere that Benjamin's epistemics might be suffering from the nobility of his message. It's a feel-good encouragement to give, complete with a sympathetic photo of a very poor person who might benefit from your generosity. Because that message is so good and important, it requires a different style of writing and thinking than "let's try very hard to figure out what's true."
Additionally, I see Benjamin's post as a reaction to some popular myths. This is great, but we shouldn't mistake "some arguments against X are wrong" for "X is correct".
So as not to bury the lede: I think there are better uses of your time than earning-to-give. Specifically, you ought to do more entrepreneurial, risky, and hyper-ambitious direct work, while simultaneously considering weirder and more speculative small donations.
Funny enough, although this is framed as a "red-team" post, I think that Benjamin mostly agrees with that advice. You can choose to take this as evidence that the advice is robust to worldview diversification, or as evidence that I'm really bad at red-teaming and falling prey to justification drift.
In terms of epistemic status: I take my own arguments here seriously, but I don't see them as definitive. Specifically, this post is meant to counterbalance Benjamin's post, so you should read his first, or at least read it later as a counterbalance against this one.
1. Our default view should be that high-impact funding capacity is already filled.
Consider Benjamin's explanation for why donating to LTFF is so valuable:
I would donate to the Long Term Future Fund over the global health fund, and would expect it to be perhaps 10-100x more cost-effective (and donating to global health is already very good). This is mainly because I think issues like AI safety and global catastrophic biorisks are bigger in scale and more neglected than global health.
I absolutely agree that those issues are very neglected, but only among the general population. They're not at all neglected within EA. Specifically, the question we should be asking isn't "do people care enough about this", but "how far will my marginal dollar go?"
To answer that latter question, it's not enough to highlight the importance of the issue; you would have to argue that:
- There are longtermist organizations that are currently funding-constrained,
- Such that more funding would enable them to do more or better work,
- And this funding can't be met by existing large EA philanthropists.
It's not clear to me that any of these points are true. They might be, but Benjamin doesn't take the time to argue for them very rigorously. Lacking strong evidence, my default assumption is that funding capacity for extremely high-impact organizations well aligned with EA ideology will be filled by existing large donors.
Benjamin does admirably clarify that there are specific programs he has in mind:
there are ways that longtermists could deploy billions of dollars and still do a significant amount of good. For instance, CEPI is a $3.5bn programme to develop vaccines to fight the next pandemic.
At face value, CEPI seems great. But at the meta-level, I still have to ask: if CEPI is a good use of funds, why doesn't OpenPhil just fund it?
In general, my default view for any EA cause is always going to be:
- If this isn't funded by OpenPhil, why should I think it's a good idea?
- If this is funded by OpenPhil, why should I contribute more money?
You might feel that this whole section is overly deferential. The OpenPhil staff are not omniscient. They have limited research capacity. As Joy's Law states, "no matter who you are, most of the smartest people work for someone else."
But unlike in competitive business, I expect those very smart people to inform OpenPhil of their insights. If I did personally have an insight into a new giving opportunity, I would not proceed to donate; I would proceed to write up my thoughts on the EA Forum and get feedback. Since there's an existing popular venue for crowdsourcing ideas, I'm even less willing to believe that large EA foundations have simply missed a good opportunity.
Benjamin might argue that OpenPhil is just taking its time to evaluate CEPI, and we should fill its capacity with small donations in the meantime. That might be true, but would still greatly lower the expected impact of giving to CEPI. In this view, you're accelerating CEPI's agenda by however long it takes OpenPhil to evaluate them, but not actually funding work that wouldn't happen otherwise. And of course, if it's taking OpenPhil time to evaluate CEPI, I don't feel that confident that my 5 minutes of thinking about it should be decisive anyway.
When I say "our default view", I don't mean that this is the only valid perspective. I mean it's a good place to start, and we should then think about specific cases where it might not be true.
2. Donor coordination is difficult, especially with other donors thinking seriously about donor coordination.
Assuming that EA is a tightly knit, high-trust environment, there seems to be a way to avoid this whole debate. Don't try too hard to reason from first principles; just ask the relevant parties. Does OpenPhil think they're filling the available capacity? Do charities feel like they're funding-constrained despite support from large foundations?
The problem is that under Philanthropic Coordination Theory, there are altruistic reasons to lie, or at least not be entirely transparent. As GiveWell itself writes in their primer on the subject:
Alice and Bob are both considering supporting a charity whose room for more funding is $X, and each is willing to give the full $X to close that gap. If Alice finds out about Bob's plans, her incentive is to give nothing to the charity, since she knows Bob will fill its funding gap.
Large foundations are Bob in this situation, and small donors are Alice. Assuming GiveWell wants to maintain the incentive for small donors to give, they have to hide their plans.
But why would GiveWell even want to maintain the incentive? Why not just fill the entire capacity themselves? One simple answer is that GiveWell wants to keep more money for other causes. A better answer is that they don't want to breed dependence on a single large donor. As OpenPhil writes:
We typically avoid situations in which we provide >50% of an organization's funding, so as to avoid creating a situation in which an organization's total funding is "fragile" as a result of being overly dependent on us.
The optimistic upshot of this comment is that small donors are essentially matched 1:1. If GiveWell has already provided 50% of AMF's funding, then by giving AMF another $100, you "unlock" another $100 that GiveWell can provide without exceeding their threshold.
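To spell out the arithmetic behind that "unlocking" claim, here is a minimal sketch. It assumes the large funder is already sitting exactly at its 50% cap, which is my simplifying assumption for illustration, not something the OpenPhil quote states.

```python
CAP = 0.5  # the self-imposed ceiling: provide at most 50% of an org's funding

def funder_max(other_funding: float) -> float:
    # If the big funder gives g and everyone else gives `other_funding`, the cap
    # requires g <= CAP * (g + other_funding), i.e. g <= other_funding * CAP / (1 - CAP).
    return other_funding * CAP / (1 - CAP)

print(funder_max(1_000))                      # 1000: at a 50% cap the match is exactly 1:1
print(funder_max(1_100) - funder_max(1_000))  # 100: your extra $100 "unlocks" another $100
```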
But the most pessimistic upshot is that assuming charities have limited capacity, it will be filled by either GiveWell or other small donors. In the extreme version of this view, a donation to AMF doesn't really buy more bednets, it's essentially a donation to GiveWell, or even a donation to Dustin Moskovitz.
Is that so bad? Isn't donating to GiveWell good? That's the argument I'll address in the next section. [1]
3. Benjamin's views on funging don't make sense.
Okay, so maybe a donation to AMF is really a donation to GiveWell, but isn't that fine? After all, it just frees GiveWell to use the money on the next most valuable cause, which is still pretty good.
This seems to be the view Benjamin holds. As he writes, if you donate $1000 to a charity that is OpenPhil backed, "then that means that Open Philanthropy has an additional $1,000 which they can grant somewhere else within their longtermist worldview bucket." The upshot is that the counterfactual impact of your donation is equivalent to the impact of OpenPhil's next-best cause, which is probably a bit lower, but still really good.
The nuances here depend a bit on your model of how OpenPhil operates. There seem to be a few reasonable views:
- OpenPhil will fund the most impactful things up to $Y/year.
- OpenPhil will fund anything with an expected cost-effectiveness of above X QALYs/$.
- OpenPhil tries to fund every highly impactful cause it has the time to evaluate.
In the first view, Benjamin is right. OpenPhil's funding is freed up, and they can give it to something else. But I don't really believe this view. By Benjamin's own estimate, there's around $46 billion committed to EA causes. He goes on to say that: "I estimate the community is only donating about 1% of available capital per year right now, which seems too low, even for a relatively patient philanthropist."
What about the second view? In that case, you're not freeing up any money since OpenPhil just stops donating once it's filled the available capacity.
The third view seems most plausible to me, and is equally pessimistic. As Benjamin writes further on:
available funding has grown pretty quickly, and the amount of grantmaking capacity and research has not yet caught up. I expect large donors to start deploying a lot more funds over the coming years. This might be starting with the recent increase in funding for GiveWell.
But what exactly is "grantmaking capacity and research"? It would make sense if GiveWell has not had time to evaluate all possible causes and institutions, and so there are some opportunities that they're missing. It would not make sense that GiveWell is unable to give more money to AMF due to a research bottleneck.
That implies that you might be justified in giving to a cause that OpenPhil simply hasn't noticed (note the concerns in section 1), but not justified in giving more money to a cause OpenPhil already supports. If Benjamin's view is that EA foundations are research bottlenecked rather than funding bottlenecked, small donations don't "free up" more funding in an impact-relevant way.
4. Practical recommendations
Where does this all leave us? Surprisingly, about back where we started. Benjamin already noted in his post that "there's an opportunity to do even more good than earning to give".
First of all, think hard about causes that are high impact but that large EA foundations are unable to fund. As Scott Alexander wrote:
It's not exactly true that EA "no longer needs more money" - there are still some edge cases where it's helpful; a very lossy summary might be "things it would be too weird and awkward to ask Moskovitz + Tuna to spend money on".
This is not exhaustive, but a short list of limitations facing large foundations includes:
- PR risk: It's not worth funding a sperm bank for Nobel Prize winners that might later get you labeled a racist. See also the Copenhagen Interpretation of Ethics: it might not be worth funding a highly imperfect intervention, even if it's net good.
- More generally, it might not be worth funding an intervention that has a 90% chance of going well, but a 10% chance of going really poorly.
- Small grants: When he launched Emergent Ventures, Tyler Cowen explained that "the high fixed costs of processing any request discriminate against very small proposals". E.g., it's not even worth OpenPhil's time to consider, evaluate and dispense a $500 grant.
To be clear, I don't think these are particular failings of OpenPhil or EA Funds. Actually, I think that EA foundations do better on these axes than pretty much every other foundation. But there are still opportunities for small individual donors to exploit.
More positively, what are the opportunities I think you should pursue?
- Fund individuals: As Dan Luu writes, some work depends entirely on who's doing it. If you know a specific person whose work you think is likely to be high-impact, and if some of that knowledge is not institutionally legible, you should consider just funding them yourself.
- Fund weird things: A decent litmus test is "would it be really embarrassing for my parents, friends or employer to find out about this?" If the answer is yes, more strongly consider making the grant.
  - Of course, the weird things are still subject to more conventional cost-effectiveness estimates.
- Fund yourself: Instead of earning-to-give, earn-to-retire, and then do direct work yourself with the freedom to ignore what's "fundable" or laudable.
  - You might worry that "unfundable" work is unlikely to be high-impact, but again, you should think specifically about what work large foundations can't fund.
Outside of funding, try to:
- Be more ambitious: There's some tradeoff curve between cost-effectiveness and scale. When EA was more funding constrained, a $1M grant with 10x ROI (roughly $10M of impact) looked better than a $1B grant with 5x ROI (roughly $5B of impact), but now the reverse is true.
- Be more entrepreneurial: Similarly, there's a tradeoff between making marginal improvements to a high-impact org and starting a new org with potentially lower impact. When EA was more talent constrained, working at existing EA orgs was higher impact. A lot of people would argue that it's still very high impact, but relatively speaking, the value of starting a brand new org is higher.
  - This doesn't mean starting Generic Longtermist Research Firm X; it means trying to do work outside the scope of current organizations.
But as I mentioned at the outset, that's all fairly conventional, and advice that Benjamin would probably agree with. So given that my views differ, where are the really interesting recommendations?
The answer is that I believe in something I'll call "high-variance angel philanthropy". But it's a tricky idea, so I'll leave it for another post.
[1] Is this whole section an infohazard? If thinking too hard about Philanthropic Coordination Theory risks leading to weird adversarial game theory, isn't it better for us to be a little naive? OpenPhil and GiveWell have already discussed it, so I don't personally feel bad about "spilling the beans". In any case, OpenPhil's report details a number of open questions here, and I think the benefits of discussing solutions publicly outweigh the harms of increasing awareness. More importantly, I just don't think this view is hard to come up with on your own. I would rather make it public and thus publicly refutable than risk a situation where a bunch of edgelords privately think donations are useless due to crowding-out but don't have a forum for subjecting those views to public scrutiny. ↩︎
Benjamin Todd replies:

Thanks for red teaming – it seems like lots of people are having similar thoughts, so it's useful to have them all in one place.
First off, I agree with your headline point that there are better uses of most people's time than earning to give, namely more entrepreneurial, risky, and hyper-ambitious direct work.
I say this in the introduction (and my EA Global talk). The point I’m trying to get across is that earning to give to top EA causes is still perhaps (to use made-up numbers) in the 98th percentile of impactful things you might do; while these things might be, say, 99.5-99.9th percentile. I agree my post might not have made this sufficiently salient. It's really hard to correct one misperception without accidentally encouraging one in the opposite direction.
The arguments in your post seem to imply that additional funding has near zero value. My prior is that more money means more impact, but at a diminishing rate.
Before going into your specific points, I’ll try to describe an overall model of what happens when more funds come into the community, which will explain why more money means more but diminishing impact.
Very roughly, EA donors try to fund everything above a ‘bar’ of cost-effectiveness (i.e. value per dollar). Most donors (especially large ones) are reasonably committed to giving away a certain portion of their funds unless cost-effectiveness drops very low, which means that the bar is basically set by how impactful they expect the ‘final dollar’ they give away in the future to be. This means that if more money shows up, they reduce the bar in the long run (though capacity constraints may make this take a while). Additional funding is still impactful, but because the bar has been dropped, each dollar generates a little less value than before.
Here’s a bit more detail of a toy model. I’ll focus on the longtermist case since I think it’s harder to see what’s going on there.
Suppose longtermist donors have $10bn. Their aim might be to buy as much existential risk reduction over the coming decades as possible with that $10bn, for instance, to get as much progress as possible on the AI alignment problem.
Donations to things like the AI alignment problem have diminishing returns – probably roughly logarithmic. Maybe the first $1bn has a cost-effectiveness of 1000:1. This means that it generates 1000 units of value (e.g. utils, x-risk reduction) per $1 invested. The next $10bn returns 100:1, the next $100bn returns 10:1, the next $1,000bn is 2:1, and additional funding after that isn't cost-effective. (In reality, it's a smoothly declining curve.)
If longtermist donors currently have $10bn (say), then they can fund the entire first $1bn and $9bn of the next tranche. This means their current funding bar is 100:1 – so they should aim to take any opportunities above this level.
Now suppose some smaller donors show up with $1m between them. Now in total there is $10.001bn available for longtermist causes. The additional $1m goes into the 100:1 tranche, and so has a cost-effectiveness of 100:1. This is a bit lower than the average cost-effectiveness of the first $10bn (which was 190:1), but is the same as marginal donations by the original donors and still very cost-effective.
Now instead suppose another mega-donor shows up with $10bn, so the donors have $20bn in total. They’re able to spend $1bn at 1000:1, then $10bn at 100:1 and then the remaining $9bn is spent on the 10:1 tranche. The additional $10bn had a cost-effectiveness of 19:1 on average. This is lower than the 190:1 of the first $10bn, but also still worth doing.
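To make the arithmetic explicit, here is a minimal sketch of this toy model in Python. The tranche sizes and ratios are just the made-up numbers from the paragraphs above, and the function names are mine.

```python
# Toy model of diminishing returns to longtermist funding (made-up numbers).
TRANCHES = [  # (size in $bn, value generated per $1)
    (1, 1000),
    (10, 100),
    (100, 10),
    (1000, 2),
]

def total_value(budget_bn: float) -> float:
    """Value produced by spending budget_bn, filling the best tranches first."""
    value, remaining = 0.0, budget_bn
    for size, ratio in TRANCHES:
        spent = min(size, remaining)
        value += spent * ratio
        remaining -= spent
        if remaining <= 0:
            break
    return value

def marginal_ratio(budget_bn: float, extra_bn: float) -> float:
    """Average cost-effectiveness of the last extra_bn added on top of budget_bn."""
    return (total_value(budget_bn + extra_bn) - total_value(budget_bn)) / extra_bn

print(total_value(10) / 10)        # 190.0  -> the first $10bn averages 190:1
print(marginal_ratio(10, 0.001))   # ~100   -> an extra $1m lands in the 100:1 tranche
print(marginal_ratio(10, 10))      # 19.0   -> an extra $10bn averages 19:1
```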
How does this play out over time?
Suppose you have $10bn to give, and want to donate it over 10 years.
If we assume hinginess isn’t changing & ignore investment returns, then the simplest model is that you’ll want to donate about $1bn per year for 10 years.
The idea is that if the rate of good opportunities is roughly constant, and you’re trying to hit a particular bar of cost-effectiveness, then you’ll want to spread out your giving. (In reality you’ll give more in years where you find unusually good things, and vice versa.)
Now suppose a group of small donors show up who have $1bn between them. Then the ideal is that the community donates $1.1bn per year for 10 years – which requires dropping their bar (but only a little).
One way this could happen is for the small donors to give $100m per year for 10 years (‘topping up’). Another option is for the small donors to give $1bn in year 1 – then the correct strategy for the megadonor is to only give $100m in year 1 and give $1.1bn per year for the remaining 9 (‘partial funging’).
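A quick sketch of those two schedules, using the same stylized numbers; the point is just that "topping up" and "partial funging" leave the community giving the same $1.1bn per year.

```python
YEARS = 10
MEGADONOR_TOTAL, SMALL_TOTAL = 10.0, 1.0  # $bn available to each group

# "Topping up": small donors spread their $1bn evenly over the decade.
topping_up = [(MEGADONOR_TOTAL + SMALL_TOTAL) / YEARS] * YEARS

# "Partial funging": small donors give everything in year 1; the megadonor
# scales back that year and gives $1.1bn per year for the remaining nine.
mega = [0.1] + [1.1] * (YEARS - 1)    # sums to $10bn
small = [1.0] + [0.0] * (YEARS - 1)   # sums to $1bn
partial_funging = [m + s for m, s in zip(mega, small)]

print(topping_up)       # 1.1 every year
print(partial_funging)  # also 1.1 every year
```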
A big complication is that the set of opportunities isn’t fixed – we can discover new opportunities through research or create them via entrepreneurship. (This is what I mean by ‘grantmaking capacity and research’.)
It takes a long time to scale up a foundation, and longtermism as a whole is still tiny. This means there’s a lot of scope to find or create better opportunities. So donors will probably want to give less at the start of the ten years, and more towards the end when these opportunities have been found (and earning investment returns in the meantime).
Now I can use this model to respond to some of your specific points:
On CEPI: Open Phil doesn't fund it because they think they can find opportunities that are 10-100x more cost-effective in the coming years.
This doesn’t, however, mean donating to CEPI has no value. I think CEPI could make a meaningful contribution to biosecurity (and given my personal cause selection, likely similarly or more effective than donating to GiveWell-recommended charities).
An opportunity can be below Open Phil’s current funding bar if Open Phil expects to find even better opportunities in the future (as more opportunities come along each year, and as they scale up their grantmaking capacity), but that doesn’t mean it wouldn’t be ‘worth funding’ if we had even more money.
My point isn’t that people should donate to CEPI, and I haven’t thoroughly investigated it myself. It’s just meant as an illustration of how there are many more opportunities at lower levels of cost-effectiveness. I actually think both small donors and Open Phil can have an impact greater than funding CEPI right now.
(Of course, Open Phil could be wrong. Maybe they won’t discover better opportunities, or EA funding will grow faster than they expect, and their bar today should be lower. In this case, it will have been a mistake not to donate to CEPI now.)
It’s true that it’s not easy to beat Open Phil in terms of effectiveness, but this line of reasoning seems to imply that Open Phil is able to drive cost-effectiveness to negligible levels in all causes of interest. Actually Open Phil is able to fund everything above a certain bar, and additional small donations have a cost-effectiveness similar to that bar.
You're right that donations to AMF probably don't buy more bednets, since AMF is not the marginal opportunity any more (I think, not sure about that). Rather, additional donations to global health get added to the margin of GiveWell donations over the long term, which Open Phil and GiveWell estimate has a cost-effectiveness of about 7x GiveDirectly / saving the life of a child under 5 for $4,500.
You're also right that as additional funding comes in, the bar goes down, and that might induce some donors to stop giving altogether (e.g. maybe people are willing to donate above a certain level of cost-effectiveness, but not below).
However, I think we’re a long way from that point. I expect Dustin Moskovitz would still donate almost all his money at GiveDirectly-levels of cost-effectiveness, and even just within global health, we’re able to hit levels at least 5x greater than that right now.
Raising everyone in the world above the extreme poverty line would cost perhaps $100bn per year (footnote 8 here), so we’re a long way from filling everything at a GiveDirectly level of cost-effectiveness – we’d need about 50x as much capital as now to do that, and that’s ignoring other cause areas.
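As a rough check on that "~50x" figure: using Benjamin's earlier estimate of ~$46bn of committed capital, and assuming (my assumption, for illustration) a sustainable payout of around 5% of capital per year, the arithmetic comes out in the same ballpark.

```python
current_capital_bn = 46         # Benjamin's estimate of capital committed to EA causes ($bn)
poverty_cost_per_year_bn = 100  # rough annual cost of ending extreme poverty ($bn)
payout_rate = 0.05              # assumed sustainable spending as a share of capital

needed_capital_bn = poverty_cost_per_year_bn / payout_rate
print(needed_capital_bn)                        # 2000.0 -> roughly $2 trillion
print(needed_capital_bn / current_capital_bn)   # ~43x current capital, close to the "~50x" cited
```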
I think view (2) is closest, but the claim that you're not freeing up any money (because Open Phil just stops donating once it has filled the available capacity) is incorrect:
What actually happens is that as more funding comes in, Open Phil (& other donors) slightly reduces its bar, so that the total donated is higher, and cost-effectiveness a little lower. (Which might take several years.)
Why doesn't Open Phil drop its bar already, especially given that they're only spending ~1% of available capital per year? Ideally they'd be spending perhaps more like 5% of available capital per year. The reason this isn't higher already is that growth in grantmaking capacity, research and the community will make it possible to find even more effective opportunities in the future. I expect Open Phil will scale up its grantmaking several fold over the coming decade. It looks like this is already happening within neartermism.
One way to steelman your critique, would be to push on talent vs. funding constraints. Labour and capital are complementary, but it’s plausible the community has more capital relative to labour than would be ideal, making additional capital less valuable. If the ratio became sufficiently extreme, additional capital would start to have relatively little value. However, I think we could actually deploy billions more without any additional people and still achieve reasonable cost-effectiveness. It’s just that I think that if we had more labour (especially the types of labour that are most complementary with funding), the cost-effectiveness would be even higher.
Finally, on practical recommendations, I agree with you that small donors have the potential to make donations even more effective than Open Phil's current funding bar by pursuing strategies similar to those you suggest (that's what my section 3 covers – though I don't agree that grants with PR issues are a key category). But simply joining Open Phil in funding important issues like AI safety and global health still does a lot of good.
In short, world GDP is $80 trillion. The interest on EA funds is perhaps $2.5bn per year, so that’s the sustainable amount of EA spending per year. This is about 0.003% of GDP. It would be surprising if that were enough to do all the effective things to help others.
A reply from another commenter:

I want to 'second' some key points you made (which I was going to make myself). The main theme is that these 'absolute' thresholds are not absolute; these are simplified expressions of the true optimization problem.
The real thresholds will be adjusted in light of available funding, opportunities, and beliefs about future funding.
See comments (mine and others) on the misconception of 'room for more funding'... the "RFMF" idea must be either an approximate relative judgment ('past this funding, we think other opportunities may be better') or a short-term capac...