Benjamin_Todd

Despite billions of extra funding, small donors can still have a significant impact

Thanks, fixed. (https://twitter.com/ben_j_todd/status/1462882167667798021)

A Red-Team Against the Impact of Small Donations

It's hard to know – most valuations of the community's human capital are bound up with the available financial capital. One way to frame the question is to consider how much the community could earn if everyone tried to earn to give. I agree it's plausible that would be higher than the current income on the capital, but I think it could also be a lot less.

A Red-Team Against the Impact of Small Donations

Thanks for red teaming – it seems like lots of people are having similar thoughts, so it’s useful to have them all in one place.

First off, I agree with this:

I think there are better uses of your time than earning-to-give. Specifically, you ought to do more entrepreneurial, risky, and hyper-ambitious direct work, while simultaneously considering weirder and more speculative small donations.

I say this in the introduction (and my EA Global talk). The point I’m trying to get across is that earning to give to top EA causes is still perhaps (to use made-up numbers) in the 98th percentile of impactful things you might do; while these things might be, say, 99.5-99.9th percentile. I agree my post might not have made this sufficiently salient. It's really hard to correct one misperception without accidentally encouraging one in the opposite direction.

The arguments in your post seem to imply that additional funding has near zero value. My prior is that more money means more impact, but at a diminishing rate.

Before going into your specific points, I’ll try to describe an overall model of what happens when more funds come into the community, which will explain why more money means more but diminishing impact.

Very roughly, EA donors try to fund everything above a ‘bar’ of cost-effectiveness (i.e. value per dollar). Most donors (especially large ones) are reasonably committed to giving away a certain portion of their funds unless cost-effectiveness drops very low, which means that the bar is basically set by how impactful they expect the ‘final dollar’ they give away in the future to be. This means that if more money shows up, they reduce the bar in the long run (though capacity constraints may make this take a while). Additional funding is still impactful, but because the bar has been dropped, each dollar generates a little less value than before.

Here’s a bit more detail of a toy model. I’ll focus on the longtermist case since I think it’s harder to see what’s going on there.

Suppose longtermist donors have $10bn. Their aim might be to buy as much existential risk reduction over the coming decades as possible with that $10bn, for instance, to get as much progress as possible on the AI alignment problem.

Donations to things like the AI alignment problem have diminishing returns – the curve is probably roughly logarithmic. Maybe the first $1bn has a cost-effectiveness of 1000:1, meaning it generates 1,000 units of value (e.g. utils, x-risk reduction) per $1 invested. The next $10bn returns 100:1, the next $100bn returns 10:1, the next $1,000bn returns 2:1, and additional funding after that isn't cost-effective. (In reality, it's a smoothly declining curve rather than discrete tranches.)

If longtermist donors currently have $10bn (say), then they can fund the entire first $1bn and $9bn of the next tranche. This means their current funding bar is 100:1 – so they should aim to take any opportunities above this level.

Now suppose some smaller donors show up with $1m between them. Now in total there is $10.001bn available for longtermist causes. The additional $1m goes into the 100:1 tranche, and so has a cost-effectiveness of 100:1. This is a bit lower than the average cost-effectiveness of the first $10bn (which was 190:1), but is the same as marginal donations by the original donors and still very cost-effective.

Now instead suppose another mega-donor shows up with $10bn, so the donors have $20bn in total. They’re able to spend $1bn at 1000:1, then $10bn at 100:1 and then the remaining $9bn is spent on the 10:1 tranche. The additional $10bn had a cost-effectiveness of 19:1 on average. This is lower than the 190:1 of the first $10bn, but also still worth doing.
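To make the arithmetic above concrete, here's a minimal Python sketch of the toy model. The tranche sizes and rates are the made-up numbers from the comment, and `total_value` is a hypothetical helper name, not anything from a real model:

```python
# Toy model of diminishing returns: funding fills tranches of declining
# cost-effectiveness (value generated per $1), best tranches first.
TRANCHES_BN = [  # (size in $bn, value generated per $1)
    (1, 1000),
    (10, 100),
    (100, 10),
    (1000, 2),
]

def total_value(funds_bn: float) -> float:
    """Total value from spending funds_bn ($bn), filling the best tranches first."""
    value, remaining = 0.0, funds_bn
    for size, rate in TRANCHES_BN:
        spent = min(remaining, size)
        value += spent * rate
        remaining -= spent
        if remaining <= 0:
            break
    return value

# Average cost-effectiveness of the first $10bn: (1*1000 + 9*100) / 10 = 190:1
print(total_value(10) / 10)                       # 190.0
# Average cost-effectiveness of the *next* $10bn: (1*100 + 9*10) / 10 = 19:1
print((total_value(20) - total_value(10)) / 10)   # 19.0
```

The key point the sketch illustrates: each extra tranche of money is still worth giving, but its average cost-effectiveness falls roughly tenfold as earlier, better tranches get filled.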

How does this play out over time?

Suppose you have $10bn to give, and want to donate it over 10 years.

If we assume hinginess isn’t changing & ignore investment returns, then the simplest model is that you’ll want to donate about $1bn per year for 10 years.

The idea is that if the rate of good opportunities is roughly constant, and you’re trying to hit a particular bar of cost-effectiveness, then you’ll want to spread out your giving. (In reality you’ll give more in years where you find unusually good things, and vice versa.)

Now suppose a group of small donors show up who have $1bn between them. Then the ideal is that the community donates $1.1bn per year for 10 years – which requires dropping their bar (but only a little).

One way this could happen is for the small donors to give $100m per year for 10 years (‘topping up’). Another option is for the small donors to give $1bn in year 1 – then the correct strategy for the megadonor is to only give $100m in year 1 and give $1.1bn per year for the remaining 9 (‘partial funging’).
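The equivalence of 'topping up' and 'partial funging' can be sketched with the numbers above (a toy illustration, not a claim about any donor's actual schedule):

```python
YEARS = 10

# 'Topping up': the megadonor keeps giving $1bn/yr; small donors
# spread their $1bn evenly as an extra $0.1bn/yr.
mega_topup  = [1.0] * YEARS
small_topup = [0.1] * YEARS

# 'Partial funging': small donors give their whole $1bn in year 1,
# so the megadonor gives only $0.1bn that year and $1.1bn thereafter.
mega_funge  = [0.1] + [1.1] * (YEARS - 1)
small_funge = [1.0] + [0.0] * (YEARS - 1)

for mega, small in [(mega_topup, small_topup), (mega_funge, small_funge)]:
    yearly = [m + s for m, s in zip(mega, small)]
    assert all(abs(y - 1.1) < 1e-9 for y in yearly)  # community gives $1.1bn/yr
    assert abs(sum(mega) - 10.0) < 1e-9              # megadonor still gives $10bn total
```

Either way, the community's aggregate giving is $1.1bn/yr for 10 years; the two schedules differ only in who writes the cheques in which year.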

A big complication is that the set of opportunities isn’t fixed – we can discover new opportunities through research or create them via entrepreneurship. (This is what I mean by ‘grantmaking capacity and research’.)

It takes a long time to scale up a foundation, and longtermism as a whole is still tiny. This means there’s a lot of scope to find or create better opportunities. So donors will probably want to give less at the start of the ten years, and more towards the end when these opportunities have been found (and earning investment returns in the meantime). 

Now I can use this model to respond to some of your specific points:

At face value, CEPI seems great. But at the meta-level, I still have to ask, if CEPI is a good use of funds, why doesn't OpenPhil just fund it?

Open Phil doesn’t fund it because they think they can find opportunities that are 10-100x more cost-effective in the coming years.

This doesn’t, however, mean donating to CEPI has no value. I think CEPI could make a meaningful contribution to biosecurity (and given my personal cause selection, likely similarly or more effective than donating to GiveWell-recommended charities).

An opportunity can be below Open Phil’s current funding bar if Open Phil expects to find even better opportunities in the future (as more opportunities come along each year, and as they scale up their grantmaking capacity), but that doesn’t mean it wouldn’t be ‘worth funding’ if we had even more money. 

My point isn’t that people should donate to CEPI, and I haven’t thoroughly investigated it myself. It’s just meant as an illustration of how there are many more opportunities at lower levels of cost-effectiveness. I actually think both small donors and Open Phil can have an impact greater than funding CEPI right now.

(Of course, Open Phil could be wrong. Maybe they won’t discover better opportunities, or EA funding will grow faster than they expect, and their bar today should be lower. In this case, it will have been a mistake not to donate to CEPI now.)


In general, my default view for any EA cause is always going to be:

If this isn't funded by OpenPhil, why should I think it's a good idea?

If this is funded by OpenPhil, why should I contribute more money?

It’s true that it’s not easy to beat Open Phil in terms of effectiveness, but this line of reasoning seems to imply that Open Phil is able to drive cost-effectiveness to negligible levels in all causes of interest.  Actually Open Phil is able to fund everything above a certain bar, and additional small donations have a cost-effectiveness similar to that bar.

In the extreme version of this view, a donation to AMF doesn't really buy more bednets, it's essentially a donation to GiveWell, or even a donation to Dustin Moskovitz.

You’re right that donations to AMF probably doesn’t buy more bednets, since AMF is not the marginal opportunity any more (I think, not sure about that). Rather, additional donations to global health get added to the margin of GiveWell donations over the long term, which Open Phil and GiveWell estimate has a cost-effectiveness of about 7x GiveDirectly / saving the life of a child under 5 for $4,500.

You’re also right that as additional funding comes in, the bar goes down, and that might induce some donors to stop giving all together (e.g. maybe people are willing to donate above a certain level of cost-effectiveness, but not below.

However, I think we’re a long way from that point. I expect Dustin Moskovitz would still donate almost all his money at GiveDirectly-levels of cost-effectiveness, and even just within global health, we’re able to hit levels at least 5x greater than that right now.

Raising everyone in the world above the extreme poverty line would cost perhaps $100bn per year (footnote 8 here), so we’re a long way from filling everything at a GiveDirectly level of cost-effectiveness – we’d need about 50x as much capital as now to do that, and that’s ignoring other cause areas.

There seem to be a few reasonable views:

1. OpenPhil will fund the most impactful things up to $Y/year.

2. OpenPhil will fund anything with an expected cost-effectiveness of above X QALYs/$.

3. OpenPhil tries to fund every highly impactful cause it has the time to evaluate.

I think view (2) is closest, but this part is incorrect:

What about the second view? In that case, you're not freeing up any money since OpenPhil just stops donating once it's filled the available capacity.

What actually happens is that as more funding comes in, Open Phil (& other donors) slightly reduces its bar, so that the total donated is higher, and cost-effectiveness a little lower. (Which might take several years.)

Why doesn’t Open Phil drop its bar already, especially given that they’re only spending ~1% of available capital per year? Ideally they’d be spending perhaps more like 5% of available capital per year. The reason this isn’t higher already is because growth in grantmaking capacity, research and the community will make it possible to find even more effective opportunities in the future. I expect Open Phil will scale up its grantmaking several fold over the coming decade. It looks like this is already happening within neartermism.

One way to steelman your critique would be to push on talent vs. funding constraints. Labour and capital are complementary, but it's plausible the community has more capital relative to labour than would be ideal, making additional capital less valuable. If the ratio became sufficiently extreme, additional capital would start to have relatively little value. However, I think we could actually deploy billions more without any additional people and still achieve reasonable cost-effectiveness. It's just that I think that if we had more labour (especially the types of labour that are most complementary with funding), the cost-effectiveness would be even higher.

Finally, on practical recommendations, I agree with you that small donors have the potential to make donations even more effective than Open Phil's current funding bar by pursuing strategies similar to those you suggest (that's what my section 3 covers – though I don't agree that grants with PR issues are a key category). But simply joining Open Phil in funding important issues like AI safety and global health still does a lot of good.

In short, world GDP is $80 trillion. The interest on EA funds is perhaps $2.5bn per year, so that’s the sustainable amount of EA spending per year. This is about 0.003% of GDP. It would be surprising if that were enough to do all the effective things to help others.
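The closing arithmetic can be checked in two lines (using only the figures given in the comment):

```python
world_gdp   = 80e12   # world GDP: $80 trillion
ea_spending = 2.5e9   # sustainable EA spending: ~$2.5bn per year

share = ea_spending / world_gdp
print(f"{share:.4%} of world GDP")   # → 0.0031%, i.e. ~0.003%
```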


 

Despite billions of extra funding, small donors can still have a significant impact

There isn't a hard cutoff, but one relevant boundary is when you can ignore the other issue for practical purposes. At 10-100x differences, other factors like personal fit or finding an unusually good opportunity can offset differences in cause effectiveness. At, say, 10,000x, they can't.

Sometimes people also suggest that e.g. existential risk reduction is 'astronomically' more effective than other causes (e.g. 10^10 times), but I don't agree with that for a lot of reasons.

Despite billions of extra funding, small donors can still have a significant impact

That's fair - the issue is there's a countervailing force in that OP might just fill 100% of their budget themselves if it seems valuable enough. My overall guess is that you probably get less than 1:1 leverage most of the time.

Despite billions of extra funding, small donors can still have a significant impact

I think this dynamic has sometimes applied in the past.

However, Open Philanthropy are now often providing 66%, and sometimes 100%, so I didn't want to mention this as a significant benefit.

There might still be some leverage in some cases, but less than 1:1. Overall, I think a clearer way to think about this is in terms of the value of having a diversified donor base, which I mention in the final section.

AI Safety Needs Great Engineers

+1 to this!

If you're a software engineer considering transitioning into AI safety, we have a guide on how to do it, along with an accompanying podcast interview.

There are also many other ways software engineers can use their skills for direct impact, including in biosecurity, by transitioning into information security, by building systems at EA orgs, or in various parts of government.

To get more ideas, we have 180+ engineering positions on our job board.

Despite billions of extra funding, small donors can still have a significant impact

There are no sharp cutoffs – just gradually diminishing returns.

An org can pretty much always find a way to spend 1% more money and have a bit more impact. And even if an individual org appears to have a sharp cut off, we should really be thinking about the margin across the whole community, which will be smooth. Since the total donated per year is ~$400m, adding $1000 to that will be about equally as effective as the last $1000 donated.

 

You seem to be suggesting that Open Phil might be overfunding orgs so that their marginal dollars are not actually effective.

But Open Phil believes it can spend marginal dollars at ~7x GiveDirectly.

I think what's happening is that Open Phil is taking up opportunities down to ~7x GiveDirectly, and so if small donors top up those orgs, those extra donations will be basically as effective as 7x GiveDirectly (in practice negligibly lower).

 

Despite billions of extra funding, small donors can still have a significant impact

Yes, my main attempt to discuss the implications of the extra funding is in the Is EA growing? post and my talk at EAG. This post was aimed at a specific misunderstanding that seems to have come up. Though, those posts weren't angsty either.

Despite billions of extra funding, small donors can still have a significant impact

This is the problem with the idea of 'room for funding'. There is no single amount of funding a charity 'needs'. In reality there's just a diminishing return curve. Additional donations tend to have a little less impact, but this effect is very small when we're talking about donations that are small relative to the charity's budget (if there's only one charity you want to support), or small relative to the EA community as a whole if you take a community perspective.
