In a recent comment, Ben Todd of 80,000 Hours wrote about his desire to see more EA entrepreneurs who can scale their charities to spend larger amounts of money:
I'm especially excited about finding people who could run $100m+ per year 'megaprojects', as opposed to more non-profits in the $1-$10m per year range, though I agree this might require building a bigger pipeline of smaller projects.
He later tweeted:
It's striking that the projects that were biggest in pre-2015 (OP, GiveWell, MIRI, CEA, 80k, FHI) are still among the biggest today, when additional resources should make new types of project possible.
It is striking and surprising that these are still some of the largest projects in the EA community. However, it's not surprising that these types of projects aren't spending $100 million or more each year. Setting aside regranting, GiveWell's largest expense in 2020 was staff salaries, at just over $3 million; in total it spent about $6 million (excluding regranting). GiveWell would have to grow to roughly 17 times its current size to become a $100 million 'megaproject' [1].
It is very hard for a charity to scale to more than $100 million per year without delivering a physical product. If, like GiveWell, you spend half your money on staff, at an average of $200,000 per employee (including benefits and anything you owe to the government), you'd still need at least 250 employees to spend $100 million a year. For comparison with GiveWell's $6 million in expenditure, the Against Malaria Foundation spent $65 million in 2020 on delivering bednets.
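To make the arithmetic in the last two paragraphs explicit, here's a quick back-of-the-envelope calculation. The 50% staff share and $200,000 per-employee cost are the illustrative assumptions stated above, not actual GiveWell budget lines:

```python
# Back-of-the-envelope scaling arithmetic, using the illustrative
# figures quoted in the text (not actual budget projections).

MEGAPROJECT_BUDGET = 100_000_000  # Ben Todd's $100m/year threshold
GIVEWELL_SPEND = 6_000_000        # GiveWell's 2020 spending, excluding regranting
STAFF_SHARE = 0.5                 # assume ~half of spending goes to staff
COST_PER_EMPLOYEE = 200_000       # fully loaded: salary, benefits, employment taxes

growth_multiple = MEGAPROJECT_BUDGET / GIVEWELL_SPEND
employees_needed = MEGAPROJECT_BUDGET * STAFF_SHARE / COST_PER_EMPLOYEE

print(f"Growth needed: ~{growth_multiple:.0f}x")     # ~17x
print(f"Implied headcount: {employees_needed:.0f}")  # 250 staff
```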
It makes intuitive sense that charities delivering something tangible should spend more than charities with primarily desk-based staff, and that charities that deliver tangible benefits while remaining competitively cost-effective are particularly valuable. But most EA charities focus on desk-based work: by my count, 8 of the 13 charities incubated by Charity Entrepreneurship focus entirely on research or advocacy, with the rest focused on other low-cost interventions like purchasing radio ads or sending text messages. These charities are good and I'm glad that they exist! And when we're focusing on cost-effectiveness, one of the best ways to get a good cost-benefit ratio is to keep costs low. However, few of them seem likely to scale into $100 million megaprojects.
If EA wants to spend larger amounts of money cost-effectively, we will need to start identifying or founding charities with physical aspects to them. Suggested examples include researching vaccines for neglected diseases (or researching how to speed up vaccine development for novel viruses) and the Sentinel system for identifying new diseases. For more examples in the global poverty and climate space, the types of initiatives the Gates Foundation tends to invest in might be instructive: for example, developing low-carbon cement or gene-editing mosquitoes.
Being an executive at a charity that delivers a tangible product requires different skills from running a research or advocacy charity. A smaller charity will likely need to recruit all-rounders who are pretty good at strategy, finance, communications and more. In contrast, a $100 million organization will also need people with specialized skills and experience in areas like negotiating contracts or managing supply chains. If you want to start or join an EA charity that can scale to $100 million per year, you should consider developing skills in managing large-scale projects in industry, government or another large charity, in addition to building relationships and experience within the EA community.
In summary, charities are more likely to be 'megascalable' if they involve staff doing something other than sitting at a desk, so if you're keen to lead a huge EA project, consider developing delivery skills that are rarer in the EA community.
[1] The technical definition of 'megaproject' in the academic literature is unrelated to what Ben's talking about here - he's simply talking about very large projects.
Epistemic status: Moderate opinion, held weakly.
I think one thing that people, both in and outside of EA orgs, find confusing is that we don't have a sense of how high the standards of marginal cost-effectiveness ought to be before it's worth scaling at all. Related concepts include "Open Phil's last dollar" and "quality standards."
In global health I think there's a clear minimal benchmark (something like "$s given to GiveDirectly at >$10B/year scale"), but I don't think it's clear whether people should bother creating scalable charities that are slightly better in expectation (say 2x) than GiveDirectly, or whether they ought to have a plausible case for competing with marginal Malaria Consortium, AMF or deworming donations (which, given current disease burdens, the moral value of life vs economic benefits, etc., are estimated to be ~5-25x(?) as impactful as GiveDirectly).
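As a toy illustration of what's at stake in that choice (the $10m/year budget is hypothetical; the 2x and 5-25x multipliers are the ones quoted above):

```python
# Toy comparison of a new scalable charity against the two benchmarks,
# measured in "GiveDirectly-equivalent" dollars. All figures illustrative.

annual_budget = 10_000_000   # hypothetical new charity spending $10m/year
new_charity_multiplier = 2   # "slightly better in expectation (say 2x)"
top_charity_range = (5, 25)  # rough range quoted for marginal AMF /
                             # Malaria Consortium / deworming donations

new_value = annual_budget * new_charity_multiplier
lo, hi = (annual_budget * m for m in top_charity_range)

print(f"New charity:          ${new_value:,.0f} GD-equivalent/year")
print(f"Marginal top charity: ${lo:,.0f}-${hi:,.0f} GD-equivalent/year")
```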
In longtermism I think the situation is murkier. There's no minimal baseline at all (except maybe GiveDirectly again, which now relies more on moral beliefs than on empirical beliefs about the world), so I think people are just quite confused in general about whether what's worth scaling looks more like "90th percentile climate change intervention" vs "has a plausible shot of being the most important AI alignment intervention."
In animal welfare it's somewhere in between. I think corporate campaigns a) look like a promising marginal use of money and b) our uncertainty about their impact spans more like 2 orders of magnitude (rather than ~1 for global health and ~infinite for longtermism). But comparing scalable interventions to existing corporate campaigns is premised on there not being lots of $s that'd flood the animal welfare space in the future, and I think that's quite an uncertain proposition in practice.
Meta work is at least as confused as the object-level charities, because you're multiplying the uncertainty of doing the meta work by the uncertainty of how it feeds into the object-level work; so it should be more confused, not less.
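A quick Monte Carlo sketch of that compounding (purely illustrative; the ~10x spreads are made-up numbers, not estimates of any real intervention):

```python
# Illustration: multiplying an uncertain meta-level multiplier by an
# uncertain object-level impact yields a wider combined distribution than
# either factor alone (for independent lognormals, log-spreads add in
# quadrature).
import numpy as np

rng = np.random.default_rng(0)
sigma = np.log(10) / 3.29  # each factor's 90% interval spans ~10x
meta = rng.lognormal(0, sigma, 100_000)  # value multiplier of the meta work
obj = rng.lognormal(0, sigma, 100_000)   # value of the object-level work it feeds
combined = meta * obj

for name, x in [("meta", meta), ("object-level", obj), ("combined", combined)]:
    lo, hi = np.percentile(x, [5, 95])
    print(f"{name:>12}: 90% interval spans ~{hi / lo:.0f}x")
# combined spans ~26x vs ~10x for each factor alone: more confused, not less
```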
Personally, my best guess is that when people are confused about what quality standards to aim at, they default to either a) sputtering around or b) doing the highest-quality things possible, instead of consciously and carefully thinking about what things can scale while maintaining (or accepting slightly worse than) current quality. This means we currently implicitly overestimate the value of the last EA dollar.
I'm inside-view pretty convinced that last-dollar uncertainty is a really big deal in practice, yet many grantmakers seem to disagree (see e.g. comments here); I'm not sure where the intuition differences lie.