In a recent comment, Ben Todd of 80,000 Hours wrote about his desire to see more EA entrepreneurs who can scale their charities to spend larger amounts of money:

I'm especially excited about finding people who could run $100m+ per year 'megaprojects', as opposed to more non-profits in the $1-$10m per year range, though I agree this might require building a bigger pipeline of smaller projects.

He later tweeted:

It's striking that the projects that were biggest in pre-2015 (OP, GiveWell, MIRI, CEA, 80k, FHI) are still among the biggest today, when additional resources should make new types of project possible.

It is striking and surprising that these are still some of the largest projects in the EA community. However, it's not surprising that these types of projects aren't spending $100 million or more each year. Other than regranting, GiveWell's largest expense in 2020 was staff salaries - they spent just over $3 million on salaries. In total, they spent about $6 million (excluding regranting). GiveWell would have to grow to 20x the size in order to become a $100 million 'megaproject' [1].

It is very hard for a charity to scale to more than $100 million per year without delivering a physical product. If, like GiveWell, you spend half your money on staff, at an average of $200,000 per employee including benefits and anything you owe to the government, you'd still need at least 250 employees to spend $100 million a year. By comparison with GiveWell's $6 million in expenditure, the Against Malaria Foundation spent $65 million in 2020 on delivering bednets.
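
A rough sketch of that arithmetic, using only the illustrative figures above (not a model of any particular charity):

```python
# Rough sketch of the staffing arithmetic above; all figures are the
# illustrative ones from the paragraph, not data about any real charity.
annual_budget = 100_000_000     # target 'megaproject' spend per year, in USD
staff_share = 0.5               # fraction of spending going to staff, as with GiveWell
cost_per_employee = 200_000     # salary + benefits + employer obligations, per person-year

staff_budget = annual_budget * staff_share            # $50,000,000 on staff
employees_needed = staff_budget / cost_per_employee   # 250 employees
print(f"Employees needed: {employees_needed:.0f}")
```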

It makes intuitive sense that charities that deliver something tangible should spend more than charities with primarily desk-based staff, and that charities that deliver tangible benefits while remaining competitively cost-effective are particularly valuable, but most EA charities focus on desk-based work. By my count, 8 of the 13 charities incubated by Charity Entrepreneurship are focused entirely on research or advocacy, with the rest focused on other low-cost interventions like purchasing radio ads or sending text messages. These charities are good and I'm glad that they exist! And when we're focusing on cost-effectiveness, one of the best ways to achieve a good cost-benefit ratio is to lower costs. However, few of them seem likely to scale to a $100 million megaproject.

If EA wants to spend larger amounts of money cost-effectively, we will need to start identifying or founding charities with physical aspects to them. Some suggested examples are researching vaccines for neglected diseases (or researching how to speed up developing vaccines for novel viruses) or the Sentinel system for identifying new diseases. For more examples in the global poverty and climate space, the types of initiatives the Gates Foundation tends to invest in might be instructive - for example, developing low-carbon cement or gene-editing mosquitoes.

Being an executive at a charity that delivers a tangible product requires different skills to running a research or advocacy charity. A smaller charity will likely need to recruit all-rounders who are pretty good at strategy, finance, communications and more. In contrast, in a $100 million organization you will also need people with specialized skills and experience in areas like negotiating contracts or managing supply chains. If you want to start or join an EA charity that can scale to $100 million per year, you should consider developing skills in managing large-scale projects in industry, government or another large charity, in addition to building relationships and experience within the EA community.

In summary, charities are more likely to be 'megascalable' if they involve staff doing something other than sitting at a desk, so if you're keen to lead a huge EA project consider developing delivery skills that are rarer in the EA community.

[1] The technical definition of 'megaproject' in the academic literature is unrelated to what Ben's talking about here - he's simply talking about very large projects.

Comments

Agree with this. I just want to be super clear that I think entrepreneurs should optimise for something like cost-effectiveness x scale.

I think research & advocacy orgs can often be 10x more cost-effective than big physical projects, so a $10m research org might be as impactful as a $100m physical org, so it's sometimes going to be the right call.

But I think the EA mindset probably focuses a bit too much on cost-effectiveness rather than scale (since we approach it from the marginal donor perspective rather than the entrepreneur one). If we're also leadership constrained, we might prefer a smaller number of bigger projects, and the bigger projects often have bigger externalities.

Overall, I agree we should be considering big physical projects, and agree these probably require different skills.

The reason most EA founders (and aspiring founders) act as if money is scarce is that the lived experience of most EA founders is that money is hard to get. As far as I know, this is true in all cause areas, including longtermism.

Yes - part of the reason the funding overhang dynamic is happening in the first place is that it's really hard to think of a project that has a clearly net positive return from a longtermist perspective, and even harder to put it into practice.

Yeah, in the same thread Ben tweets:

4) There is plenty of funding, a fair number of interested junior employees, and also some ideas for megaprojects. The biggest bottleneck seems like leadership. Second would be more and better ideas.

But the EA Infrastructure Fund currently only has ~$65k available

If there is plenty of funding, is it just in the wrong place? Given Ben's latest post should we be encouraging donations to the EA Infrastructure Fund (and Long-Term Future Fund) rather than the Global Health and Development Fund, which currently has over $7m available?

But the EA Infrastructure Fund currently only has ~$65k available

Hi, thanks for mentioning this - I am the chairperson of the EA Infrastructure Fund and wanted to quickly comment on this: We do have room for more funding, but the $65k number is too low. As of one week ago, the EAIF had at least $290k available. (The website for me now does show $270k, not $65k.)

It is currently hard to get accurate numbers, including for ourselves at EA Funds, due to an accounting change at CEA. Apologies for any confusion this might cause. We will fix the number on the website as soon as possible, and will also soon provide more reliable info on our room for more funding in an EA Forum post or comment.

ETA: according to a new internal estimate, as of August 10th the EAIF had $444k available.

I have edited all our fund pages to include the following sentence:

Note: We are temporarily unable to display correct fund balances. Please ignore the balance listed below while we are fixing the issue.

I'd be happy to see more going to meta at the margin, though I'd want to caution against inferring much from how much the EA Infrastructure Fund has available right now.

The key question is something like "can they identify above-the-bar projects that are not getting funded otherwise?"

I believe the Infrastructure team has said they could fund a couple of million dollars worth of extra projects, and if so, I hope that gets funded.

Though even that also doesn't tell us much about the overall situation. Even in a world with a big funding overhang, we should expect there to be some gaps.

Epistemic status: Moderate opinion, held weakly.

I think one thing that people, both in and outside of EA orgs, find confusing is that we don't have a sense of how high the standards of marginal cost-effectiveness ought to be before it's worth scaling at all. Related concepts include "Open Phil's last dollar" and "quality standards".

In global health I think there's a clear minimal benchmark (something like "$s given to GiveDirectly at >$10B/year scales"), but I think it's not clear whether people should bother creating scalable charities that are slightly better in expectation (say 2x) than GiveDirectly, or if they ought to have a plausible case for competing with marginal Malaria Consortium, AMF or deworming donations (which I think are estimated, at current disease burdens, moral value of life vs economic benefits, etc., to be ~5-25x(?) the impact of GiveDirectly).

In longtermism I think the situation is murkier. There's no minimal baseline at all (except maybe GiveDirectly again, which is now more reliant on moral beliefs than on empirical beliefs about the world), so I think people are just quite confused in general about whether what's worth scaling looks more like "90th percentile climate change intervention" vs "has a plausible shot of being the most important AI alignment intervention."

In animal welfare it's somewhere in between. I think corporate campaigns a) look like a promising marginal use of money and b) our uncertainty about their impact spans more like 2 orders of magnitude (rather than ~1 for global health and ~infinite for longtermism). But comparing scalable interventions to existing corporate campaigns is premised on there not being lots of $s that'd flood the animal welfare space in the future, and I think this is a quite uncertain proposition in practice.

Meta is at least as confused as the object-level charities, because you're multiplying the uncertainty of doing the meta work by the uncertainty of how it feeds into the object-level work, so it should be more confused, not less.

Personally, my own best guess is that when people are confused about what quality standards to aim at, they default to either a) sputtering around or b) doing the highest-quality things possible, instead of consciously and carefully thinking about what things can scale while maintaining current quality (or accepting slightly worse), which means we currently implicitly overestimate the value of the last EA dollar.

I'm inside-view pretty convinced that last-dollar uncertainty is a really big deal in practice, yet many grantmakers seem to disagree (see e.g. comments here). I'm not sure where the intuition differences lie.

I agree this is a big issue, and my impression is many grantmakers agree.

In longtermism, I think the relevant benchmark is indeed something like OP's last dollar in the longtermism worldview bucket. Ideally, you'd also include the investment returns you'll earn between now and when that's spent. This is extremely uncertain.

Another benchmark would be something like offsetting CO2, which is most likely positive for existential risk and could be done at a huge scale. Personally, I hope we can find things that are a lot better than this, so I don't think it's the most relevant benchmark - more of a lower bound.

In some ways, meta seems more straightforward - the benchmark should be can you produce more than 1 unit of resources (NPV) per unit that you use?

I agree this is a big issue, and my impression is many grantmakers agree.

Hmm I'd love to see some survey results or a more representative sample. I often have trouble telling whether my opinions are contrarian or boringly mainstream! 

Another benchmark would be something like offsetting CO2, which is most likely positive for existential risk and could be done at a huge scale. Personally, I hope we can find things that are a lot better than this, so I don't think it's the most relevant benchmark - more of a lower bound.

I wonder if this is better or worse than buying up fractions of AI companies?

In some ways, meta seems more straightforward - the benchmark should be can you produce more than 1 unit of resources (NPV) per unit that you use?

I think I agree, but I'm not confident about this, because this feels maybe too high-level? "1 unit" seems much more heterogeneous and less fungible when the resources we're thinking of are "people" or (worse) "conceptual breakthroughs" (as might be the case for cause prio work), and there are lots of ways that things are in practice pretty hard to compare, including but not limited to sign flips.

I should probably have just said that OP seems very interested in the last dollar problem (and that's ~60% of grantmaking capacity).

Agree with your comments on meta.

With cause prio research, I'd be trying to think about how much more effectively it lets us spend the portfolio, e.g. a 1% improvement to $420 million per year is worth about $4.2m per year.

cost-effectiveness x scale

So just total impact?

Yes, basically - if you're starting a new project, then all else equal, go for the one with highest potential total impact.

Instead, people often focus on setting up the most cost-effective project, which is a pretty different thing.

This isn't a complete model by any means, though :) Agree with what Lukas is saying below.

With a bunch of unrealistic assumptions (like constant cost-effectiveness), the counterfactual impact should be (impact/resource - opportunity cost/resource) * resource.

If impact/resource is much bigger than opportunity cost/resource (so that the latter is negligible), this is roughly equal to impact/resource * resource, which is one reading of cost-effectiveness * scale.

If so, assuming that resource = $ in this case, this roughly translates to the heuristic "if the opportunity cost of money isn't that high (compared to your project), you should optimise for total impact without thinking much about the monetary costs".
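
A toy numerical illustration of this heuristic; the function and all the numbers below are made up for the example, not taken from anywhere in the thread:

```python
# Toy illustration of the heuristic above; every number here is made up.
# 'impact' and 'opportunity cost' are in arbitrary impact units per dollar.
def counterfactual_impact(impact_per_dollar, opportunity_cost_per_dollar, dollars):
    return (impact_per_dollar - opportunity_cost_per_dollar) * dollars

project_size = 10_000_000        # a hypothetical $10m project
impact_rate = 10                 # impact units per dollar spent on the project
opportunity_cost_rate = 1        # impact units per dollar at the funding margin

exact = counterfactual_impact(impact_rate, opportunity_cost_rate, project_size)
approx = impact_rate * project_size   # cost-effectiveness * scale, ignoring opportunity cost
print(exact, approx)   # 90000000 vs 100000000: close, because the project far exceeds the margin
```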

Good point.

We could also read "impact/resource - opportunity cost/resource" as a cost-effectiveness estimate that takes opportunity costs into account. I think Charity Entrepreneurship has been optimizing for this (at least sometimes, based on the work I've seen in the animal space) and they refer to it as a cost-effectiveness estimate, but I think this is not typical in EA.

If impact/resource is much bigger than opportunity cost/resource (so that the latter is negligible), this is roughly equal to impact/resource * resource, which is one reading of cost-effectiveness * scale.

 

Also, this is looking more like cost-benefit analysis than cost-effectiveness analysis.

I think it's a really good point that there's something very different between research/policy orgs and orgs that deliver products and services at scale. I basically agree, but I'd slightly tweak this to
"It is very hard for a charity to scale to more than $100 million per year without delivering a physical product or service."

Because digital orgs/companies that deliver a digital service (GiveDirectly, Facebook/Google/etc.) obviously can scale to $100 million per year.

We can also spend a lot on advertising, which seems neither like a product nor a service.

  1. Ads for Veganuary, Challenge 22 and similar diet pledge programs might scale reasonably well (both within and across regions). I suppose they're also providing services, i.e. helping people go vegan with information, support groups, dieticians/nutritionists/other healthcare professionals, etc., but that's separate from the ads.
  2. Ads for documentaries, videos, books or articles to get people into EA or specific causes.

Can (and should) GiveDirectly's "service" actually scale up to >$100m/year? Obviously they can distribute >$100M/year, but I'm interested in whether they need or benefit from >$100m/year of employees, software, etc. (what in other subsectors of the nonprofit world would just be called "overhead"), without just tacking on unnecessary bloat.

Absolutely, that's a great point!

One point of reference to note is that the Bill and Melinda Gates Foundation had about $240 million in "management and general expenses" for about $5 billion in "total program expenses" (which I assume is grants made, but I haven't checked). Open Phil is relatively lean right now, but if the EA community hits the point where it is granting several billion dollars a year, it might make sense for our grantmaking institutions to also be operating at >$100M/year scale.

Another point of reference is that a number of US universities each spend >$1B/year on research. Now they probably have large sources of inefficiency and ways costs can be cut, but otoh probably also have ways that productivity can be increased by spending money.

And in terms of scope, it does naively seem like EA has enough sufficiently important questions that the equivalent of a single top-tier US university should not be quite enough to solve all of them.

So on balance I'm not convinced that EA research can't usefully absorb more (potentially much more) than $100M/year, but of course our current research institutions are not quite designed for scaling*

*see scalably using labor

I've thought about this on-and-off over the last 3 months, and my current tentative conclusion is that the succinct version of my comment ("research orgs can be mega-projects") is obviously true and the strong version of the OP is wrong.

In addition to the examples above (19 US universities with budgets >1B/year, existence of large foundations), we can also look at the most recent post about RTI, with a budget of nearly a billion. You can also look at a number of thinktanks, the past budget of Bell Labs, or the R&D departments of major corporations.

I think a more sophisticated objection (which I suspect Khorton doesn't believe) is that you just can't make a research org that big without most of it being subsumed by fake work with low moral import, see e.g. RTI, the other recent EAF post about Fraunhofer, academic research universities, etc.

(I also don't really believe the more sophisticated objection myself, because as an empirical matter humans do appear to be making progress on scientific matters, and I think a lot of said progress is made within large institutions?)

Concretely, I think both RP and Open Phil have a decent shot* of scaling up to >100M/year without substantial loss of rigor, mission creep, etc, though it might take many years and may not be worth it. I also believe this about Redwood Research, and (to a lesser extent) FHI if they decide to shed off the Oxford mantle.

*to be clear I'm not saying that this will happen by default, nor that it's necessarily advisable. Scaling is never easy, and it's certainly harder in the nonprofit world than in startups due to aspects like the lack of contact with reality, it being easier to do fake work, etc. 

I think EA orgs are relevantly different from most non-EA orgs, in that EA orgs often desire staff that have a detailed understanding of EA thinking - which few people have. By contrast, you typically don't need anything analogous to work at the Bill and Melinda Gates Foundation or at a university. That's a reason to believe it's harder to scale EA research orgs quickly.

I think that is a reason that we can't quickly  scale, but not a strong reason that we can't eventually reach a similar scale to Gates/universities.

I expect that as these fields mature, we'll break things down into better-defined problems that can be more effectively done by less aligned people. (I think  this is already happening to some extent - e.g. compare the type of AI timelines research needed 5 years ago vs. asking someone to do more research into the parameters of recent OP reports.)

From the outside, GiveWell's work also feels much more regimented and doable by less-aligned people, compared to the early heady days when Holden and Elie were hacking things out from first principles without even knowing about QALYs.

Potentially, but I think the debate largely concerned near-term megaprojects. Cf.:

people able to run big EA projects seem like one of our key bottlenecks right now ... I'm especially excited about finding people who could run $100m+ per year 'megaprojects'

And to the extent that we're discussing near-term megaprojects, quick scaling matters.

I see, I agree with that.

RTI and the Bill and Melinda Gates Foundation in your earlier comment are good counterexamples to what I said - I didn't expect to see a research organisation hiring quite that many people. I would be really surprised to see the organisations you listed grow to more than 5000 employees, but you're right that it's not impossible, especially for Open Philanthropy.

I don't think of Bell Labs as a counterexample because afaik they spent a lot of money on expensive equipment, rather than spending $50M+ just on staff, but maybe I'm wrong about that.

Note that at $100M/year, having >5000 employees means the average cost per employee is less than $20k.

Also I think Ben's post about scalability was primarily about cost-effective ways to deploy capital at scale, so number of employees isn't a major crux. 

I believe that in time EA research/analysis orgs both could and should spend > $100m pa.

There are many non-EA orgs whose staff largely sit at a desk, and who spend >$100m, and I believe an EA org could too.

Let's consider one example. Standard & Poor's (S&P) spent c. $3.8bn in 2020 (source: 2020 accounts). They produce ratings on companies, governments, etc. These ratings help answer the question: "if I lend this company money, will I get my money back?" Most major companies have a rating with S&P. (S&P also does other things like indices, however I'm sure the ratings bit alone spends >$100m p.a.)

S&P for charities?

Currently, very few analytical orgs in the EA space aim to have as broad a coverage of charities as S&P does of companies/governments/etc.

However an org which did this would have significant benefits.

  • They would have a broader appeal because they would be useful to so many more people; it could conceivably achieve the level of brand recognition that charity evaluators such as Charity Navigator have in the US (c. 50%, with a bit of rounding).
  • Lots of the impact here is the illegible impact that comes from being well-known and highly influential; this could lead to more major donors being attracted to EA-style donating, or many other things.
  • There's also the impact that could come from donating to higher impact things within a lower impact cause area, and the impact of influencing the charity sector to have more impact.

I find these arguments convincing enough that I founded an organisation (SoGive) to implement them.

At the margin, GiveWell is likely more cost-effective, however I'd allude to Ben's comments about cost-effectiveness x scale in a separate comment.

S&P for companies' impact?

Human activity, as measured by GDP (for all that measure's flaws), is split roughly 60%(ish) by for-profit companies, 30%(ish) by governments, and a little bit by other things (like charities).

  • As I have argued elsewhere, EA has likely neglected the 60% of human activity, and should be investing more in helping companies to have more positive impact (or avoiding their negative impact)
  • The charity CDP spent £16.5m (c.$23m) in the year to March 2019 (source). They primarily focus on the question of how much carbon emissions are associated with each company. The bigger question of how much overall impact is associated with each company would no doubt require a substantially larger organisation, spending at least an order of magnitude more than the c$23m spent by CDP.

(Note: I haven't thought very carefully about whether "S&P for companies' impact" really is a high-impact project)

Interesting thoughts Sanjay, and I agree that we neglect the 60% for-profit sector.

My biggest concern with your solution in one sentence: as long as people mostly care about money, they want to act on advice that maximises their financial return. Of course we could "subsidise" a service like that for social profit, but as long as it is not in the system's interest to act on our advice, it's useless.

So changing the incentives of the system (through policy advocacy) or movement building (expanding the moral circle) seem more promising from this viewpoint. On the other hand: once enough people are really interested in social profits, we need to have the insight into which companies do good and which do not. Maybe it's more a question of the right timing...

When I started thinking about these issues last year, my thinking was pretty similar to what you said. 

I thought about it and considered that for the biggest risks, investors may have a selfish incentive to model and manage the impacts that their companies have on the wider world -- if only because the wider world includes the rest of their own portfolio!

It turns out I was not the first to think of this concept, and its name is Universal Ownership. (I've described it on the forum here)

Universal Ownership doesn't go far enough, in my view, but it's a step forward compared to where we are today, and gives people an incentive to care about social impacts (or social "profits")

When Benjamin_Todd wanted to encourage new projects by mentioning $100M+ size orgs and CSET, my take was that he wanted to increase awareness of an important class of orgs that can now be built.

In this spirit, I think there might be some perspectives not mentioned yet in the subsequent discussions:

 

1. Projects with $100m+ of required capital/talent have different patterns of founding and success

There may be reasons why building such $100m+ projects is different both from the many smaller "hits-based" projects Open Phil funds (as a high chance of failure is unacceptable) and from GiveWell-style interventions.

One reason is that orgs like OpenAI and CSET require such scale just to get started, e.g. to interest the people involved:

Here are examples of members of the founding team of OpenAI and CSET:

CSET - Jason Matheny - https://en.wikipedia.org/wiki/Jason_Gaverick_Matheny

OpenAI - Sam Altman - https://en.wikipedia.org/wiki/Sam_Altman

If you look at these profiles, I think you can infer that if you have an org that is capped at $10M, or has to internalize a GiveWell-style cost-effectiveness aesthetic, this wouldn't work and nothing would be founded. The people wouldn't be interested (as another datapoint, see $1M salaries at OpenAI).

 

2. Skillset and training patterns might differ from previous patterns used in the EA movement

I think it's important to add nuance to an 80,000 Hours-style article of "get $100m+ org skills":

Being an executive at a charity that delivers a tangible product requires different skills to running a research or advocacy charity. A smaller charity will likely need to recruit all-rounders who are pretty good at strategy, finance, communications and more. In contrast, in a $100 million organization you will also need people with specialized skills and experience in areas like negotiating contracts or managing supply chains.

Note that being good at the most senior levels usually involves mastering or being fluent in many smaller, lower status skills. 

For evidence, when you work together with them, you often see senior leaders flaunting or actively using these skills, when they don't apparently have to. 

This is because the gears-level knowledge improves judgement of all decisions (e.g. "kicking tires"/"tasting the soup"). 

Also, the most important skill of senior leaders is fostering and selecting staff and other leaders, and again, gears-level observation of these skills is essential to such judgement.

specialized skills and experience in areas like negotiating contracts or managing supply chains.

Note that in a $100M+ org, these specialized skills can be fungible in a way that "communication" or "strategy" is not.

If you want to start or join an EA charity that can scale to $100 million per year, you should consider developing skills in managing large-scale projects in industry, government or another large charity in addition to building relationships and experience within the EA community.

From the primal motivation of impact, and under the premise in Benjamin_Todd's statement, I think we would expect the goal is to try to create these big projects within 3 to 5 years.

Some of these skills, especially founding a 100M+ org, would be extremely difficult to acquire within this time. 

There are other reasons to be cautious:

  • Note that approximately every ambitious person wants these skills and this profile, and this set of people is immensely larger than the set of people with the more specialized skill sets (ML, science, economics, policy) that have been encouraged in the past.
  • The skills are hard to observe (outputs like papers or talks are far less substantive, and blogging/internet discussion is often looked down on).
  • The skillsets and characters can be orthogonal or opposed to EA traits such as conscientiousness or truth seeking.
  • Related to the above, free-riding and other behavior that pools with altruism is often used to mask very conventional ambition (see Theranos, and in some points of view, approximately every SV startup).

I guess my point is that I don't want to see EAs get Rickon'd by running in a straight line in some consequence of these discussions.

 

Note that underlying all of this is a worldview that views founder effects/relationships/leadership as critical and the founders as not fungible. 

It's important to explicitly notice this, as this worldview may be very valid for some interventions but is not for others. 

It is easy for these worldviews to spill over harmfully, especially if packaged with the high status we might expect to be associated with new EA mega projects.

 

3. Pools of EA leaders already exist

I also think there exists a large pool of EA-aligned people (across all cause areas/worldviews) who have the judgement to lead such orgs but may not feel fully comfortable creating and carrying them from scratch.

Expanding on this, I mean that, conditional on seeing an org with them in the top role, I would trust the org and the alignment. However, these people may not want to work with the necessary intensity or deal with the operational and political issues (e.g. putting down activist revolts, handling noxious patterns such as "let fires burn", and winning two games of funding and impact).

This might leave open important opportunities related to training and other areas of support.

I strong upvoted this because I think it's really important to consider in what situations you should NOT try to develop these kinds of skills!

There may be reasons why building such $100m+ projects is different both from the many smaller "hits-based" projects Open Phil funds (as a high chance of failure is unacceptable) and from GiveWell-style interventions.

One reason is that orgs like OpenAI and CSET require such scale just to get started, e.g. to interest the people involved

This sounds like CSET is a $100m+ project. Their Open Phil grant was for $11m/year for 5 years, and Wikipedia says they got a couple of million from other sources, so my guess is they're currently spending something like $10m-$20m/year.

Yes, I wouldn't say CSET is a mega project, though more CSET-like things would also be amazing.

Thank you for pointing this out.

You are right, and I think maybe even a reasonable guess is that CSET funding is starting out at less than $10M a year.

Many tech companies easily have budgets above $100M a year, without shipping any physical product. Agree with the general premise of this, but the physical aspect seems overstated. Many service and software companies can easily scale to $100M plus.

Edit: Seems like Haydn already made my point better than I did.

Interesting analysis. One thing to note is that Anthropic raised $124m at a reported valuation of $845m, and I guess that their work isn't that "physical" in a narrow sense. (But potentially such orgs are still in the spirit of your analysis.)

I think AI research on large models is quite different to the kind of research meant by this post, because it requires large amounts of compute, which is physical (though I guess not exactly a product)

Similarly biotech research or high energy physics research is really expensive, and mostly because of physical world stuff

Based on vaguely remembered hearsay, my heuristic has been that the large AI  labs like DeepMind and OpenAI spend roughly as much on compute as they do on people, which would make for a ~2x increase in costs. Googling around doesn't immediately get me any great sources, although this page says "Cloud computing services are a major cost for OpenAI, which spent $7.9 million on cloud computing in the 2017 tax year, or about a quarter of its total functional expenses for that year".

I'd be curious to get a better estimate, if anyone knows anything relevant.

I strongly agree with the premise of this post and really like the analysis, but feel unhappy with the strong focus on physical products. I think we should instead think about a broader set of scalable ways to usefully spend money, including but not limited to physical products. E.g. scholarships aren't a physical product, but large scholarship programs could plausibly scale to >$100 million.

(Perhaps this has been said already; I haven't bothered reading all the comments.)

Yes, it has been pointed out; cf.:

https://forum.effectivealtruism.org/posts/Kwj6TENxsNhgSzwvD/most-research-advocacy-charities-are-not-scalable?commentId=mdDxjftDfeZX2AQoZ

https://forum.effectivealtruism.org/posts/Kwj6TENxsNhgSzwvD/most-research-advocacy-charities-are-not-scalable?commentId=xpwxjvimgQe84gcs4

There is a discussion of possible scalable ideas here. Feels like a useful counterpart to this discussion. 

https://forum.effectivealtruism.org/posts/ckcoSe3CS2n3BW3aT/what-are-some-usd100m-ea-megaprojects-that-should-happen?commentId=eZoKvvLL8H3czGkAw

To be clear this is just a jumble of random thoughts I have, not a clear plan or a deep research topic or anything. I'm just imagining something vaguely in the direction of being an activist shareholder, except your pet cause is alignment rather than eg environmental concerns or board room diversity. 

I don't have well-formed views here, but some quick notes:

Investors and researchers who don't believe in your stances or leadership can probably exit and form new companies, and if they do believe, you don't necessarily need to buy shares to get them to listen.

  1. There are transition costs. Forming a new company is nontrivial.
  2. People aren't going to just change companies immediately because they disagree with your strategic direction a little, so there is soft stuff you can do.

Even within the EA community there's disagreement on safety/capabilities tradeoffs, or what safety work actually works. I wonder how you'll pick good leadership for this that all of the EA community is comfortable with.

The bar isn't "an amazing thing with consensus opinion that it's amazing"; the bar is that most decisionmakers think it's better than the status quo, or more precisely, better than the "status quo + benefits of offsetting CO2".

IMO, the main thing holding back scaling is EA's (in)ability to identify good "shovel-ready" ideas and talent within the community and allocate funds appropriately. I think this is a very general problem that we should be devoting more resources to. Related problems are training and credentialing, and solving common-good problems within the EA community.

I'm probably not articulating all of this very well, but basically I think EA should focus a lot more on figuring out how to operate effectively, make collective decisions, and distribute resources internally.  

These are very general problems that haven't been solved very well outside of EA either.  But the EA community still probably has a lot to learn from orgs/people outside EA about this.  If we can make progress here, it can scale outside of the EA community as well.

Other than regranting, GiveWell's largest expense in 2020 was staff salaries - they spent just over $3 million on salaries. In total, they spent about $6 million (excluding regranting). GiveWell would have to grow to 20x the size in order to become a $100 million 'megaproject' [1].

I don't see why we should treat the funds they regrant differently from their salary expenses (in this context). GiveWell is a good counterexample to the claim that "It is very hard for a charity to scale to more than $100 million per year without delivering a physical product." GiveWell could easily use an additional $100M (e.g. by simply regranting it to GiveDirectly).

I wanted to avoid double-counting, so I didn't want to say "both GiveWell and GiveDirectly can absorb $100M" when actually it's the same $100M - that's why I excluded regranting
