
The effective altruism community is very fond of grants. This makes a lot of sense, as grants provide a way for our community to distribute its capital. The last FTX round had an extraordinary number of applications, and only a small percentage of those were funded. For every funded application, dozens of hours were spent writing unsuccessful applications, time which would otherwise have been spent more productively.

This made me wonder. Is running all these grants actually a good use of time and resources? How would we even know?

To clarify my thinking, I made a Guesstimate model that lets you plug in estimates of the relevant variables and see how the cost-effectiveness of a grant programme changes.

In this post I will:
1. Briefly explain the model and my takeaways from creating it
2. Conclude that overhead is not as big a deal as I thought it was

About the model

The model is pretty simple. It assumes there are the following costs associated with grantmaking:

  1. Money given out in grants
  2. Time spent writing applications
  3. Time spent evaluating applications
  4. Time spent administering and advising grants

The value of each grant is its cost-effectiveness multiplied by the grant's size.

The more grants are approved, the less cost-effective the average grant becomes.

Finally, I divide the impact of the approved grants by the costs of running the grant programme, resulting in a measure of cost-effectiveness. If this comes out higher than the counterfactual, for example a standard GiveWell charity, then the grant programme is worth running.

You can view and play around with the model here: https://www.getguesstimate.com/models/20444
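For readers who prefer code to Guesstimate, here is a minimal sketch of the model's structure in Python. This is not the Guesstimate model itself, and every number below is an illustrative placeholder rather than one of my actual estimates:

```python
# Minimal sketch of the model's structure. All numbers are illustrative
# placeholders, not the estimates used in the actual Guesstimate model.

def programme_cost_effectiveness(
    n_applications=1000,          # applications received
    n_funded=75,                  # grants approved
    avg_grant_size=200_000,       # dollars per funded grant
    hours_per_application=20,     # applicant time spent writing each application
    hours_per_evaluation=5,       # evaluator time spent per application
    admin_hours_per_grant=40,     # time administering and advising each funded grant
    hourly_cost=50,               # dollar cost of an hour of anyone's time
    avg_grant_effectiveness=1.5,  # impact per dollar of the average funded grant,
                                  # in "GiveWell units" (1.0 = a GiveWell charity)
):
    # Costs of running the programme
    money_granted = n_funded * avg_grant_size
    application_cost = n_applications * hours_per_application * hourly_cost
    evaluation_cost = n_applications * hours_per_evaluation * hourly_cost
    admin_cost = n_funded * admin_hours_per_grant * hourly_cost
    total_cost = money_granted + application_cost + evaluation_cost + admin_cost

    # Value produced: each grant's size times its cost-effectiveness
    total_impact = money_granted * avg_grant_effectiveness

    # Impact per dollar of total programme cost; above 1.0 the programme
    # beats simply donating the same money to the counterfactual charity
    return total_impact / total_cost

print(programme_cost_effectiveness())
```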

Lessons on overhead to draw from the model

Can we learn anything from this model? That's debatable! Nevertheless, here are my own takeaways.

Overhead is not a big deal for large grants

As the average size and impact of grants increases, the overhead costs of grantmaking quickly start paling in comparison. Giving out 75 grants with an average size of $200,000 amounts to $15 million given away. In turn, assuming my estimates aren't way off, the cost of writing and evaluating 1,000 applications is only about $1 million. The size and number of grants given don't have to be very large before the overhead of writing, evaluating, and administering grants becomes a negligible factor.

My initial worry, that grants could be a near zero-sum game due to time spent on applications, turns out not to be nearly as big a deal as I had anticipated.

Another takeaway from this is that grantmakers should spend evaluation time in proportion to grant size. The smaller the grant, the larger a percentage of the cost is taken up by overhead. For cheap grants, grantmakers should aim to make a decision quickly and move on.

Overhead might sometimes be a problem for smaller grant programmes

If you reduce the average size of the grants given out to $25k, overhead starts to look closer to a third of the total expenses.

In such circumstances, thinking about ways to reduce overhead may yield modest increases in the cost-effectiveness of the grant programme.
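As a back-of-envelope check on both of the claims above, here is the same comparison in a few lines of Python, using the rough figures from this post (75 funded grants and roughly $1 million of overhead across 1,000 applications):

```python
# Overhead as a share of total programme spend, using the post's rough figures:
# 75 funded grants and about $1M of overhead across 1,000 applications.
overhead = 1_000_000

for avg_grant_size in (200_000, 25_000):
    money_granted = 75 * avg_grant_size
    share = overhead / (money_granted + overhead)
    print(f"average grant ${avg_grant_size:,}: overhead is {share:.0%} of total spend")

# average grant $200,000: overhead is 6% of total spend
# average grant $25,000: overhead is 35% of total spend
```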

Overhead is overshadowed by the expected value of each application

Much more important than overhead is whether you expect the grants you're giving out to actually be impactful. The assumptions going into the distribution of impact across grants result in wild swings in the cost-effectiveness of the grant programme.

The expected impact of the average grant given out is determined by three factors:

  1. The distribution of the expected value of applications, if funded
  2. Grantmakers' ability to pick applicants from the top of this distribution
  3. The percentage of applications they decide to fund

Is each application equally likely to have a high impact? Surely not. How much better should we expect the best applications to be than the median? The more fat-tailed the distribution, the more important the grantmaker's ability to select the best applications becomes.

In the model I assume a high-sigma (fat-tailed) lognormal distribution of charity effectiveness. With such a distribution you should expect most grants given to be much less effective than GiveWell, with a few heavy hitters that more than make up for it. The exact parameters of this distribution are extremely important. How much more cost-effective will the top percentile be than the average GiveWell charity? 2x, 10x, 100x, more? How good are grantmakers at picking the winners? If you only expect the top applications to be twice as effective as GiveWell and your grantmakers can't reliably identify them, the grant programme is probably not worth running.
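To make the point concrete, here is a toy simulation of how the sigma and the grantmakers' selection ability interact. This is my own construction rather than the Guesstimate model, and the noise parameter is an arbitrary way of representing imperfect selection:

```python
import numpy as np

rng = np.random.default_rng(0)

n_applications = 1000
n_funded = 75
sigma = 1.5       # fat-tailedness of the impact distribution
eval_noise = 1.0  # grantmaker estimation error; 0 would mean perfect selection

# True impact per dollar of each application, in "GiveWell units"
true_impact = rng.lognormal(mean=0.0, sigma=sigma, size=n_applications)

# Grantmakers rank applications by a noisy estimate of that impact
# and fund the ones that look best to them
estimate = true_impact * rng.lognormal(mean=0.0, sigma=eval_noise, size=n_applications)
funded = np.argsort(estimate)[-n_funded:]

print("mean impact, all applications: ", round(true_impact.mean(), 2))
print("mean impact, funded grants:    ", round(true_impact[funded].mean(), 2))
print("mean impact, perfect selection:", round(np.sort(true_impact)[-n_funded:].mean(), 2))
```

Lowering sigma or raising eval_noise collapses the gap between the funded grants and the average application, which is exactly the regime where the programme stops looking worth running.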

If my distribution reflects reality, grantmakers should pass on proposals that don't have the potential for a high upside. But the only basis I have for this choice of distribution is my intuition. I'd be curious to see actual data on the distribution of impact from grants made by EA organisations. I imagine an organisation's list of which grants it thinks had what expected value is sensitive and probably best kept private, but maybe they would be able to share anonymised distributions.

Grantmaking is cost-effective, sometimes, maybe.

When I plug in my best estimates of each variable for a programme giving out expensive grants, the cost-effectiveness of grantmaking is competitive with deworming. Not bad!

But the model is a super flawed representation of reality:

  • There are a bunch of details I skipped modelling.
  • A few things are modelled incorrectly (grants can have negative EV, for example!).
  • There are also several estimates that definitely should be broken into multiple components.
  • While global poverty has a clear counterfactual way to spend money, it's less clear what the counterfactual spend would be for existential risk reduction or animal welfare.

I wouldn't conclude much about cost-effectiveness of grantmaking from the model.

Overhead is rarely the deciding factor

My prior going into this was that the EA community spending thousands of hours writing applications seemed like a massive waste of everyone's time. Now I'm less concerned about it.

I still have a few reservations; for example, I think modelling using hourly wages instead of EV per hour probably results in a significant undervaluation of an EA's time.

Nevertheless, it's nice to know that grants at least aren't obviously lighting cash on fire.

Comments

I think the appropriate cost to use for evaluators, applicants, and admins is the opportunity cost of their time. For many such people this would be considerably higher than their wage and outside the ranges used in the model. I don't know that this would change your conclusion, but it could significantly affect the numbers.

This could be a good submission for the criticism contest. Clean, tightly reasoned, not going in with the bottom line written.

The important statistic for cost effectiveness is really the cost effectiveness of the marginal grant. If the marginal grant is very cost effective, then money is being left on the table and we should be awarding more grants! Conversely, if the marginal grant is very cost ineffective, then grant making is inefficiently large even if the average grant is cost effective. In that situation we could improve cost-effectiveness by eliminating some of those marginal grants.

The distance between the marginal grant and the average grant is increasing in the fat-tailedness of the distribution, so for very fat tailed distributions, this difference is extremely important.

Thanks for doing this; I think this is useful. It feels vaguely akin to Marius's recent question of the optimal ratio of mentorship to direct work. More explicit estimates of these kinds of questions would be useful.

Blonergan's comment is good, though - and it shows the importance of trying to estimate the value of people's time in dollars.

This kind of reminds me of the music biz... A&R reps would scour clubs for bands and sign them to a record deal. Most of the bands would lose money, but a few would hit it big and pay for the rest. It's similar with VC funds funding startups. It's the nature of the beast. In this way you can also understand that there will always be less and more effective charities, and while seeking to be more efficient is good, the lesser still play their role by populating the range and hopefully evolving forward... for every starting player there always needs to be a bench.
