

Summary

I have consolidated publicly available grants data from EA organizations into a spreadsheet, which I intend to update periodically[1]. Totals pictured below.

 

Figure 1: publicly available grants by recipient category
Figure 2: publicly available grants by source

(edit: swapped color palette to make graphs easier to read)

Observations

  • $2.6Bn in grants on record since 2012, about 63% of which went to Global Health.
  • With the addition of FTX and impressive fundraising by GiveWell, Animal Welfare looks even more neglected in relative terms—effective animal charities will likely receive something like 5% of EA funding in 2022, the smallest figure since 2015 by a wide margin.

Notes on the data

NB: This is just one observer's tally of public data. Sources are cited in the spreadsheet; I am happy to correct any errors as they are pointed out.

GiveWell:

  • GiveWell uses a 'metrics year' starting 1 Feb (all other sources were tabulated by calendar year).
  • GiveWell started breaking out 'funds directed' vs 'funds raised' for metrics year 2021. Previous years refer to 'money moved', which is close but not exactly the same.
  • I have excluded funds directed through GiveWell by Open Phil and EA Funds, as those are already included in this data set.
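Since every other source here is tabulated by calendar year, reconciling with GiveWell means shifting January grants back one year. A minimal sketch of that bucketing (my own illustration, not the spreadsheet's actual formula):

```python
from datetime import date

def givewell_metrics_year(d: date) -> int:
    """GiveWell's metrics year runs 1 Feb through 31 Jan, so a January
    grant belongs to the previous metrics year."""
    return d.year if d.month >= 2 else d.year - 1

# A January 2022 grant falls in metrics year 2021, not calendar year 2022.
assert givewell_metrics_year(date(2022, 1, 15)) == 2021
assert givewell_metrics_year(date(2021, 2, 1)) == 2021
```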

Open Phil

  • Open Phil labels their grants using 25 'focus areas'. My subjective mapping to broader cause area is laid out in the spreadsheet.
  • Note that about 20% of funds granted by Open Phil have gone to 'other' areas such as Criminal Justice Reform; these are omitted from the summary figures but still tabulated elsewhere in the spreadsheet.
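As a toy illustration of such a mapping (the focus-area labels below are examples from the public grants database; the authoritative mapping is the one in the spreadsheet):

```python
# Illustrative only: the actual 25-label mapping lives in the spreadsheet.
FOCUS_AREA_TO_CAUSE = {
    "Global Health & Development": "Global Health",
    "Farm Animal Welfare": "Animal Welfare",
    "Potential Risks from Advanced AI": "Longtermism & Catastrophic Risk",
    "Biosecurity & Pandemic Preparedness": "Longtermism & Catastrophic Risk",
    "Criminal Justice Reform": "Other",  # ~20% of Open Phil grants land in 'Other'
}

def cause_area(focus_area: str) -> str:
    # Unmapped focus areas default to 'Other' rather than silently dropping.
    return FOCUS_AREA_TO_CAUSE.get(focus_area, "Other")
```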

General

  • 2022 estimates are a bit speculative, but a reasonable guess as to how funding will look with the addition of the Future Fund.
  • The total Global Health figure for 2021 (~$400M) looks surprisingly low considering, e.g., that GiveWell just reported over $500M in funds directed for 2021 (including Open Phil and EA Funds). I think this is accounted for by (a) GiveWell's metrics year extending through Jan '22 (Open Phil reported $26M of Global Health grants that month), and (b) the possibility that some of this was 'directed' (i.e. a firm commitment of $X to org Y) by Open Phil in 2021 but paid out or recorded in the grants database months later; I am still seeking explicit confirmation here.

Future work

If there is any presently available data that seems worth adding, let me know and I may consider it.

I may be interested in a more comprehensive analysis on this topic, e.g. using the full budget of every GiveWell-recommended charity. I'd be interested to hear if anyone has access to this type of data, or if this type of project seems particularly valuable.

 

Thanks to Niel Bowerman for helpful comments

  1. Currently the bottleneck to synchronizing data is the GiveWell annual metrics report, which is typically published in the second half of the following year. I may update more often if that is useful.

Comments



Not sure it's worth the effort, but I'd find the charts easier to read if you used a wider variety of colors.


+1, I'd also recommend using colours that are accessible for people with colour vision deficiency

The viridis package is good for colourblindness and is also pretty: https://cran.r-project.org/web/packages/viridis/index.html

I find this website helpful for picking colorblind friendly color schemes: https://colorbrewer2.org/
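For anyone doing this in Python rather than R, matplotlib ships the same viridis palette; a minimal sketch with made-up totals (the real figures live in the spreadsheet):

```python
import matplotlib.pyplot as plt
import numpy as np

# Made-up yearly totals ($M), purely to demonstrate the palette.
years = np.arange(2015, 2023)
causes = ["Global Health", "Animal Welfare", "Longtermism", "EA Infrastructure"]
totals = np.random.default_rng(0).uniform(10, 400, size=(len(causes), len(years)))

# Sample evenly from viridis: perceptually uniform and readable under
# common colour vision deficiencies.
colors = plt.cm.viridis(np.linspace(0, 0.9, len(causes)))

fig, ax = plt.subplots()
ax.stackplot(years, totals, labels=causes, colors=colors)
ax.set_ylabel("Grants ($M)")
ax.legend(loc="upper left")
plt.show()
```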

Sure seems like animal welfare could use some more spending!

Worth joining forces with Hamish and effectivealtruismdata.com (see post).

This stuff should be able to be automated, I think/hope.

I also added some tags to this post. With this stuff it's good to coordinate where we can.

Many of the sources used here can't be automated, but the spreadsheet is simple to update.

Fair point, but it still may be worth joining forces or coordinating with Hamish.

A data-point on this: today I was looking for this graph and couldn't find it. I found effectivealtruismdata.com, but sadly it didn't have these graphs on it. So it would be cool to have them on there, or at least link to this post from there!

Hamish applied for funding for that website but was rejected. Seems like something we'd pay $100k to exist, right?

This data is very valuable - thanks! Saves me the time of collecting it (or asking someone else to do so). If possible, I think it would be very helpful to also distinguish between biorisk, nuclear risk, AI, and other -- I'd be really curious how this is distributed.

Also I think it could be useful to add data from Longview and Effective Giving, if they ever make data available.

Me too, same for other areas as well!

Kind of bad we didn't have this overview before. Seems very basic to have! So thanks for doing it

Estimates for Open Phil:

 

Thanks! This is for 2014-2022? If so, does it include a 2022 projection?

2012-present (the first longtermist grant was in 2015); no projection.

In light of this, it's interesting to look back at the March 2021 post by Applied Divinity Studies "Why Hasn't Effective Altruism Grown Since 2015?" The post (reasonably enough) used money moved as a key metric of EA growth, and argued that EA as a movement had been stagnating. The massive increase in EA-aligned funds in the past couple years would seem to suggest otherwise.

 (See also the discussion in GiveWell's 2021 year-end report, which noted: "In 2021, GiveWell continued to enjoy a huge amount of growth in the funds we were able to raise. Overall, our funds raised grew by over 100%, from $293 million in 2020 to $595 million in 2021—the largest absolute increase in funding we've ever experienced.") 

Thanks for doing and sharing this, really interesting!

Random curiosity, how did your spreadsheet make it into the time.com article about EA?

Naina (the Time journalist) and I were chatting about the aggregate funding data but couldn’t quickly find a source. I connected Naina and Tyler to work on this together. Tyler pulled together the data in part for the Time article.

Are the grants adjusted for inflation? If not, doing it might be a good idea, such that the values are more comparable across years.

I'm not sure that inflation makes sense—this money isn't being spent on bread :) I think most of these funds would alternatively be invested, and returning above inflation on average.

[This comment is no longer endorsed by its author]

From Investopedia, "inflation is the rate at which prices for goods and services rise". So my understanding is that it is a broad measure of the purchasing power of money, and matters even if the money is not (directly) going towards buying food.

It seems to me like these amounts would be most useful if they were adjusted for inflation (alternatively, if you want to be fancy, possibly even adjusted for an index of the wages of knowledge workers). As it is, effective funding dispersed in the early years is being understated.

Yes, sorry, on reflection that seems totally reasonable

Can this spreadsheet be linked on a page on the EA website?

I know I'm late to the party, but thanks for the data!

One question: Why is your number for 2019 so different from 80K's estimates (https://80000hours.org/2021/08/effective-altruism-allocation-resources-cause-areas/)?

Both posts contain a more detailed breakdown of inputs, but in short:

  1. 80k seems to include every entry in the Open Phil grants database, whereas my sheet filters out items such as criminal justice reform that don't map to the type of funding I'm attempting to track.
  2. They also add a couple of 'best guess' terms to estimate unknown/undocumented funding sources; I do not.

Ooh I <3 data. Late to the party, but I cleaned up the raw data by throwing away a small amount of information and stacking into a few columns (variable names self-explanatory). I also adjusted dollar amounts for inflation using that month's CPI. Don't count on this to be absolutely solid just yet!
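A minimal sketch of that CPI adjustment (column names and index values are illustrative; monthly CPI-U figures can be pulled from, e.g., FRED series CPIAUCSL):

```python
import pandas as pd

# Illustrative layout; the real sheet's columns differ.
grants = pd.DataFrame({
    "month": pd.to_datetime(["2016-03-01", "2021-07-01"]),
    "amount_usd": [1_000_000, 1_000_000],
})

# Approximate monthly CPI-U index values (FRED series CPIAUCSL).
cpi = pd.Series({
    pd.Timestamp("2016-03-01"): 238.1,
    pd.Timestamp("2021-07-01"): 272.2,
    pd.Timestamp("2022-08-01"): 296.2,
})

# Express every grant in Aug-2022 dollars: multiply by base CPI / grant-month CPI.
base = cpi[pd.Timestamp("2022-08-01")]
grants["amount_real"] = grants["amount_usd"] * base / grants["month"].map(cpi)
print(grants)
```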

I must have messed with some settings because I can't embed links, but:

Neat! Do you want to make a graph using the inflation-adjusted data?

Thank you for putting this together! It's interesting to think about overall trends in volume and direction of giving.

Effective animal charities will likely receive something like 5% of EA funding in 2022, the smallest figure since 2015 by a wide margin.

It would be interesting, though perhaps difficult, to see an analysis like this account for multi-year grants, assuming it isn't already. For instance, part of why animal welfare funding might look so much larger in 2021 compared to 2022 is that Open Phil, the biggest EA funder in the space as far as I know, made multiple large grants in 2021 that pay out over the course of two to three years (e.g., GFI, THL).

So, if I'm interpreting this correctly, lower 2022 numbers for animal welfare might not reflect a deprioritization or funding gap, but just multi-year grants for the largest organizations having been made recently.
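As a sketch of what that adjustment could look like, with made-up amounts and payout schedules (the real figures would come from Open Phil's grants database):

```python
import pandas as pd

# Made-up amounts and payout schedules, for illustration only.
grants = pd.DataFrame({
    "org": ["GFI", "THL"],
    "announced": [2021, 2021],
    "amount_musd": [12.0, 10.0],
    "payout_years": [3, 2],
})

# Spread each grant evenly over its payout window instead of booking
# the full amount in the announcement year.
rows = [
    {"org": g.org, "year": g.announced + i, "amount_musd": g.amount_musd / g.payout_years}
    for g in grants.itertuples()
    for i in range(g.payout_years)
]
amortized = pd.DataFrame(rows)
print(amortized.groupby("year")["amount_musd"].sum())
```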

I'm a bit surprised that there's ~200M in longtermist grantmaking in the first 8 months of 2022 alone! Where is most of that money going?

(I feel a bit dense in asking this, but where are you getting ~200M in the first 8 months from? In the sheet it's 139.5M; the ~200M is a projected estimate.)

The 139.5M for LTXR splits into 

  • 97M from FTX: 30M to biosecurity, 20M to AI, 16M to "other", 10M to "empowering exceptional people", 8M to epistemic institutions, 7M to econ growth, 2M to great power relations -- all figures from this post
  • 42.5M from Open Phil (raw data):  26.2M to "longtermism" broadly construed, 14.4M to biosecurity (10M being regrants), 1.8M to AI. The longtermism category has some interesting grants, like 3M to Kurzgesagt for making short-form video content

What jumps out to me is (1) FTX's LTXR grants seem broader than OPP's (2) FTX has so far granted 10x more to AI stuff than OPP. Looking into the latter a bit, OPP's 1.8M went to 1 grant (Open Phil AI Fellowship — 2022 Class supporting 11 ML researchers over 5 years) while FTX's 20M went to 76 grants, both big (e.g. 5M to Ought to build a language-model based research assistant) and small (e.g. 50k to one person to support 6 months of AI safety research). My sense is this is driven by some combination of VoI-maxing orientation and fast grant decisions (inspired by Fast Grants -- see this comment for more commentary on this parallel)

(I feel a bit dense in asking this, but where are you getting ~200M in the first 8 months from? In the sheet it's 139.5M; the ~200M is a projected estimate.)

I was just eyeballing the graph! Thanks, your notes made sense. Cool stuff.
 

FTX has so far granted 10x more to AI stuff than OPP

This is not true; sorry, the Open Phil database labels are a bit misleading.

It appears that there is a nested structure to a couple of the Focus Areas, where e.g. 'Potential Risks from Advanced AI' is a subset of 'Longtermism', and when downloading the database only one tag is included. So for example, this one grant alone from March '22 was over $13M, with both tags applied, and shows up in the .csv as only 'Longtermism'. Edit: this is now flagged more prominently in the spreadsheet.

BTW I'm guessing that you can't "project" OP giving by multiplying grants by 12/8, because OP has a pretty big delay in announcing their grants.

Yeah it looked like grants had been announced roughly through June, so the methodology here was to divide by proportion dated Jan-Jun in prior years (0.49)
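Concretely, that projection is a one-line scaling; a sketch using the $42.5M Open Phil LTXR figure quoted earlier in this thread:

```python
# Scale grants announced through June by the historical share of each
# year's grants dated Jan-Jun (~0.49), per the methodology above.
announced_through_june = 42.5  # $M, Open Phil LTXR grants announced YTD
jan_jun_share = 0.49

projected_full_year = announced_through_june / jan_jun_share
print(f"~${projected_full_year:.0f}M projected for the full year")  # ~$87M
```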

Thanks, this is great!

I wonder why Open Philanthropy has not made available a similar analysis.

I may be interested in a more comprehensive analysis on this topic, e.g. using the full budget of every GiveWell-recommended charity. 

I don't know about the full budget, but the Grants by funding opportunity tab in GiveWell's sheet on directed grants with impact information seems like a good start, although it only has figures from 2020 onwards. The Lifetime impact tab has figures from 2009, but not at grant-level granularity, just year / charity aggregated figures.

Thanks for putting this data together! Very useful

The growth in EA Infrastructure is huge in 2022. This should translate into an impressive takeover of the EA community and funding very soon. I’m looking forward to seeing the evolution in upcoming years.
Do we have any clue on why the donors started to pay so much attention to EA Infrastructure in 2022 when compared to the previous eight years?

The biggest factor is the arrival of FTX, which has given more to infrastructure YTD than all others combined over the prior two years.

Thanks for your response Tyler!

Shouldn't these FTX donations be included under "Longtermism and Catastrophic Risk Prevention" instead of under "EA infrastructure"? Maybe I'm misinterpreting the Cause Areas.

No, he's right. FTX gave $34M in the linked report to "Effective Altruism", while "EA Meta" in 2020 and 2021 was "only" $30.4M.

EDIT: Also LOL at Tyler's comment being downvoted before my explanation.

Thanks Linch for guiding me to the exact place where to find the information. Sorry that I didn't realize the information was already there from the start.
I put it together to understand where the additional $65.6M is going.

Welcome! I think there's a calculation error. How are you getting 41.6 for FTX? 7M+3M+24M = 34M.  :)

In the spreadsheet there are two rows: "2022" and "2022 (est.)". I assume the "2022" is actual year to date and the "2022 (est.)" is the expectation for year end. I guess the detailed explanation is somewhere but I don't have the time now to search for it.

For FTX Meta the value for "2022" is 35, and the value for "2022 (est.)" is 41.6. I see two options for the mismatch between 34 (7+3+24) in my post and 35 in the spreadsheet:
A) 35 is a typo and the value should be 34. I added a comment in the spreadsheet.
B) 35 is the correct number and there is another $1M FTX category that can be considered Meta. Maybe "Research That Can Help Us Improve"?

In reality, $1M is not that important.

I also can't understand why Tyler's comment was downvoted.

This is hugely useful, thanks for putting it together!


Appreciate the effort that has gone into this.

Agree with others that automating this across the relevant organisations would be a good idea and probably very simple to implement.

At some point if you have enough donors, coordination becomes important.

Can someone explain what EA infrastructure covers?
