
Overall Open Philanthropy funding

Open Philanthropy’s allocation of funding through time looks as follows:

Bar graph of OpenPhil allocation by year. Global health leads for most years. Catastrophic risks are usually second since 2017. Overall spend increases over time.

Dustin Moskovitz’s wealth looks, per Bloomberg, like this:

Line chart of Dustin Moskovitz's wealth over time, with a dip in 2019 and a peak in 2021.

If we plot the two together, we don’t see that much of a correlation:

Combination of the previous two charts. Moskovitz's fortune does not match changes in total spend or category composition.

Holden Karnofsky, head of Open Philanthropy, writes that the Bloomberg estimates might not be all that accurate:

Our available capital has fallen over the last year for these reasons. That said, as of now, public reports of Dustin Moskovitz and Cari Tuna’s net worth give a substantially understated picture of our available resources. That’s because, among other issues, they don’t include resources that are already in foundations. (I also note that META stock is not as large a part of their portfolio as some seem to assume)

Edited to add: Moskovitz replies:

Actually the Bloomberg tracker looks pretty close, though missing 3B or so of foundation assets. The Forbes one is like half the Bloomberg estimate 🤷‍♂️

— Dustin Moskovitz (@moskov) November 20, 2022

In mid 2022, Forbes put Sam Bankman-Fried’s wealth at $24B. So in some sense, the amount of money allocated to or according to Effective Altruism™ peaked somewhere close to $50B.

Funding flow restricted to longtermism & global catastrophic risks (GCRs)

The analysis becomes a bit more interesting if we look only at longtermism and GCRs:

Bar graph of OpenPhil allocation to catastrophic risks by year. AI leads most years, followed by biosecurity.

In contrast, per Forbes, the FTX Foundation had given out $160M by September 2022. My sense is that most (say, maybe 50% to 80%) of those grants went to “longtermist” cause areas, broadly defined. In addition, SBF and other FTX employees led a $580M funding round for Anthropic.

Further analysis

It’s unclear what would have to happen for Open Philanthropy to pick up the slack here. In practical terms, I’m not sure whether their team has enough evaluation capacity for an additional $100M/year, or whether they would choose to expand that capacity.

Two somewhat informative posts from Open Philanthropy on this are here and here.

I’d be curious about both interpretative analysis and forecasting on these numbers. I am up for supporting the latter by, e.g., committing to rerunning this analysis in a year.

Appendix: Code

The code to produce these plots can be found here; lines 42 to 48 make the division into categories fairly apparent. To execute this code you will need a working R installation and a document named grants.csv, which can be downloaded from Open Philanthropy’s website.
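For readers who want a quick sense of what that division looks like, here is a minimal sketch in R (not the original script) of reading grants.csv and plotting totals by year and coarse category. The column names ("Focus Area", "Amount", "Date"), the date format, and the focus-area-to-category mapping are assumptions and may need adjusting to match the actual file.

```r
# Minimal sketch, assuming grants.csv has columns "Focus Area",
# "Amount" (e.g. "$1,000,000"), and "Date"; adjust to the real file.
library(ggplot2)

grants <- read.csv("grants.csv", check.names = FALSE)

# Parse dollar amounts and grant years.
grants$amount <- as.numeric(gsub("[$,]", "", grants$Amount))
grants$year   <- as.numeric(format(as.Date(grants$Date, format = "%m/%d/%Y"), "%Y"))

# Collapse focus areas into coarse categories (illustrative mapping only).
gcr_areas <- c("Potential Risks from Advanced Artificial Intelligence",
               "Biosecurity and Pandemic Preparedness")
grants$category <- ifelse(grants$`Focus Area` %in% gcr_areas,
                          "Catastrophic risks", "Other")

# Total spending per year and category, shown as a stacked bar chart.
totals <- aggregate(amount ~ year + category, data = grants, FUN = sum)
ggplot(totals, aes(x = year, y = amount / 1e6, fill = category)) +
  geom_col() +
  labs(x = "Year", y = "Amount granted ($M)", fill = "Category")
```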

Comments

Dustin Moskovitz commented on Twitter:

Actually the Bloomberg tracker looks pretty close, though missing 3B or so of foundation assets. The Forbes one is like half the Bloomberg estimate [shrug emoji]

Thanks for this!

There's more discussion here than ever, following the FTX scandal, and most of it is really vague. For the useful parts of it - how to prepare for such shocks and how to deal with them afterwards - I hope data and visualisations like this can make the discussion more productive.

Data comments:

For some reason Open Phil's site is not allowing the spreadsheet download:

[screenshot of the error message]

Thanks for pointing this out — as an FYI, you can DM me about any problems with the OP website. I'll look into the bug (and make sure we improve the error message, yikes).

Update: Now fixed!

That "error"—I just spat out my coffee lol. 

 

I probably got around it. Here's what might be the Open Phil database of grants/investments, accessed today.

https://docs.google.com/spreadsheets/d/1F7-WOHbr5bEfV-rohIoBv4wv_4CSP9AOKCYbYd4s4Ds/edit?usp=sharing

(It has 1500 entries)

 

Mr. Gertler? 

That "error"—I just spat out my coffee lol. 

Lol, I briefly thought this was about my post when seeing this in the notifications and I "jumped" a bit.

Nuno Sempere is a fantastic person. His work and ideas are respected. Nuno is a contributor to goodness in the world. What a special human being he is :)

(There, to try to make amends for any startling)
 

I also provided a link above: <https://nunosempere.com/blog/2022/11/20/brief-update-ea-funding/.source/grants.csv> (though it won't be up to date).

Edit: Whoops, I see that the document cuts halfway through, and I don't have plans to fix it, so I'm retracting this comment.

[This comment is no longer endorsed by its author]

These are really interesting figures, thanks so much for sharing! 

Is the 2022 data up to date through November? Or does it cut off substantially earlier in the year? Wondering why it's so much lower than 2021. 

Sorry I'm being lazy here and not looking at the raw data myself. 

Hey, the data includes whatever was added to Open Philanthropy's database as of a few days ago. I imagine this does include most grants in November, but I'm not sure.

If we plot the two together, we don’t see that much of a correlation

...That looks like a fairly strong correlation to me? Maybe I'm reading this graph wrong, but only the data point for 2018 looks substantially different.

Thanks for raising awareness about this! 

I wonder how many resources are invested into assessing opportunities to increase the risk-adjusted wealth of Dustin Moskovitz. Intuitively, its trajectory does not seem all that overdetermined, given its high variation. So investing something like 0.01 % to 1 % of the wealth into assessing such opportunities appears reasonable to me, but I do not know. For a wealth of 10 G$, these fractions would amount to 1 M$ to 100 M$, so maybe such assessment might kind of be a cause area in itself.

I mean, it would make more sense to do this for GV than for Moskovitz personally.

Sure! Maybe the wealth of Good Ventures correlates well with that of Moskovitz. More generally, I was just curious to know how much is invested in assessing opportunities to grow the wealth or decrease risk, relative to what would on reflection be optimal.
