
Six weeks ago I shared a Metaculus question series I had authored, focused mainly on predicting grants by Open Philanthropy in 2025 and 2030, along with some questions on new large EA-aligned donors. This post summarises the predictions on these questions so far. For some context on why we might value the answers to these questions, see the original post.

Key points

  • Participation in forecasting the questions was low, and binary questions were more popular than continuous distribution questions, which means the forecasts are likely of somewhat lower quality than they could have been with more attention. Future forecasting series authors would do well to put more time than I did into generating interest in their questions or making them more appealing.
  • Because I wanted to produce forecasts which could reliably be scored, I focused largely on Open Phil donations. This is only one source of funding, however, and good estimates of other future sources also seem valuable to me.
  • Total grants are projected to increase by 70% (vs the average of 2019 and 2020 levels) to $485m in 2025 in the median scenario, with a 20% chance of exceeding $1bn. For 2030, the median projection is $906m, with a 17% chance of exceeding $2bn. Forecasters project a 3% chance of less than $100m in grants by Open Phil in 2025 and a 9% chance of less than $100m in 2030.
  • Forecasters expect the largest percentage growth in grants (taking the median prediction relative to the 2019/20 average) in the Animal Welfare cause area, followed by Scientific Research. Animal Welfare is also expected to see the largest absolute dollar growth by 2030.
  • There is substantial uncertainty about AI risk grants, with a projected 23% chance of less than $10m being granted in 2030 and a 10% chance of more than $400m.

I think all of these figures should be treated as speculative, particularly the individual cause area projections, which are best viewed as the aggregated estimates of 6-10 people with varying levels of knowledge of Open Phil, who have thought about the questions to varying degrees and consider these numbers plausible. I would place a lot more weight on an internal Open Phil projection than on these forecasts, for example (and I would expect these forecasts to update significantly towards such a projection if one were made public).

Despite this disclaimer, I think there are some lessons one could take from these forecasts, mostly in highlighting plausible future directions for Open Phil, e.g.

  • The AI forecast puts a 23% probability on Open Phil granting less than $10m to the cause area in 2030. This suggests that forecasters think it plausible that the field will not be a central Open Phil or EA cause at that point. This has some implications for how one might make career plans in the area, if one were risk-averse and imagined oneself working in the field for a long time. I expect that for most people this would not have a big effect on their decision, especially as people in this area are likely to build marketable skills which could still serve them well regardless.
  • The Animal Welfare forecasts mainly see this field as expanding substantially. If one were worried about job prospects and therefore reluctant to start a career in this area, this could provide some reassurance that support for work in Animal Welfare is likely to continue and grow.

Number of predictions

The questions mostly took the form “How much money will Open Philanthropy grant towards [cause] in [year]?”, and got the following numbers of unique predictors (links are to the question):

| Year | Total | Animal Welfare | Global Health & Dev | AI Risk | Criminal Justice Reform | Biosecurity and Pandemics | Scientific Research |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 2025 | 16 | 9 | 9 | 10 | 8 | 8 | 9 |
| 2030 | 13 | 6 | 7 | 7 | 7 | 6 | 6 |

Note that in all cases the number of predictors includes me, as I predicted on every question in the series.

There were also 24 unique predictors on the binary question “Will there be another donor on the scale of 2020 Good Ventures in the Effective Altruist space in 2026?” and 15 unique predictors on the question “When will Good Ventures first donate 5% of Dustin Moskovitz's wealth in one year?”

I later asked another binary question, “Will Sam Bankman-Fried have donated $1bn 2021 USD to charitable causes before 2031?”, which has been open for 16 days and so far has 17 unique predictors, without the benefit of being promoted on the EA Forum as the others were. This suggests that binary questions more easily attract forecasts, which was my intuition already, and seems relevant to future efforts to write questions - if they can be turned into binary questions without too much loss of value, this might be preferable for getting more attention from forecasters.

The low numbers of predictors on the grant questions could be because binary questions are easier to think about than continuous ones, because constructing a distribution for continuous questions is somewhat laborious, or because the formulaic “how much will OP grant to [cause] in [year]?” questions just weren’t very interesting for most forecasters. Anyone trying a similar project should, I think, put more effort than I did into optimising for engagement.

The predictions

The following table shows (1) the amounts donated (in millions of USD) to each cause area and in total in 2019 and 2020 (according to the Open Phil grants database) and (2) the 5th, 10th, 25th, 50th, 75th, and 90th percentiles of the predicted amounts for 2025 and 2030.

Where the value is “2000+”, “<10”, or “<50” below, this was the top or bottom of the permissible forecast range, so greater precision outside that range is not possible with this data.

   

| Cause (links) | 2019 | 2020 | 2025 5% | 2025 10% | 2025 25% | 2025 50% | 2025 75% | 2025 90% | 2030 5% | 2030 10% | 2030 25% | 2030 50% | 2030 75% | 2030 90% |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Total (1, 2) | 298 | 271 | 135 | 205 | 318 | 489 | 894 | 1450 | <50 | 130 | 469 | 906 | 1640 | 2000+ |
| Animal Welfare (1, 2) | 40 | 25 | 26 | 40 | 71 | 112 | 173 | 260 | <10 | 10 | 58 | 160 | 330 | 600 |
| Global Health & Dev (1, 2) | 41 | 101 | 42 | 57 | 87 | 129 | 188 | 280 | <10 | <10 | 27 | 90 | 237 | 560 |
| AI Risk (1, 2) | 63 | 15 | 11 | 18 | 35 | 64 | 118 | 220 | <10 | <10 | 10 | 39 | 145 | 405 |
| Criminal Justice Reform (1, 2) | 56 | 10 | <10 | <10 | <10 | 14 | 25 | 46 | <10 | <10 | <10 | 16 | 48 | 120 |
| Biosecurity and Pandemics (1, 2) | 22 | 26 | <10 | 15 | 25 | 40 | 67 | 116 | <10 | <10 | 14 | 36 | 82 | 180 |
| Scientific Research (1, 2) | 54 | 67 | 25 | 43 | 85 | 160 | 318 | 540 | <10 | 10 | 42 | 120 | 246 | 475 |

What can we say from the headline numbers? The clearest takeaway, I think, is that predictors expect Open Phil to continue to ramp up their giving, and that they don’t expect grants to reach a steady state by 2025. Rather, grants are projected to increase by 70% (vs the 2019/20 average) by 2025, and then by a further 85% by 2030. 
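
As a quick sanity check on those growth figures, the arithmetic below uses the table's numbers ($298m and $271m actually granted in 2019 and 2020, and median forecasts of $489m and $906m); it simply reproduces the percentages quoted above rather than adding any new analysis.

```python
# Rough check of the growth rates quoted above (all values in millions of USD,
# taken from the table; medians are the 50th percentile forecasts).
grants_2019, grants_2020 = 298, 271
median_2025, median_2030 = 489, 906

baseline = (grants_2019 + grants_2020) / 2            # 2019/20 average: ~284.5
growth_to_2025 = median_2025 / baseline - 1           # ~0.72, i.e. roughly 70%
growth_2025_to_2030 = median_2030 / median_2025 - 1   # ~0.85, i.e. a further ~85%
print(f"{growth_to_2025:.0%}, {growth_2025_to_2030:.0%}")
```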

Overall

This also suggests an expectation that Open Phil will, in the base case, not begin spending down their endowment until the 2030s at the earliest, as spending $900m annually is lower than a naive (and probably conservative) expectation of a 4% annual return on Dustin Moskovitz's current wealth ($24.3bn, per Forbes). This is consistent with the predictions on the question “When will Good Ventures first donate 5% of Dustin Moskovitz's wealth in one year?”, which currently has 25th, 50th and 75th percentile predictions of 2029, 2036, and ‘after 2040/never’ respectively.
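
The endowment point is a one-line comparison; the sketch below just restates it, taking the Forbes wealth figure and the naive 4% return assumption from the paragraph above.

```python
# Naive comparison of a 4% annual return on current wealth against the median
# 2030 grant forecast (figures from the post; all amounts in millions of USD).
wealth = 24_300                       # Dustin Moskovitz's wealth per Forbes, in $m
naive_annual_return = 0.04 * wealth   # ~$972m of assumed annual returns
median_2030_grants = 906              # median forecast of 2030 grants, in $m
print(naive_annual_return > median_2030_grants)  # True: median spending is below naive returns
```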

Other conclusions from this data will be more tentative, as the sample sizes are quite small (the individual cause predictions for 2030 all have either 6 or 7 unique predictors, including me). 

Question consistency

A question I think it is important to consider here is “are the question predictions consistent with each other?” This is non-trivial to answer, as one cannot simply add up the individual medians and see if they match the overall medians. It’s plausible that we would expect at least one of the areas to reach its 75th percentile outcome in the median case, for example (the donations are likely not independent, but nor are they perfectly correlated).

As the distributions are positively skewed (that is, there is a long tail of possible extremely high values, such that the median of each is significantly lower than the mean), and as there are other small cause areas not included here and the potential for new ones, especially over longer time horizons, we should expect the individual medians to add up to less than the overall median. In 2025 they do not, with individual medians adding to $519m, slightly higher than the $489m overall median. This suggests either that predictors were somewhat inconsistent, or that those who predicted on individual questions were more optimistic than those who did not. It’s still pretty close, though, so I don’t think there’s much inconsistency to explain here. See this Guesstimate model for illustration, noting also that the distributions predictors used have fatter tails than the lognormal distribution used in Guesstimate.
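
To illustrate why the individual medians should generally sum to less than the overall median when distributions are positively skewed and imperfectly correlated, here is a minimal Monte Carlo sketch. This is not the forecasters' model or the linked Guesstimate model; the lognormal scale and the 0.5 pairwise correlation are arbitrary illustrative assumptions.

```python
# Illustrative sketch: sample correlated, positively skewed (lognormal)
# cause-area totals and compare the sum of per-cause medians with the
# median of the summed total.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_causes, rho = 200_000, 6, 0.5   # rho: assumed pairwise correlation

# Correlated standard normals, exponentiated to give lognormal cause areas.
corr = np.full((n_causes, n_causes), rho) + (1 - rho) * np.eye(n_causes)
z = rng.multivariate_normal(np.zeros(n_causes), corr, size=n_samples)
causes = np.exp(z)                           # arbitrary lognormal scale

sum_of_medians = np.median(causes, axis=0).sum()
median_of_sum = np.median(causes.sum(axis=1))
print(sum_of_medians, median_of_sum)         # median_of_sum comes out higher
```

With rho set to 1 (perfectly correlated cause areas) the two quantities coincide in this sketch, so the less correlated the cause areas are, the larger the gap we should expect.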

The 2030 distributions are much more detached from each other: most cause areas have lower median predictions for 2030 than for 2025, but much higher 75th and 90th percentile figures, and the individual medians sum to only around half of the overall median. This is consistent with expecting Open Phil to find new cause areas not currently listed, or with one or two of the current cause areas growing significantly and taking up a much larger fraction of Open Phil giving. The 2030 questions also include more predictions of causes going to ~0 ($10m was the lower bound, and every cause had close to 10% or more at or below this figure), which makes sense to me, as over long time horizons some causes may be dropped from the portfolio. Still, the differences here are quite large and may be a consequence of a different forecaster population predicting on the 2030 questions.

By cause area

The area projected to grow most is Animal Welfare, which is expected to go from the 5th biggest area in 2019/20 to the 3rd biggest in 2025 and the biggest (by median estimate) in 2030. This represents confidence that it will be a big piece of the portfolio. This could be because Animal Welfare, along with Scientific Research and Global Poverty, seems like a very safe bet to still be an area worth donating to in 2030, as these areas seem very unlikely to be ‘solved’ or no longer funding-constrained by then. Still, it is a little surprising to me to see it expected to grow most in the median case, and to also have the biggest 75th and 90th percentile numbers. I would be curious to see a sketch of what a world where Open Phil is giving $600m/year to Animal Welfare causes (the 90th percentile) looks like. It’s also possible that, given the small number of predictors on these questions, the population is biased towards one cause or another and more likely to make optimistic predictions, but I don’t have a compelling reason to think this is happening.

I think the biggest surprise to me is AI risk grants being projected to stay roughly the same size in the median case. AI risk also has the widest interquartile range: at [$10m, $145m] for 2030, there is almost a factor of 15 separating the 25th and 75th percentile cases. This being the most uncertain cause seems approximately correct to me, in that I am much more uncertain about where the EA community will be regarding AI risk in 2030 than about other areas, but the median estimates look low to me.

Another point of note is that the area seen by forecasters as least likely to grow is Criminal Justice Reform: the 90th percentile value for 2025 is actually less than the amount granted in 2019. I am not sure why this is. Perhaps Open Phil has said something publicly which I haven’t seen, or perhaps predictors are influenced by the area being seen as less of a central EA cause, or by the field becoming more crowded after the events of 2020.

Other questions

The binary question I asked, “Will there be another donor on the scale of 2020 Good Ventures in the Effective Altruist space in 2026?”, currently has a community prediction of 51%. This has been pretty constant since the question opened, despite more positive news about Sam Bankman-Fried’s wealth in the last few weeks. As Peter Wildeford’s comment on the question suggests, Bankman-Fried does seem the most likely candidate to fulfil the conditions.

I recently asked a further question specifically about Bankman-Fried, “Will Sam Bankman-Fried have donated $1bn 2021 USD to charitable causes before 2031?”, which currently has a community prediction of 62% from 17 predictors. So most predictors consider Bankman-Fried more likely than not to have substantially ramped up his giving by the end of 2030, but they still see a sizeable chance that he does not.
 

This post and question series are a project of Rethink Priorities.

It was written by Charles Dillon, a volunteer for Rethink Priorities. Thanks to Michael Aird, Lizka Vaintrob and Peter Wildeford for feedback on this post. If you like our work, please consider subscribing to our newsletter. You can see all our work to date here.


 

Comments

This suggests that binary questions more easily attract forecasts, which was my intuition already, and seems relevant to future efforts to write questions - if they can be turned into binary questions without too much loss of value, this might be preferable for getting more attention from forecasters. 

  1. Do you have a sense of why this is the case? Is it typically easier/faster to make binary than continuous forecasts? Are there any other mechanisms?
  2. Do you have a sense of how strong that effect might tend to be? Like whether it can typically be expected to increase the number of forecasters by 50%, 100%, 200%, etc. relative to how many would've forecasted on the equivalent continuous questions?

It is definitely easier - the answer is one-dimensional, whereas for continuous questions there's a lot more going back and forth between the cumulative distribution function and the probability density function, and more thinking about corner cases.

E.g. for “When will the next Supreme Court vacancy arise?” vs “Will there be a vacancy by [year]?”: in the former case you have to think about when a decision to retire might be timed; in the latter you just need to think about whether the judge will do it.

Other mechanisms - it's possible the average binary question is more interesting or attention-grabbing.

As for your second question, I looked at all the questions from 2019 and 2020 just now, and the median number of unique predictors on a binary question was 75, vs 38 for a continuous one. The mean was 97 vs 46. But this does not control for the questions being different. There were 942 continuous questions over the time window and 727 binary questions.
