
In November, I wrote about Open Philanthropy’s soft pause of new longtermist funding commitments:

We will have to raise our bar for longtermist grantmaking: with more funding opportunities that we’re choosing between, we’ll have to fund a lower percentage of them. This means grants that we would’ve made before might no longer be made, and/or we might want to provide smaller amounts of money to projects we previously would have supported more generously ...

Open Philanthropy also need[s] to raise its bar in light of general market movements (particularly the fall in META stock) and other factors ... the longtermist community has been growing; our rate of spending has been going up; and we expect both of these trends to continue. This further contributes to the need to raise our bar ...

It’s a priority for us to think through how much to raise the bar for longtermist grantmaking, and therefore what kinds of giving opportunities to fund. We hope to gain some clarity on this in the next month or so, but right now we’re dealing with major new information and don’t have a lot to say about what it means. It could mean reducing support for a lot of projects, or for relatively few ...

Because of this, we are pausing most new longtermist funding commitments (that is, commitments within Potential Risks from Advanced Artificial Intelligence, Biosecurity & Pandemic Preparedness, and Effective Altruism Community Growth) until we gain more clarity, which we hope will be within a month or so ...

It’s not an absolute pause: we will continue to do some longtermist grantmaking, mostly when it is time-sensitive and seems highly likely to end up above our bar (this is especially likely for relatively small grants).

Since then, we’ve done some work to assess where our new funding bar should be, and we have created enough internal guidance that the pause no longer applies. (The pause stopped applying about a month ago, but it took some additional time to write publicly about it.)

What did we do to come up with new guidance on where the bar is?

What we did:

  • Ranking past grants: We created[1] a rough ranking of nearly all the grants we’d made over the last 18 months; we also included a number of grants now-defunct FTX-associated funders had made.
    • The basic idea of the ranking is essentially: “For any two grants, the grant we would make if we could only make one should rank higher.”[2] It is based on a combination of rough quantitative impact estimates, grantmaker intuitions, etc.
    • With this ranking, we are now able to take any given total Open Phil longtermist budget for the time period in question, and identify which grants would and would not have made the cut under that budget (a toy sketch of this step appears after this list).
    • In some cases, we gave separate rankings to separate “tranches” of a grant (e.g., the first $1 million of a grant might be ranked much higher than the next $1 million, because of diminishing returns to adding more funding to a given project).
  • Annual spending guidelines: We estimated how much annual spending would exhaust the capital we have available for longtermist grantmaking over the next 20-50 years (the difference between 20 years and 50 years is not huge[3] in per-year spending terms), to get some anchors on what our annual budget for longtermist grantmaking should be.
    • We planned conservatively here, assuming that longtermist work would end up with 30-50% of all funding available to Open Philanthropy.[4] OP is also working on a longer-term project to revisit how we should allocate our resources between longtermist and global health and wellbeing funding; it’s possible that longtermist work will end up with more than 50%, which would leave more room to grow.
  • Setting the bar: We divided our ranked list of grants into tiers (tier 1 being the best-ranked grants, tier 2 being the next-best-ranked grants, etc.), and considered how much we would spend if we funded everything at tier 2 and better, tier 3 and better, etc. After considering a few possibilities, we had broad all-things-considered agreement (with a heavy dose of intuition) that we should fund everything at tier 4 and better, as well as funding tier-5 grants under various conditions (not having huge time costs of grant investigation, recognizing that we might later stop funding tier-5 grants if our set of giving opportunities continues to grow, etc.).
  • I disseminated guidance along these lines to grant investigators.
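
To make that cutoff step concrete, here is a minimal sketch (my illustration, not Open Phil's actual tooling; the grant names, ranks, and amounts are invented): walk down the ranked tranches and fund each one that still fits within a candidate budget.

```python
# Hypothetical sketch of the "which grants make the cut under a given budget"
# step. Grant names, ranks, and amounts are invented for illustration.
from dataclasses import dataclass

@dataclass
class Tranche:
    name: str
    rank: int      # 1 = best; separate tranches of one grant can rank differently
    amount: float  # dollars

def funded_under_budget(tranches: list[Tranche], budget: float) -> list[Tranche]:
    """Fund tranches in rank order, skipping any that no longer fit."""
    funded, spent = [], 0.0
    for t in sorted(tranches, key=lambda t: t.rank):
        if spent + t.amount <= budget:
            funded.append(t)
            spent += t.amount
    return funded

# Toy example: the first $1M of a grant can rank above its next $1M.
tranches = [
    Tranche("Grant A (first $1M)", rank=1, amount=1_000_000),
    Tranche("Grant B", rank=2, amount=3_000_000),
    Tranche("Grant A (next $1M)", rank=3, amount=1_000_000),
    Tranche("Grant C", rank=4, amount=2_500_000),
]
print([t.name for t in funded_under_budget(tranches, budget=5_000_000)])
# -> ['Grant A (first $1M)', 'Grant B', 'Grant A (next $1M)']
```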

This was a very messy, pragmatic exercise intended to get out some quick guidance that would result in a sustainable spending level with some room for growth. We’re making plans to improve on it in a number of ways, and this may lead to further adjustments to our bar and how that bar is operationalized in the relevant program areas.

What is the bar now?

I’m not able to give much public detail on “where the bar is,” because the bar is defined with reference to specific grants (e.g., “fund everything at tier 4 and above” means “fund everything that we think is at least as good value-for-money as low-end tier-4 grants,” and there’s an assumption that grant investigators will have enough familiarity with some specific tier-4 grants to have a sense for what this means). But hopefully these numbers will be somewhat informative:

  • About 40% of our longtermist grantmaking over the last 18 months (by dollars) would have qualified for tier 4 or better (which, under the new guidance, means it would be funded). Note that this figure refers only to our longtermist grantmaking, and does not include grants by other funders (we included some of the latter in our exercise, but I’m reporting a figure based on Open Philanthropy alone because I think it will be easier to interpret).
  • About 70% would have qualified for tier 5 or better (which, under the new guidance, means it would be funded under some conditions: low time costs to investigate, and hesitance to make implicit very long-term commitments, since we might raise our bar in the future).
  • So about 40-70% of the grantmaking we did over the last 18 months would’ve qualified for funding under the new bar. I think 55% would be a reasonable point estimate.
    • This doesn’t mean we think that 45% of our past grants were “bad.” Our bar has just gotten much higher, due to the decline in other funding available and the growth in the longtermist community and other relevant communities (e.g., AI alignment), noted in the blockquote at the beginning of this post.
    • In spite of the higher bar, we expect our overall longtermist funding to be flat or up in the coming years, because there are now so many more good otherwise-unfunded giving opportunities.

Sometimes, we see strong grant applicants underestimate the strength of their applications. Though we’ve raised the bar, we still encourage people to err on the side of applying for funding.

A note on budget-based vs. value-based bar-setting

In theory, the ideal way to set the bar for our longtermist giving would be to estimate the value-per-dollar of each grant (in terms of the long-run future, perhaps roughly proxied by reduced probability of existential catastrophe) and of unknown future grants, and then make any grant whose value-per-dollar exceeds that of the lowest-expected-value[5] grant we would ever otherwise make (correspondingly refraining from that lowest-value grant).
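
To illustrate this value-based rule with a toy example (my sketch; the numbers and the cardinal "value" units are invented, e.g. basis points of existential risk reduced): sort candidate grants by value-per-dollar, fund down the list until capital runs out, and the bar is the value-per-dollar of the marginal grant.

```python
# Toy illustration of the value-based bar described above; numbers are invented.
def value_based_bar(grants: list[tuple[float, float]], capital: float) -> float:
    """grants: (estimated value, cost) pairs. Returns the value-per-dollar of
    the last (marginal) grant funded when funding in value-per-dollar order."""
    spent, bar = 0.0, float("inf")
    for value, cost in sorted(grants, key=lambda g: g[0] / g[1], reverse=True):
        if spent + cost > capital:
            break
        spent += cost
        bar = value / cost
    return bar

grants = [(100, 1e6), (40, 5e5), (30, 5e5), (10, 1e6)]  # (value, cost) pairs
print(value_based_bar(grants, capital=2e6))  # -> 6e-05, the marginal ratio
```

In practice, as the next paragraph explains, Open Phil set the bar by spending rate rather than by cardinal estimates like these.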

Instead, we set the bar based on something more like: “If we use this policy, we have a reasonable spending rate relative to our capital (specifically, our spending can roughly double before it would be on pace to spend down the capital within ~20 years).” You could think of this as corresponding to an assumption like: “Over the next few years, giving opportunities will grow faster than available capital grows; after that, we aren’t sure and just assume they’ll grow at the same rate for a while; in the long run, we want to be on pace to spend down capital within 20 years or so, and hope that other funders come in by then to the extent this whole operation still makes sense.” Of course, we can adjust our spending rate over time (and as noted above, the difference between “spend down over 20 years” and “spend down over 50 years” is not huge in per-year spending terms); this is just the rough sort of picture that we have in mind at the moment.

The first approach would be better if we could (without too much time cost) produce informative numbers. In practice, I’ve tried and seen a number of attempts to do this, and they haven’t produced action recommendations that I believe/understand enough to deviate from the action recommendations I come up with using more informal methods (like what we’ve done here). This isn’t to say that I think the more formal approach is hopeless, just that I think doing it well enough to affect our actions will require a lot more time investment than has been put in so far, and we’d rather spend our time on other things (such as sourcing and evaluating grants) for the time being.

Notes


  1. Roughly speaking, each team lead ranked grants made by their team, then I merged them into a master ranking that mostly deferred to team leads’ judgments, but incorporated my own as well. 

  2. This is a fairly similar idea to ranking by “impact per dollar (in terms of the long-run future, roughly proxied by reduced probability of existential catastrophe)” but not exactly the same; e.g., in this framework I’d prefer to make a large grant with very high impact per dollar (unusually high even by our standards) rather than a very small grant with slightly higher impact per dollar (since I expect the money saved by making the smaller grant to end up effectively spent at much lower per-dollar value). 

    This ranking can depend on the ranker’s opinion of how good future giving opportunities will be (and on lots of other hard-to-estimate things). In practice, I doubt that important variation in opinions on future giving opportunities affected the rankings much. Having discussed this further with Bastian Stern, I think "impact per dollar" is actually better. The instructions for rankers were pretty vague on this point and I suspect they were mostly using "impact per dollar" anyway. 

  3. E.g., at an annual real investment return of 5%, spending ~8% of (initial) capital each year would spend down the capital in ~20 years; spending ~5.5% would spend it down in ~50 years; spending a bit under 5% would never exhaust the capital. (A quick derivation follows these notes.) 

  4. Longtermist giving has accounted for ~30% of the funding we’ve allocated through the end of 2022. 

  5. According to estimates made at the time. Another way of thinking about this is as the “last grant” we’d ever make, in the sense that we’d prioritize all others above it. 
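
As a quick check on note 3 (my own back-of-envelope derivation, not from the post): with real return $r$ and annual spending equal to a fraction $s$ of initial capital (normalized to 1), the capital follows $C_{t+1} = (1+r)C_t - s$, giving

$$C_t = \left(1 - \frac{s}{r}\right)(1+r)^t + \frac{s}{r},$$

which hits zero at

$$t^* = \frac{\ln\bigl(s/(s-r)\bigr)}{\ln(1+r)} \quad \text{(for } s > r\text{)}.$$

With $r = 0.05$: $s = 0.08$ gives $t^* = \ln(8/3)/\ln(1.05) \approx 20$ years; $s = 0.055$ gives $t^* = \ln(11)/\ln(1.05) \approx 49$ years; and any $s \le 0.05$ never exhausts the capital, matching the figures in note 3.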

Comments (40)

Thanks for the useful post, Holden.

I think it would be great to see the full published tiered list.

In global health and development, funders (i.e., OpenPhil and GiveWell) are very specific about the bar, and about exactly who they think is under it and who they think is over it. Recently, global development funders (well, GiveWell) have even actively invited open constructive criticism and debate about their decision making. It would be great to have the same level of transparency (and openness to challenge) for longtermist grantmaking.

Is there a plan to publish the full tiered list? If not, what's the reason / best case against making it public?

To flag some of the advantages:

  • Those of us who are creating new projects would have a much better understanding of what OpenPhil would fund, and would be able to create better projects that are more aligned with OpenPhil's goals. The EA community lacks a strong longtermist incubator, and I expect this is one of the challenges.
  • Other funders could fill gaps that they believe OpenPhil has missed, or otherwise use OpenPhil's tiers in their decision making.
  • It allows OpenPhil to receive useful constructive feedback or critiques.

I think there are various reasons for not having such a list public:

  • It will (literally) create a first tier, second tier, etc. of organisations in the effective altruism community, which feels bad/confusing.
  • People will associate the tier with the organisation the grant was given to, when it was actually that specific grant that was evaluated.
  • The information provided publicly for a given grant is likely only a small subset of the information used to decide the tier, but people just looking through the list won't know or acknowledge that, leading to confusion about the actual bar.
  • If an organisation submits a funding request containing different activities, Open Phil will fund all those above the bar; but the different activities can be in different tiers, so what should be done in this case?
  • Organisations will likely want more information about why their grant is in a specific tier, which might lead to additional work for lots of people.
  • Several of the above points might just lead to confusion among people trying to understand what the funding bar is.

I'm also slightly confused about the advantages you mention:

  • Those of us who are creating new projects would have a much better understanding of what OpenPhil would fund, and would be able to create better projects that are more aligned with OpenPhil's goals. The EA community lacks a strong longtermist incubator, and I expect this is one of the challenges.

Isn't this already possible to a large extent, since OpenPhil publishes the grants they make? (I acknowledge that we are now in a period of maybe a year or so where this is not really the case, because the bar changed; maybe publishing the list would help for this period, but not in general.)

  • Other funders could fill gaps that they believe OpenPhil has missed, or otherwise use OpenPhil's tiers in their decision making.

I don't understand the first point; I think this would only work if OpenPhil also published grant requests that they don't fund(?). The second point might be true, but could also be a disadvantage.

  • It allows OpenPhil to receive useful constructive feedback or critiques.

That's true, but it could also lead to non-constructive feedback and critiques or non-constructive discussions in the community.

I'm not saying that OpenPhil definitively shouldn't publish the list, but I think there would be a lot of points for and against to weigh up.

Yeah, I somewhat agree this would be a challenge, and there is a trade-off between the time needed to do this well and carefully (as it would need to be done well and carefully) and other things that could be done.

I think it would surprise me a lot if the various issues were insurmountable. I am not an expert in how to publish public evaluations of organisations without upsetting those organisations or misleading people, but connected orgs like GiveWell do this frequently enough and must have learnt a thing or two about it in the past few years. To take one of the concerns you raise: if you are worried about people reading too much into the list and judging the organisations who requested the grants rather than the specific grants, you could publish the list in a pseudoanonymised way that removes the names of organisations and exact amounts of funding – sure, people could connect the dots, but it would help prevent misunderstanding and make it clearer that the judgement is of grants, not organisations.

 

Anyway, to answer your questions:

  • On creating new projects – it is easier for the Charity Entrepreneurship research team to assess funding availability and the bar to beat for global health projects than for biosecurity projects. Sure, we can look at where OpenPhil has given, but there is no detail there. It is hard to know how much they base their decisions on different factors, such as how trusted the people running the project are, versus some bar of expected effectiveness, versus something else. Ultimately this can make us more hesitant to try to start new organisations that would aim to get funding from OpenPhil's longtermist teams than we are to start new organisations that would aim to get funding from GiveWell (or other very transparent organisations). This uncertainty about future funding is also a barrier we see in potential entrepreneurs, and more clarity feels useful.
  • On other funders filling gaps that they believe OpenPhil has missed – I recently wrote a critique of the Long-Term Future Fund pointing out that they have ignored policy work. This has led to some other funders looking into the space. This was only possible because their grants and grant evaluations are public. (It did require having inside knowledge of the space about who was looking for funding.) Honestly, OpenPhil are already pretty good at this: you can see all their grants and identify gaps (like, I believe, no longtermist team at OpenPhil has ever given to any policy work outside the US) and then direct funds to fill those gaps. It is unclear to me how much more useful the tiers would be, but I expect the lower tiers would highlight areas where OpenPhil is unlikely to fund in the future, and other funders could look at what they think is valuable in that space and fund it.

 

(All views my own not speaking for any org or for Charity Entrepreneurship etc)

There's a lot of policy work, it's just not getting identified.

In biorisk, OpenPhil funds the Center for Health Security, NTI, and the Council on Strategic Risks. In AI, they fund GovAI, CNAS, Carnegie, and others. Those are all very policy-heavy.

The OP biosecurity and pandemic preparedness team just made a grant recently for health security policy work in Australia, albeit a smaller one.

Great! It's good to see things changing :-) Thank you for the update!

And without minimizing all the effort that went into the list, it was compiled fairly quickly with a specific purpose in mind. I'd expect OP to have devoted more of the limited time available to classifying grants near where it expected the new bars to be; ensuring high accuracy in tier 1 vs. 2 vs. 3 (maybe even vs. high 4) probably wasn't at the top of the priority list. So it would probably be safer to view the determined tiers as +/- 1 tier, which significantly limits their usefulness.

Also, unless OP released a ranked list, we wouldn't know where in a tier a grant fell. My guess is that there isn't that much difference in absolute quality between the bottom of tier 4 and the top of tier 5, and that line could move based on market conditions, cause area allocation, etc.

I do think that at least grantees should be told.

If grantee concerns are a reason against doing this, you could allow grantees to opt into having their tiers shared publicly. Even an incomplete list could be useful.

I'd personally happily opt in with the Atlas Fellowship, even if the tier wasn't very good.

If a concern is that the community would read too much into the tiers, some disclaimers and encouragement for independent thinking might help counteract that.

[This comment is no longer endorsed by its author]

I happily opt in with regard to Rethink Priorities, even if the tier wasn't very good.

Same for Lightcone.

They made ~142 grants in that 18-month period. Assuming some grantees received multiple grants, that's still maybe 100-120 grantees to contact to ask whether they want to opt in or not. Presumably most grantees will want to see, if not dispute, their tiered ranking before they opt in to publishing it. This will all take a fair amount of time -- and perhaps time at a senior level: e.g., the relevant relationship-holder (presumably the Program Officer) will need to contact the grantees, and then the CEO of the grantee will want to see the ranking and perhaps dispute it. It also runs a fair risk of damaging relationships with grantees.

So I would not be surprised if OpenPhil did not release the full tiered ranking. What they could do is release the list they considered (or confirm if I or others are correct in our attempted replication). Then we can at least know the 'universe of cases' they considered.

I'd think that getting a half dozen individual data points would be sufficient for 90+% of the value, and we're at least 1/3rd of the way there in this thread alone.

Same for QURI (Assuming OP ever evaluates/funds QURI)

I retracted my comment. I still think it would be useful for the Atlas Fellowship to know its tier, and I'd be happy for others to learn about Atlas's tier even if it was bad. 

But I think people would have all kinds of incorrect interpretations of the tiers, it would produce further low-quality discussion on the Forum (where quality already seems pretty low, especially as far as Open Phil critiques go), and it could be a hassle for Open Phil. Basically I agree with this comment, and I don't trust the broader EA community to correctly interpret the tier numbers.

Oh, I also don't know whether publishing the tiers would be straightforwardly good. Just in case anyone is thinking about making any kind of tier list, including Open Phil ranking orgs, feel free to include Lightcone in it.

Similar. I think I'm happy for QURI to be listed if it's deemed useful.

Also though, I think that sharing information is generally a good thing, this type included. 

More transparency here seems pretty good to me. That said, I get that some people really hate public rankings, especially in the early stages of them. 

I happily opt in with regards to any future organization I found, but only if the tier is pretty good.

It would also be useful for organizations to at least privately know the tiers of past grants to them, to have a better idea of how likely they are to be funded in the future. (Edit: Sanjay said this.)

If organisations were privately informed of their tier, then the additional work of asking (even in the same email) whether they would want to opt into tier sharing would be low/negligible.

Of course, people may dispute their tier or only be happy to share if they are in a high tier, but this at least slightly weakens the argument that asking people for consent for a public list would be a lot of additional work.

Thank you for the update and insight. A few questions:

1. What can the community expect regarding the renewal of funding for projects previously supported by OP that are now below the new bar? Should we expect a wave of projects to see their funding discontinued?

OP is also working on a longer-term project to revisit how we should allocate our resources between longtermist and global health and wellbeing funding; it’s possible that longtermist work will end up with more than 50%, which would leave more room to grow.

2. Can you share more about this process and any potential or anticipated effects for global health and wellbeing program areas?

I expect more funding discontinuations than usual, but we generally try to discontinue funding in a way that gives organizations time to plan around the change.

I’m not leading the longer-term process. I expect Open Philanthropy will publish content about it, but I’m not sure when.

This is useful to share, thank you.

I think it would be good if:

  • you shared with grant recipients which tier you think they are in (maybe you've already done this, but if you haven't, I think they would find it useful feedback)
  • If anyone is in tier 4 and willing to have it publicly shared that they are in that tier, I think the community would find it useful

I appreciate that many people would dislike the idea of it being public that there are three tiers higher than them, but some EA org leaders are very community-spirited and might be OK with this.

My understanding is that grantees are not informed about what tier they are in.

Can you comment a bit more on how the specific numbers of years (20 and 50) were chosen? Aren't those intervals [very] conservative, especially given that AGI/TAI timeline estimates have shortened for many? E.g., if one took seriously the predictions from [a linked Metaculus question on AGI], wouldn't it be reasonable to also have scenarios under which you might want to spend at least the AI risk portfolio in something like 5-10 years instead? Maybe this is covered somewhat by 'Of course, we can adjust our spending rate over time', but I'd still be curious to hear more of your thoughts, especially since I'm not aware of OpenPhil updates on spending plans based on shortened AI timelines, even after e.g. Ajeya has discussed her shortened timelines.

Can the people who agreement-downvoted this explain yourselves? Bogdan has a good point: if we really believe in short timelines to transformative AI we should either be spending our entire AI-philanthropy capital endowment now, or possibly investing it in something that will be useful after TAI exists. What does not make sense is trying to set up a slow funding stream for 50 years of AI alignment research if we'll have AGI in 20 years.

(Edit: the comment above had very negative net agreement when I wrote this.)

That question's definition of AGI is probably too weak—it will probably resolve true a good deal before we have a dangerously powerful AI.

Maybe, though e.g. combined with [another linked Metaculus forecast] it would still result in a high likelihood of very short timelines to superintelligence (there can be inconsistencies between Metaculus forecasts, e.g. with [a related question], as others have pointed out before). I'm not claiming we should only rely on these Metaculus forecasts, or that we should only plan for [very] short timelines, but I'm getting the impression that the community as a whole, and OpenPhil in particular, haven't really updated their spending plans with respect to these considerations (or at least this hasn't been made public, to the best of my awareness), even after updating to shorter timelines.

Aiming to spend down in less than 20 years would not obviously be justified even if one’s median for transformative AI timelines were well under 20 years. This is because we may want extra capital in a “crunch time” where we’re close enough to transformative AI for the strategic picture to have become a lot clearer, and because even a 10-25% chance of longer timelines would provide some justification for not spending down on short time frames.

This move could be justified if the existing giving opportunities were strong enough even with a lower bar. That may end up being the case in the future. But we don’t feel it’s the case today, having eyeballed the stack rank.

I agree. This lines up with models of optimal spending I worked on, which allowed for a post-fire-alarm "crunch time" in which one can spend a significant fraction of remaining capital.

Elsewhere, Holden makes this remark about the optimal timing of donations:
 


Right now there aren’t a lot of obvious places to donate (though you can donate to the Long-Term Future Fund[12] if you feel so moved).

  • I’m guessing this will change in the future, for a number of reasons.[13]
  • Something I’d consider doing is setting some pool of money aside, perhaps invested such that it’s particularly likely to grow a lot if and when AI systems become a lot more capable and impressive,[14] in case giving opportunities come up in the future.
  • You can also, of course, donate to things today that others aren’t funding for whatever reason.


And in footnote 13:

 I generally expect there to be more and more clarity about what actions would be helpful, and more and more people willing to work on them if they can get funded. A bit more specifically and speculatively, I expect AI safety research to get more expensive as it requires access to increasingly large, expensive AI models.

 

I'm taking the quote out of context a little bit here. I don't know if Holden's guess that giving opportunities will increase is one of OpenPhil's reasons to spend at a low rate. There might be other reasons. Also, Holden is talking about individual donations here, not necessarily about OpenPhil spending.

I'm adding it here because it might help answer the question "Why is the spending rate so low relative to AI timelines?" even though it's only tangentially relevant.

Post summary (feel free to suggest edits!):
In November 2022, Open Philanthropy (OP) announced a soft pause on new longtermist funding commitments, while they re-evaluated their bar for funding. This is now lifted and a new bar set.

The process for setting the new bar was:

  1. Rank past grants by both OP and now-defunct FTX-associated funders, and divide these into tiers.
  2. Under the assumption of 30-50% of OP’s funding going to longtermist causes, estimate the annual spending needed to exhaust these funds in 20-50 years.
  3. Play around with what grants would have made the cut at different budget levels, and using a heavy dose of intuition come to an all-things-considered new bar.

They landed on funding everything that was ‘tier 4’ or above, and some ‘tier 5’ under certain conditions (eg. low time cost to evaluate, potentially stopping funding in future). In practice this means ~55% of OP longtermist grants over the past 18 months would have been funded under the new bar.

(This will appear in this week's forum summary. If you'd like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries series.)

Thanks for the communication, and especially for giving percentages. Would you be able to either break it down by grants to individuals vs. grants to organizations, or note if the two groups were affected equally? While I appreciate knowing how high the bar has risen in general, I would be particularly interested in how high it has risen for the kinds of applications I might submit in the future.

One reading of this is that Open Phil's new 'bar for funding' is that

"we should fund everything at tier 4 and better, as well as funding tier-5 grants under various conditions"
"about 40-70% of the grantmaking we did over the last 18 months would’ve qualified for funding under the new bar. I think 55% would be a reasonable point estimate."

So (very roughly!) a new project would need to be ranked as better than the average longtermist grant over the past 18 months in order to be funded.

Is that a wild misrepresentation?

I looked at OpenPhil grants tagged as "Longtermism" in 2021 or 2022 (from here or the downloadable spreadsheet). I count 142 grants made since the beginning of July 2021. The bar isn't being set by grants per se, but as a percentage: "% of our longtermist grantmaking over the last 18 months (by dollars)". These 142 grants total about $235m, so half would be $117m.

For context, some grants are tiny, e.g. the smallest is $2,400. The top 10 represent over half that funding ($129m).

| Grant | Organization | Program Area | Amount | Date |
|---|---|---|---|---|
| Centre for Effective Altruism — Biosecurity Coworking Space | Centre for Effective Altruism | Biosecurity & Pandemic Preparedness | $5,318,000 | Aug-22 |
| Effective Altruism Funds — Re-Granting Support | Center for Effective Altruism | Effective Altruism Community Growth (Longtermism) | $7,084,000 | Feb-22 |
| Centre for Effective Altruism — Harvard Square Coworking Space | Center for Effective Altruism | Effective Altruism Community Growth (Longtermism) | $8,875,000 | Aug-22 |
| Redwood Research — General Support | Redwood Research | Potential Risks from Advanced AI | $9,420,000 | Nov-21 |
| Good Forever — Regranting for Biosecurity Projects | Good Forever Foundation | Biosecurity & Pandemic Preparedness | $10,000,000 | Feb-22 |
| Redwood Research — General Support (2022) | Redwood Research | Potential Risks from Advanced AI | $10,700,000 | Aug-22 |
| Californians Against Pandemics — California Pandemic Early Detection and Prevention Act Ballot Initiative | Californians Against Pandemics | Biosecurity & Pandemic Preparedness | $11,100,000 | Oct-21 |
| Massachusetts Institute of Technology — AI Trends and Impacts Research (2022) | Massachusetts Institute of Technology | Potential Risks from Advanced AI | $13,277,348 | Mar-22 |
| Funding for AI Alignment Projects Working With Deep Learning Systems | | Potential Risks from Advanced AI | $14,459,002 | Apr-22 |
| Center for Security and Emerging Technology — General Support (August 2021) | Center for Security and Emerging Technology | Potential Risks from Advanced AI | $38,920,000 | Aug-21 |
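
For what it's worth, here is a rough sketch of how one might replicate these figures from the downloadable spreadsheet (the file name, column names, and focus-area labels are my guesses about the export format, not verified):

```python
import pandas as pd

# Hypothetical replication of the figures above from Open Phil's grants export.
df = pd.read_csv("grants.csv")

# Amounts may be formatted with "$" and commas; coerce them to numbers.
df["Amount"] = pd.to_numeric(
    df["Amount"].astype(str).str.replace(r"[$,]", "", regex=True), errors="coerce"
)
df["Date"] = pd.to_datetime(df["Date"], errors="coerce")

longtermist = df[
    df["Focus Area"].isin([
        "Potential Risks from Advanced AI",
        "Biosecurity & Pandemic Preparedness",
        "Effective Altruism Community Growth (Longtermism)",
    ])
    & (df["Date"] >= "2021-07-01")
]

print(len(longtermist))                    # grant count (~142 per this comment)
print(longtermist["Amount"].sum() / 1e6)   # total in $m (~235 per this comment)
print(longtermist.nlargest(10, "Amount"))  # the top-10 table above
```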

Edit: cut the following text thanks to Linch's spot:

Though I note that you mentioned "we also included a number of grants now-defunct FTX-associated funders had made." -- it would be helpful for you to at least release the spreadsheet of grants you considered, even if for very understandable reasons you don't want to publish the Tier ranking.

I looked at OpenPhil grants  tagged as "Longtermism" in 2021 or 2022 (from here or the downloadable spreadsheet). Though I note that you mentioned "we also included a number of grants now-defunct FTX-associated funders had made."

I think this is screened off by Holden's explanation:

About 40% of our longtermist grantmaking over the last 18 months (by dollars) would have qualified for tier 4 or better (which, under the new guidance, means it would be funded). Note that this figure refers only to our longtermist grantmaking, and does not include grants by other funders (we included some of the latter in our exercise, but I’m reporting a figure based on Open Philanthropy alone because I think it will be easier to interpret). [emphasis mine; included surrounding text in the quote so it's easier to interpret]

Ah awesome, thanks! I second-guessed myself by adding that, but I should have third-guessed myself. The midwit meme in real life.

This is very helpful.

Might you have a rough estimate for how much the bar has gone up in expected value?

E.g. is the marginal grant now 2x, 3x etc. higher impact than before?

I don’t have a good answer, sorry. The difficulty of getting cardinal estimates for longtermist grants is a lot of what drove our decision to go with an ordinal approach instead.

Thanks for the update, mate. Does this increased selectivity affect just longtermist grants, and/or grants from EA funders other than OpenPhil?

Meta shares back up ;) 
