OpenPhil reviews their "near-termist, human-centric" grantmaking (i.e., criminal justice reform, immigration policy, land use reform, macroeconomic stabilization policy, and scientific research) and finds that many of these grants carry a substantial risk of failing to exceed the cost-effectiveness of GiveWell's top charities.

It appears that over the past several years, the estimated cost-effectiveness of GiveWell's top charities (as a class) has increased more than expected, whereas the "near-termist, human-centric OpenPhil grants" (as a class) have so far not produced as many hits at a similar or better level of cost-effectiveness as expected. The post also includes notes on comparing the robustness of these estimates and additional considerations for why non-GiveWell near-termist, human-centric grantmaking remains valuable.

OpenPhil says they're "planning to write more at a later date about the cost-effectiveness of [their] 'long-termist' and animal-inclusive grantmaking and the implications for [their] future resource allocation," which I'm especially excited to see next.

Comments (7)

Corporate cage-free campaigns aren't considered among the “near-termist human-centric OpenPhil grants”... they're instead in a separate "animal-inclusive" granting bucket that will be evaluated later.

Oh, right! (I skimmed past the "human-centric" part.)

Do you know if these take into account criticisms of GiveWell's methodology for estimating the effectiveness of their recommended charities?

Can you elaborate on what you mean?

This is a recent criticism of GiveWell that I didn't see responded to or accounted for in any clear way in the linked post. I haven't read the whole thing closely yet, but no section appears to address the considerations raised in that post. If those criticisms are sound, incorporating them into the analysis might make GiveWell's top-recommended charities look more 'beatable'. I was wondering if I was missing something in the post, and whether Open Phil's analysis either accounts for or incorporates that possibility.

I'm not sure this is well described as a "criticism of GiveWell's methodology for estimating the effectiveness of their recommended charities." The problem seems to apply to cost-effectiveness estimates more broadly, and the author explicitly says, "Due to my familiarity with GiveWell, I mention it in a lot of examples. I don't think the issues I raise in this post should be more concerning for GiveWell than other organizations associated with the EA movement." As such, I don't think these criticisms would make GiveWell's recommendations look more 'beatable'. Indeed, one might even think it's partly because of considerations like those cited in the article you link that GiveWell's top charities remain hard to beat, while other areas that prima facie seemed extremely promising have turned out not to be.
