Or: the types of prioritization, their strengths and pitfalls, and how EA should balance them
The cause prioritization landscape in EA is changing. Prominent groups have shut down, others have been founded, and everyone is trying to figure out how to prepare for AI. This is the first in a series of posts examining the state of cause prioritization and proposing strategies for moving forward.
Executive Summary
* Performing prioritization work has been one of the main tasks, and arguably achievements, of EA.
* We highlight three types of prioritization: Cause Prioritization, Within-Cause (Intervention) Prioritization, and Cross-Cause (Intervention) Prioritization.
* We ask how much of EA prioritization work falls in each of these categories:
* Our estimates suggest that, for the organizations we investigated, the current split is 89% within-cause work, 2% cross-cause, and 9% cause prioritization.
* We then explore strengths and potential pitfalls of each level:
* Cause prioritization offers a big-picture view for identifying pressing problems but can fail to capture the practical nuances that often determine real-world success.
* Within-cause prioritization focuses on a narrower set of interventions with deeper, more specialised analysis, but risks missing higher-impact alternatives elsewhere.
* Cross-cause prioritization broadens the scope to find synergies and the potential for greater impact, yet demands complex assumptions and compromises on measurement.
* See the Summary Table below for an overview of these considerations.
* We encourage reflection and future work on what the best ways of prioritizing are and how EA should allocate resources between the three types.
* With this in mind, we outline eight cruxes that sketch what factors could favor some types over others.
* We also suggest some potential next steps aimed at refining our approach to prioritization by exploring variance, value of information, tractability, and the
Hi Aaron,
I think it's great that you ask these questions. I wouldn't assume that the majority of the community already has a crystal-clear grasp of them, since (1) they are not straightforward at all, and (2) as far as I know, the answers are not really consolidated in a single post you can read in five minutes.
GiveWell's current estimate is $3,000-$5,000 per life saved. This is the range they communicate on their Top Charities page and explain here. They have not updated this messaging since November 2020, so that may change soon.
For a rough overview of the calculation process, this example may help. It is a complex process that starts with the evidence of effectiveness for a particular intervention but then includes a host of factors for which GiveWell calculate and regularly update their best estimates. Some of the factors mentioned in the example I just linked to are:
If you want to dig deeper, you can go over the cost-effectiveness spreadsheets on this page Michael shared in a previous answer or read this detailed guide.
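To make the shape of that calculation more concrete, here is a deliberately simplified sketch in Python. It loosely mimics the structure of a cost-per-life-saved estimate for a net-distribution program; every number below is hypothetical and chosen purely for illustration, not taken from GiveWell's actual models.

```python
# Toy cost-per-life-saved calculation (all inputs are hypothetical).
cost_per_net_delivered = 5.00   # dollars per net, including distribution (hypothetical)
people_covered_per_net = 1.8    # average people protected per net (hypothetical)
baseline_mortality = 0.004      # annual deaths per covered person without nets (hypothetical)
mortality_reduction = 0.20      # relative reduction in deaths from net use (hypothetical)
adjustment_factor = 0.75        # catch-all downward adjustment for wastage,
                                # imperfect usage, funging, etc. (hypothetical)

# Expected deaths averted for each net distributed.
deaths_averted_per_net = (
    people_covered_per_net * baseline_mortality * mortality_reduction * adjustment_factor
)

cost_per_life_saved = cost_per_net_delivered / deaths_averted_per_net

print(f"Deaths averted per net: {deaths_averted_per_net:.5f}")
print(f"Cost per life saved: ${cost_per_life_saved:,.0f}")
```

In the real spreadsheets, each of these inputs is itself the product of further modelling and evidence review, and GiveWell updates them regularly, which is part of why the headline range shifts over time.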
Your second question was about what "saving a life" actually means. Holden (GiveWell co-founder, now Open Philanthropy's co-CEO) wrote this post about it in 2007. Some snippets:
I will find out whether GiveWell has revisited that more recently.
There is no perfect calculation of all the effects of a program, but I think GiveWell's effort is impressive (and, as far as I can tell, unmatched in terms of rigor). I think the highest value is in its ability to differentiate top programs from the rest, even if the figures are imperfect.