Or: on the types of prioritization, their strengths and pitfalls, and how EA should balance them
The cause prioritization landscape in EA is changing. Prominent groups have shut down, others have been founded, and everyone is trying to figure out how to prepare for AI. This is the first in a series of posts examining the state of cause prioritization and proposing strategies for moving forward.
Executive Summary
* Performing prioritization work has been one of EA's main tasks, and arguably one of its main achievements.
* We highlight three types of prioritization: Cause Prioritization, Within-Cause (Intervention) Prioritization, and Cross-Cause (Intervention) Prioritization.
* We ask how much of EA prioritization work falls in each of these categories:
* Our estimates suggest that, for the organizations we investigated, the current split is 89% within-cause work, 2% cross-cause, and 9% cause prioritization.
* We then explore strengths and potential pitfalls of each level:
* Cause prioritization offers a big-picture view for identifying pressing problems but can fail to capture the practical nuances that often determine real-world success.
* Within-cause prioritization focuses on a narrower set of interventions with deeper, more specialised analysis, but risks missing higher-impact alternatives elsewhere.
* Cross-cause prioritization broadens the scope to find synergies and the potential for greater impact, yet demands complex assumptions and compromises on measurement.
* See the Summary Table below for an overview of these considerations.
* We encourage reflection and future work on what the best ways of prioritizing are and how EA should allocate resources between the three types.
* With this in mind, we outline eight cruxes that sketch what factors could favor some types over others.
* We also suggest some potential next steps aimed at refining our approach to prioritization by exploring variance, value of information, tractability, and the
From the comments on the NYT piece, two notes on communicating longtermism to people-like-NYT-readers:
Re 2: I do think it's confusing to act like longtermism is non-obvious unless you're emphasizing weird implications, like our calculations being dominated by the distant future and x-risk, and things at least as weird as digital minds filling the universe.
Basically, William MacAskill's longtermism, or EA longtermism, is trying to solve the distributional-shift issue. Most cultures with long-term thinking assume there is no distributional shift, i.e. that no key assumption of the present turns out to be wrong. If that assumption were correct, we shouldn't interfere with cultures, as they would converge to local optima. But it isn't correct, and thus longtermism has to deal with weird scenarios like AI or x-risk.
Thus the form of EA longtermism is not obvious, as it can't assume that there's no distributional shift into...