Or: on the types of prioritization, their strengths and pitfalls, and how EA should balance them
The cause prioritization landscape in EA is changing. Prominent groups have shut down, others have been founded, and everyone is trying to figure out how to prepare for AI. This is the first in a series of posts examining the state of cause prioritization and proposing strategies for moving forward.
Executive Summary
* Performing prioritization work has been one of EA's main tasks, and arguably one of its main achievements.
* We highlight three types of prioritization: Cause Prioritization, Within-Cause (Intervention) Prioritization, and Cross-Cause (Intervention) Prioritization.
* We ask how much of EA prioritization work falls in each of these categories:
* Our estimates suggest that, for the organizations we investigated, the current split is 9% cause prioritization, 89% within-cause, and 2% cross-cause work.
* We then explore strengths and potential pitfalls of each level:
* Cause prioritization offers a big-picture view for identifying pressing problems but can fail to capture the practical nuances that often determine real-world success.
* Within-cause prioritization focuses on a narrower set of interventions with deeper, more specialized analysis, but risks missing higher-impact alternatives elsewhere.
* Cross-cause prioritization broadens the scope to find synergies and the potential for greater impact, yet demands complex assumptions and compromises on measurement.
* See the Summary Table below for an overview of these considerations.
* We encourage reflection and future work on the best ways of prioritizing and on how EA should allocate resources among the three types.
* With this in mind, we outline eight cruxes that sketch which factors could favor some types over others.
* We also suggest some potential next steps aimed at refining our approach to prioritization by exploring variance, value of information, tractability, and the
I've worked in advocacy for EA causes for a while, so I definitely believe in its power, but I also think the Overton window is a crucial consideration for anyone trying to mobilize the public. I'd guess this is a popular view among people who do this kind of work, but I might be wrong.
To be fair, I do think there could be value in making bold asks outside the Overton window. James Ozden has a really good piece about this. I think groups like DxE and PETA have done this for the animal movement, and it seems totally plausible to me that this has had a net positive effect.
On the other hand, I think many of the tangible changes we've seen for farmed animals have come from the incremental welfare asks that groups like Mercy For Animals and The Humane League focus on (disclaimer: I worked at the latter). These groups have been careful to keep their asks within the Overton window, which has (1) helped advocates gain broad-based public support and (2) gotten corporations and policymakers on board and willing to actually adopt the changes being asked of them.
It seems likely to me that the second point applies to AI safety, but I'm not sure about the first; I'd probably need to see more polling or message testing to know. Nonetheless, I suspect these concerns might be part of why the AI pause ask hasn't been more widely adopted among EAs (although a number of them did sign the FLI letter).