Cause prioritization
Identifying and comparing promising focus areas for doing good

Quick takes

5 · 7d · 4
At what level of compute spending will AI Safety research be cut off from being considered effective altruism (if any)? Of course, saving humanity from misaligned AI could be argued to be close to priceless. But how many experiments have a direct theory of change (ToC) for how they will mitigate existential risk? Perhaps a general one is fine at low compute ("it only costs $10, and 'control research' is generally thought to be a good research agenda").

But what about $5,000? What about $10,000? These numbers start to compare to, or surpass, what organizations like Giving What We Can receive from someone who donates for a whole year. They also start to compete with saving a human life via programmes like those in GiveWell's top charities.

What about $20,000? $30,000? $50,000? Over what time frame are we comfortable spending that much money on compute and still considering it money well (effectively) spent? A year? A month? A single experiment? What kind of discovery is worth $50,000 in AIS research? Should we expect a clear ToC?

I'm very pro AI Safety, but I'm worried about some of the compute budgets being thrown around (compared to the information gained). I'm wondering: is anyone else worried about a movement (famously) concerned with cost-effectiveness continuing down this path? Should we encourage more accountability?
122 · 1y · 14
I sometimes say, in a provocative/hyperbolic sense, that the concept of "neglectedness" has been a disaster for EA. I do think the concept is significantly over-used (ironically, it's not neglected!), and people should just look directly at the importance and tractability of a cause at current margins. Maybe neglectedness is useful as a heuristic for scanning thousands of potential cause areas. But ultimately, it's just a heuristic for tractability: how many resources are going towards something is evidence about whether additional resources are likely to be impactful at the margin, because more resources mean it's more likely that the most cost-effective solutions have already been tried or implemented.

But these resources are often deployed ineffectively, such that it's often easier to just directly assess the impact of resources at the margin than to do what the formal ITN framework suggests, which is to break this one hard question into two hard ones: you have to assess something like the abstract overall solvability of a cause (namely, "percent of the problem solved for each percent increase in resources," as if this is likely to be a constant!) and the neglectedness of the cause. That brings me to another problem: assessing neglectedness might sound easier than assessing abstract tractability, but how do you weigh up the resources in question, especially if many of them are going to inefficient solutions?

I think EAs have indeed found lots of surprisingly neglected (and important, and tractable) sub-areas within extremely crowded overall fields when they've gone looking. Open Phil has an entire program area for scientific research, on which the world spends >$2 trillion, and that program has supported Nobel Prize-winning work on computational design of proteins. US politics is a frequently cited example of a non-neglected cause area, and yet EAs have been able to start or fund work in polling and message-testing that has outcompeted incumbent orgs by looking for the highest-v
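For readers unfamiliar with the decomposition the take is criticizing: the formal ITN framework factors marginal cost-effectiveness into a product of three ratios. The sketch below is an illustrative toy model with made-up numbers, not anyone's actual cause-prioritization code; it only shows the arithmetic structure (and why treating tractability as a constant does all the work).

```python
# Toy sketch of the ITN decomposition (illustrative numbers only):
#   good per extra dollar = importance * tractability * neglectedness
# where
#   importance    = good done per % of the problem solved
#   tractability  = % of the problem solved per % increase in resources
#   neglectedness = % increase in resources per extra dollar
def marginal_cost_effectiveness(importance, tractability, current_resources):
    # An extra dollar is a larger fractional increase when the field is small;
    # this is the only place "resources already deployed" enters the formula.
    neglectedness = 1.0 / current_resources
    return importance * tractability * neglectedness

# Two hypothetical causes: identical importance and tractability,
# differing only in how crowded they are.
crowded = marginal_cost_effectiveness(importance=1e6, tractability=0.5,
                                      current_resources=1e9)
neglected = marginal_cost_effectiveness(importance=1e6, tractability=0.5,
                                        current_resources=1e6)

# 1000x fewer existing resources -> 1000x higher marginal impact,
# but only *if* tractability really were constant across scales.
assert round(neglected / crowded) == 1000
```

The take's point is that the middle factor is rarely constant, so splitting the single empirical question ("what does a marginal dollar buy here?") into two abstract ones can obscure more than it clarifies.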
36 · 6mo · 6
An informal research agenda on robust animal welfare interventions and adjacent cause prioritization questions

Context: As I started filling out this expression of interest form to be a mentor for Sentient Futures' project incubator program, I came up with the following list of topics I might be interested in mentoring. And I thought it was worth sharing here. :) (Feedback welcome!) Last small update to add links to new things: January 30, 2026.

Animal-welfare-related research/work:
1. What are the safest (i.e., most backfire-proof)[1] consensual EAA interventions? (overlaps with #3.c and may require #6.)
   1. How should we compare their cost-effectiveness to that of interventions that require something like spotlighting or bracketing (or more thereof) to be considered positive?[2] (may require A.)
2. Robust ways to reduce wild animal suffering
   1. New/underrated arguments regarding whether reducing some wild animal populations is good for wild animals (a brief overview of the academic debate so far here).
   2. Consensual ways of affecting the size of some wild animal populations (contingency planning that might become relevant depending on results from the above kind of research).
      1. How do these and the safest consensual EAA interventions (see 1) interact?
   3. Preventing the off-Earth replication of wild ecosystems.
3. Uncertainty on moral weights (some relevant context in this comment thread).
   1. Red-teaming of different moral weights that have been explicitly proposed and defended (by Rethink Priorities, Vasco Grilo, ...).
   2. How and how much do cluelessness arguments apply to moral weights and inter-species tradeoffs?
   3. What actions are robust to severe uncertainty about inter-species tradeoffs? (overlaps with #1.)
4. Considerations regarding the impact of saving human lives (c.f. top-GiveWell charities) on farmed and wild animals. (may require 3 and 5.)
5. The impact of agriculture on soil nematodes and other numerous so
28 · 4mo · 5
Gavi's investment opportunity for 2026-2030 says they expect to save 8 to 9 million lives, for which they would require a budget of at least $11.9 billion[1]. Unfortunately, Gavi only raised $9 billion, so they have to make some cuts to their plans[2]. And you really can't reduce spending by $3 billion without making some life-or-death decisions. Gavi's CEO has said that "for every $1.5 billion less, your ability to save 1.1 million lives is compromised"[3]. This would equal a marginal cost of $1,363 per life saved, which seems a bit low to me. But I think there is a good chance Gavi's marginal cost per life saved is still cheap enough to clear GiveWell's cost-effectiveness bar. GiveWell hasn't made grants to Gavi, though. Why?

1. https://www.gavi.org/sites/default/files/investing/funding/resource-mobilisation/Gavi-Investment-Opportunity-2026-2030.pdf, pp. 20 & 43
2. https://www.devex.com/news/gavi-s-board-tasked-with-strategy-shift-in-light-of-3b-funding-gap-110595
3. https://www.nature.com/articles/d41586-025-02270-x
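The marginal-cost figure in the take is simple division of the two numbers in the CEO quote. A quick back-of-the-envelope check (this reproduces the take's arithmetic only; it is not an official Gavi or GiveWell cost-effectiveness estimate):

```python
# Numbers from the quoted CEO statement: "for every $1.5 billion less,
# your ability to save 1.1 million lives is compromised".
funding_shortfall_usd = 1.5e9   # $1.5 billion
lives_compromised = 1.1e6       # 1.1 million lives

marginal_cost_per_life = funding_shortfall_usd / lives_compromised
# ~= 1363.64, i.e. the roughly $1,363 per life saved cited in the take
assert int(marginal_cost_per_life) == 1363
```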
29 · 5mo · 6
* Re the new 2024 Rethink Cause Prio survey: "The EA community should defer to mainstream experts on most topics, rather than embrace contrarian views. [“Defer to experts”]" 3% strongly agree, 18% somewhat agree, 35% somewhat disagree, 15% strongly disagree.
* This seems pretty bad to me, especially for a group that frames itself as recognizing intellectual humility/that we (at the base rate for an intellectual movement) are so often wrong.
* (Charitable interpretation) It's also just the case that EAs tend to have lots of views they're being contrarian about because they're trying to maximize the expected value of information (often justified with something like: "usually contrarians are wrong, but when they are right, they are often more valuable for information than the average person who just agrees").
* If this is the case, though, I fear that some of us are confusing the norm of being contrarian for instrumental reasons with being contrarian for "being correct" reasons. Though let me know if you disagree.
4 · 17d
Is the School for Moral Ambition's Tax Fairness Fellowship a good example of EA principles in action? The fellowship places professionals inside tax justice organisations working on taxing the ultra-wealthy. Their Tobacco Free Future Fellowship has apparently been evaluated in terms of DALYs (internal analysis) and found to be competitive with GiveWell's top charities. I suppose a similar evaluation would be even harder to run for the tax fairness programme. I would appreciate multiple short intuitions, low bar to answer!
2 · 9d · 1
The 85 million children we cannot count

New wars are starting before the old ones have ended. Humanitarian budgets are being cut with a chainsaw. And in this time of ultra-prioritisation, even more than before, we are asked to prove that every euro or dollar is spent on saving lives.

I have been working in this sector for 15 years. I have seen its inefficiencies up close. I have also seen what it holds together. For the last few years, I have been exploring Effective Altruism and asking whether its principles can be brought into mainstream humanitarian aid. Whether that is even possible. The global aid cuts are now forcing that question into the open. I find that both necessary and deeply unsettling.

Necessary, because the push toward cost-effectiveness is overdue. The tools are strong. Metrics like the Disability-Adjusted Life Year and the Wellbeing-Adjusted Life Year have made trade-offs clearer. Work by GiveWell and Rethink Priorities has improved how we compare and prioritise interventions.

Unsettling, because the version of effectiveness thinking now leaking into institutional aid is the narrowest version available. EA itself has internal language for working under uncertainty. Hits-based giving, cluster thinking, and work under deep uncertainty are part of the framework. Funders like Open Philanthropy¹ regularly support areas with long causal chains and incomplete evidence when the potential upside is large. None of that nuance is what is showing up in the rooms where humanitarian budgets are being cut. What is showing up is the most legible version of cost-effectiveness, deployed as a universal filter.

There is also a division of labour problem. Effective Altruism began as a framework for philanthropic choice: how should private donors direct marginal giving if they want to do the most good? Official development assistance was meant to do something different. Public funding is supposed to hold up systems, sustain services, and maintain the protective
52 · 1y · 2
I'd love to see an 'Animal Welfare vs. AI Safety/Governance Debate Week' happening on the Forum. The cause of risks from AI has grown massively in importance in recent years, and has become a priority career choice for many in the community. At the same time, the Animal Welfare vs Global Health Debate Week demonstrated just how important and neglected the cause of animal welfare remains. I know several people (including myself) who are uncertain/torn about whether to pursue careers focused on reducing animal suffering or mitigating existential risks related to AI. It would help to have rich discussions comparing both causes' current priorities and bottlenecks, and a debate week would hopefully surface some useful crucial considerations.