I like Open Phil's worldview diversification. But I don't think their current roster of worldviews does a good job of justifying their current practice. In this post, I'll suggest a reconceptualization that may seem radical in theory but is conservative in practice. Something along these lines strikes me as necessary to justify giving substantial support to paradigmatic Global Health & Development charities in the face of competition from both Longtermist/x-risk and Animal Welfare causes.
Current Orthodoxy
I take it that Open Philanthropy's current "cause buckets" or candidate worldviews are typically conceived of as follows:
- neartermist - incl. animal welfare
- neartermist - human-only
- longtermism / x-risk
We're told that how to weigh these cause areas against each other "hinge[s] on very debatable, uncertain questions." (True enough!) But my impression is that EAs often take the relevant questions to be something like, should we be speciesist? and should we only care about present beings? Neither of which strikes me as especially uncertain (though I know others disagree).
The Problem
I worry that the "human-only neartermist" bucket lacks adequate philosophical foundations. I think Global Health & Development charities are great and worth supporting (not just for speciesist presentists), so I hope to suggest a firmer grounding. Here's a rough attempt to capture my guiding thought in one paragraph:
Insofar as the GHD bucket is really motivated by something like sticking close to common sense, "neartermism" turns out to be the wrong label for this. Neartermism may mandate prioritizing aggregate shrimp over poor people; common sense certainly does not. When the two come apart, we should give more weight to the possibility that (as-yet-unidentified) good principles support the common-sense worldview. So we should be especially cautious of completely dismissing commonsense priorities in a worldview-diversified portfolio (even as we give significant weight and support to a range of theoretically well-supported counterintuitive cause areas).
A couple of more concrete intuitions that guide my thinking here: (1) fetal anesthesia as a cause area intuitively belongs with 'animal welfare' rather than 'global health & development', even though fetuses are human. (2) It's a mistake to conceive of global health & development as purely neartermist: the strongest case for it stems from positive, reliable flow-through effects.
A Proposed Solution
I suggest that we instead conceive of (1) Animal Welfare, (2) Global Health & Development, and (3) Longtermist / x-risk causes as respectively justified by the following three "cause buckets":
- Pure suffering reduction
- Reliable global capacity growth
- High-impact long-shots
In terms of the underlying worldview differences, I think the key questions are something like:
(i) How confident should we be in our explicit expected value (EEV) estimates? How strongly should we discount highly speculative endeavors, relative to "commonsense" do-gooding?
(ii) How does the total (intrinsic + instrumental) value of improving human lives & capacities compare to the total (intrinsic) value of pure suffering reduction?
[Aside: I think it's much more reasonable to be uncertain about these (largely empirical) questions than about the (largely moral) questions that underpin the orthodox breakdown of EA worldviews.]
Hopefully it's clear how these play out: greater confidence in EEV lends itself to supporting longshots to reduce x-risk or otherwise seek to improve the long-term future in a highly targeted, deliberate way. Less confidence here may support more generic methods of global capacity-building, such as improving health and (were there any promising interventions in this area) education. Only if you're both dubious of longshots and doubt that there's all that much instrumental value to human lives do you end up seeing "pure suffering reduction" as the top priority.[1] But insofar as you're open to pure suffering reduction, there are no grounds for being speciesist about it.
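To make question (i) concrete, here's a minimal toy sketch. The numbers and the simple trust-weighting are purely hypothetical illustrations of my own, not anyone's actual cost-effectiveness estimates or method; the point is only that how heavily you discount speculative EEV estimates can flip which bucket comes out on top.

```python
# Toy illustration with made-up numbers: how much you discount speculative
# expected-value estimates determines whether longshots or reliable
# capacity-building look better.

def adjusted_value(naive_eev: float, trust: float) -> float:
    """Weight a naive explicit-expected-value estimate by how much we trust it."""
    return naive_eev * trust

longshot = 1_000_000   # naive EEV of a highly speculative longshot (arbitrary units)
reliable = 100         # naive EEV of reliable global capacity growth

# High trust in explicit estimates: the longshot dominates.
print(adjusted_value(longshot, 0.001), adjusted_value(reliable, 0.9))     # 1000 vs 90

# Much lower trust in speculative estimates: the reliable option wins.
print(adjusted_value(longshot, 0.000_01), adjusted_value(reliable, 0.9))  # 10 vs 90
```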
Implications
- Global health & development is actually philosophically defensible, and shouldn't necessarily be swamped by either x-risk reduction or animal welfare. But it requires recognizing that the case for GHD rests on a strong prior according to which positive "flow-through" effects are assumed to strongly correlate with traditional neartermist metrics like QALYs. Research into the prospects for improved tracking and prediction of potential flow-through effects should be a priority.
- In cases where the correlation transparently breaks down (e.g. elder care, end-of-life care, fetal anesthesia, dream hedonic quality, wireheading, etc.), humanitarian causes should instead have to meet the higher bar for pure suffering reduction - they shouldn't be prioritized above animal welfare out of pure speciesism.[2]
- If we can identify other broad, reliable means of boosting global capacity (maybe fertility / population growth?),[3] then these should trade off against Global Health & Development (rather than against x-risk reduction or other longshots).
[1]
It's sometimes suggested that an animal welfare focus also has the potential for positive flow-through effects, primarily through improving human values (maybe especially important if AI ends up implementing a "coherent extrapolation" of human values). I think that's an interesting idea, but it sounds much more speculative to me than the obvious sort of capacity-building you get from having an extra healthy worker in the world.
[2]
This involves some revision of ordinary moral assumptions, but I think it strikes a healthy balance: neither the unreflective dogmatism of the ineffective altruist, nor the extremism of the "redirect all GHD funding to animals" crowd.
[3]
Other possibilities may include scientific research, economic growth, improving institutional decision-making, etc. It's not clear exactly where to draw the line for what counts as "speculative" as opposed to "reliably" good, so I could see a case for a further split between "moderate" vs "extreme" speculativeness. (Pandemic preparedness seems a solid intermediate cause area, for example - far more robust, but lower EV, than AI risk work.)
Thanks for these ideas; this is an interesting perspective.
I'm a little uncertain about one of your baseline assumptions here.
"We're told that how to weigh these cause areas against each other "hinge[s] on very debatable, uncertain questions." (True enough!) But my impression is that EAs often take the relevant questions to be something like, should we be speciesist? and should we only care about present beings? Neither of which strikes me as especially uncertain (though I know others disagree)."
I think I disagree with this framing, and/or perhaps there's a bit of unintentional strawmanning here? Can you point to the EAs or EA arguments (perhaps on the forum) that weigh these worldviews against each other in explicitly speciesist terms, or that only care about present beings?
Personally I'm focused largely on GHD (while deeply respecting other worldviews) not because I'm speciesist, but because I currently think the experience of being human might be many orders of magnitude more valuable than that of any other animal (I reject hedonism), and also because, even assuming hedonism, I'm not yet convinced by Rethink Priorities' amazing research which places the moral weights of pigs, chickens, and other animals extremely close to humans. Of course you could argue I think that because I'm a biased speciesist human, and you might be right - but that's not my intention.
And I do care about both present and future human beings, and am into longtermism as a concept, but am dubious right now about our ability to predictably and positively influence the long-term future, especially in the field of AI, given EA's track record so far - with the exception of policy and advocacy work by EAs, which I think has been hugely valuable.
Others may have very different (but valid) reasons than mine for distinguishing between the importance of these worldviews, but I'm not sure that you are right when you say "EAs often take the relevant questions to be something like, should we be speciesist? and should we only care about present beings? Neither of which strikes me as especially uncertain (though I know others disagree)."
I probably agree with Open Phil that there are indeed a range of important, uncertain questions, different from the somewhat obvious ones you stated, which can swing someone between caring more about current humans, animals, or longtermism.
But even setting this issue aside, I see merit in your framework as well.
I would be interested to see whether other people think that those two not "especially uncertain" questions are what push them towards one or another worldview.
Hi Nick, I'm reacting especially to the influential post, Open Phil Should Allocate Most Neartermist Funding to Animal Welfare, which seems to me to frame the issues in the ways I describe here as "orthodox". (But fair point that many supporters of GHD would reject that framing! I'm with you on that; I'm just suggesting that we need to do a better job of elucidating an alternative framing of the crucial questions.)
Thanks, ...