Why I Still Think AW >> GH At The Margin
Last year, I argued that Open Phil (OP) should allocate a majority of its neartermist resources to animal welfare (AW) rather than global health (GH).
Most of the critical comments still agreed that AW > GH at the margin:
- Though Carl Shulman was unmoved by Rethink Priorities' Moral Weights Project, he's still "a fan of animal welfare work relative to GHW's other grants at the margin because animal welfare work is so highly neglected".
- Though Hamish McDoodles thinks neuron count ratios are a better proxy for moral weight than Rethink's method, he agrees that even if neuron counts are used, "animal charities still come out an order of magnitude ahead of human charities".
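To make the shape of Hamish's neuron-count comparison concrete, here is a toy back-of-envelope sketch. Every number below is an illustrative placeholder I've chosen myself, not a figure from Hamish, Rethink Priorities, or GiveWell; the point is only to show how a neuron-count-weighted cost-effectiveness comparison is computed.

```python
# Toy neuron-count-weighted comparison of a GH and an AW charity.
# ALL inputs are illustrative placeholders, not anyone's actual estimates.

HUMAN_NEURONS = 86e9     # ~86 billion neurons in a human brain
CHICKEN_NEURONS = 220e6  # ~220 million neurons in a chicken brain

# Moral weight of a chicken relative to a human, if weight scales with neurons.
chicken_weight = CHICKEN_NEURONS / HUMAN_NEURONS  # ~0.0026

# Hypothetical cost-effectiveness inputs (placeholders):
cost_per_human_daly = 100.0        # $ per human DALY-equivalent (GH charity)
chicken_years_per_dollar = 80.0    # chicken-years improved per $ (AW charity)
welfare_gain_per_chicken_year = 0.5  # fraction of a full welfare-range year

# Human DALY-equivalents per dollar for each side:
gh_value = 1.0 / cost_per_human_daly
aw_value = chicken_years_per_dollar * welfare_gain_per_chicken_year * chicken_weight

print(f"GH: {gh_value:.5f} human-equivalents per $")
print(f"AW: {aw_value:.5f} human-equivalents per $")
print(f"AW/GH ratio: {aw_value / gh_value:.1f}x")
```

With these particular placeholder inputs, AW comes out roughly an order of magnitude ahead even after the heavy neuron-count discount, which is the structure of Hamish's claim; different inputs can of course move the ratio in either direction.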
I really appreciate OP for their engagement, which gave some helpful transparency about where they disagree. Like James Özden, I think it's plausible that even OP's non-animal-friendly internal estimates still imply AW > GH at the margin. (One reason to think this is that OP wrote that "our current estimates of the gap between marginal animal and human funding opportunities is…within one order of magnitude, not three", when they could have written "GH looks better within one order of magnitude".)
Even if that understanding is incorrect, given that OP agrees that "one order of magnitude is well within the 'margin of error'", I still struggle to understand the rationale behind OP funding GH 6x as much as AW. Though I appreciate OP explaining how their internal estimates differ, the details of why their estimates differ remain unknown. If GH is truly better than AW at the margin, I would like nothing more than to be persuaded of that. While I endeavor to keep an open mind, it's difficult for me and many community members to update without knowing OP's answers to the headline questions:
- How much weight does OP's theory of welfare place on pleasure and pain, as opposed to nonhedonic goods?
- Precisely how much more does OP value one unit of a human's welfare than one unit of another animal's welfare, just because the former is a human? How does OP derive this tradeoff?
- How would OP's views have to change for OP to prioritize animal welfare in neartermism?
OP has no obligation to answer these (or any) questions, but I continue to think that a transparent discussion about this between OP and community leaders/members would be deeply valuable. This Debate Week, the EA Leaders Forum, 80k's updated rankings, and the Community Survey have made it clear that there's a large gap between the community consensus on GH/AW allocation and OP's. This is a question of enormous importance for millions of people and trillions of animals. Anything we can do to get this right would be incredibly valuable.
Responses to Objections Not Discussed In Last Year's Post
Could GH > AW When Optimizing For Reliable Ripple Effects?
Richard Chappell has argued that while "animal welfare clearly wins by the lights of pure suffering reduction", GH could be competitive with AW when optimizing for reliable ripple effects like long-term human population growth or economic growth.
AW Is Plausibly More Robustly Good Than GH's Ripple Effects
I don't think it's obvious that human population growth or economic growth are robustly good. Historically, these trends have had even larger effects on farmed and wild animal populations:
- Human-caused climate change and land use have contributed to an average 69% decline in monitored wildlife populations since 1970.
- The number of farmed fish has increased by nearly 10x since 1990.
- Brian Tomasik has estimated that each dollar donated to AMF prevents 10,000 invertebrate life-years by reducing invertebrate populations.
Trying to account for all of these AW effects makes me feel rather clueless about the long-term ripple effects of GH interventions. In contrast, AW interventions such as humane slaughter seem more likely to me to be robustly good. While humane slaughter may slightly reduce demand for meat due to increased meat prices, it is unlikely to affect farmed or wild animal populations nearly as much as economic growth or human population growth would.
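The cluelessness worry here can be made concrete with a toy expected-value tally. All magnitudes below are invented for illustration; the point is only that once induced animal effects are included, the sign of a GH intervention's long-run value can flip with small changes in a contested input like animals' moral weight.

```python
# Toy illustration of sign uncertainty from animal-side ripple effects.
# ALL numbers are invented for illustration, not actual estimates.

def long_run_value(human_benefit, animal_effect_per_human, animal_weight):
    """Net value = direct human benefit + induced effect on animals."""
    return human_benefit + animal_effect_per_human * animal_weight

human_benefit = 1.0             # direct human welfare gain (normalized)
animal_effect_per_human = -500  # induced animal welfare change per human helped
                                # (e.g. more farming); sign and size are deeply
                                # uncertain, negative here for illustration

for animal_weight in (0.0, 0.001, 0.01):
    v = long_run_value(human_benefit, animal_effect_per_human, animal_weight)
    print(f"animal moral weight {animal_weight}: net value {v:+.2f}")
```

In this sketch the intervention looks clearly positive at zero animal weight and clearly negative at a weight of 0.01, which is why the overall sign feels unknowable to me rather than merely imprecise.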
Implications of Optimizing for Reliable Ripple Effects in GH
Vasco Grilo points out that longtermist interventions like global priorities research and improving institutional decisionmaking seem to be better for reliable long-term ripple effects than GiveWell Top Charities. It would be surprising if the results of GiveWell's process, which optimizes for the cheapest immediate QALYs/lives saved/income doublings, would also have the best long-term ripple effects.
Rohin Shah suggests further implications of optimizing for reliable ripple effects:
- Given an inability to help everyone, you'd want to target interventions based on people's future ability to contribute. (E.g. you should probably stop any interventions that target people in extreme poverty.)
- You'd either want to stop focusing on infant mortality, or start interventions to increase fertility. (Depending on whether population growth is a priority.)
- You'd want to invest more in education than would be suggested by typical metrics like QALYs or income doublings.
I think it's plausible that some AW causes, such as moral circle expansion, could also rank high on the rubric of reliable ripple effects.
In summary, it seems that people sympathetic to Richard's argument should still be advocating for a radical rethinking of almost all large funders' GH portfolios.
What if I'm a Longtermist?
Some of my fellow longtermists have been framing this discussion by debating which of GH or AW is best for the long-term future. Framed this way, the debate collapses into a comparison between longtermist interventions that could be characterized as GH and those that could be characterized as AW:
- "GH": Global priorities research and improving institutional decisionmaking
- "AW": Moral circle expansion and digital mind research
This doesn't seem like a useful discussion if the debate participants would all privately prefer that the $100M simply be allocated to unrestricted longtermism.
Instead, I think we would all learn more from the debate if it were instead framed within the context of neartermism. Like OP and the Navigation Fund, I think there are lots of reasons to allocate some of our resources to neartermism, including worldview diversification, cluelessness, moral parliament, risk aversion, and more. If you agree, then I think it would make more sense to frame this debate within neartermism, because that's likely what determines each of our personal splits between our GH and AW donations.
I can't speak for OP, but I thought the whole point of its "worldview diversification buckets" was to discourage this sort of comparison: it acknowledges the size of the error bars around such comparisons, and that prioritisation decisions between buckets are fundamentally driven more by differing worldviews than by the possibility of acquiring better data or making more accurate predictions about outcomes. This could be interpreted as an argument against the theme of the week, not just this post :-)
But I don't think neuron counts are by any means the most unfavourable [reasonable] comparison for animal welfare causes: the heuristic that we have a decent understanding of human suffering and gratification, whereas whether a particular intervention has a positive, negative, or neutral impact on the welfare of a fish is guesswork, seems very reasonable and very unfavourable to many animal-related causes (even granting that fish have significant welfare ranges and that hedonic utilitarianism is the appropriate method for moral resource allocation). And of course there are non-utilitarian moral arguments in favour of one group of philanthropic causes or another (prioritise helping fellow moral beings vs prioritise stopping fellow moral beings from actively causing harm), which feel a little less fuzzy but aren't any less contentious.
There are also, of course, error bars wrapped around individual causes within the buckets, which is part of the reason why GHW funds both GiveWell-recommended charities and neartermist policy work that might affect more organism life-years per dollar than Legal Impact for Chickens (but might actually be more likely to be counterproductive or ineffectual)[1]. That's another reason why I think blanket comparisons are unhelpful.

A related issue is that it's much more difficult to estimate the marginal impact of research and policy work than of dispensing medicine or nets. The marginal impact of $100k more of nets is easy to predict; the marginal impact of $100k more for a lobbying organization is not, even if you entirely agree with the moral weight it applies to its cause. And average cost-effectiveness is not always a reliable guide to scaling up funding, particularly for small, scrappy organizations that do an admirable job of prioritising quick wins but are also likely to face increased opposition if they scale.[2] Some organizations fitting that description sit in the GHW category, but it's much more representative of the typical EA-incubated AW cause. Some of them will run into diminishing returns as they run out of companies actually willing to engage with their welfare initiatives, others may become locked in positional stalemates, and some are much more capable of absorbing significant extra funding and putting it to good use than others. Past performance really doesn't guarantee future returns to scale, and some types of organization are much more capable of achieving them than others; that happens to include many of the classic GiveWell-type GHW charities, and not many of the AW or speculative "ripple effect" GHW charities.[3]
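The gap between average and marginal cost-effectiveness under diminishing returns can be sketched with a toy model (the concave impact curve and all numbers are invented for illustration):

```python
# Toy diminishing-returns model: average cost-effectiveness overstates the
# marginal value of extra funding. All numbers are invented for illustration.
import math

def impact(funding):
    """Concave impact curve: strong early wins, flattening with scale."""
    return 100 * math.log1p(funding)

budget = 10.0
average = impact(budget) / budget               # impact per $ spent so far
marginal = impact(budget + 1) - impact(budget)  # impact of the next $

print(f"average: {average:.1f} per $")
print(f"marginal: {marginal:.1f} per $")
```

Here the historical average looks several times better than what the next dollar actually buys, which is the trap in extrapolating a scrappy organization's track record to a much larger grant.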
I guess there are sound reasons why people could conclude that the AW causes funded by OP are universally more effective than the GHW ones or vice versa, but those conclusions appear to come more from strong philosophical positions (meat-eater problems, or disagreement with the moral relevance of animals) than from evidence and measurement.
For the avoidance of doubt, I'm acknowledging that there's probably more evidence about the negative welfare impacts of the practices Legal Impact for Chickens is targeting, and about its theory of change, than about the positive welfare impacts and efficacy of some reforms promoted in the GHW bucket, even given my much greater certainty about the magnitude of human welfare. And by extension I'm pointing out that comparisons between individual AW and GHW charities sometimes run the opposite way from the characteristic "AW helps more organisms but with more uncertainty" pattern.
There are much more likely to be well-funded campaigns to negate the impact of an organization targeting factory farming than campaigns to negate the impact of anti-malaria work. On the other hand, animal cruelty doesn't have as many proponents as the other side of virtually any economic or institutional reform debate.
There are diminishing returns to healthcare too: malaria nets' cost-effectiveness is broadly proportional to malaria prevalence. But that's rather more predictable than the returns to scale of anti-cruelty lobbying, which aren't even necessarily positive beyond a certain point if the well-funded meat lobby gets worried enough.
My understanding from a conversation with SWP is that for shrimp, electric stunning also just kills the shrimp, and it's all over very quickly.
It might be different for fish.