
Most debate week responses so far seem to strongly favor animal welfare, on the grounds that it is (likely) vastly more cost-effective in terms of pure suffering-reduction. I see two main ways to resist prioritizing suffering-reduction:

(1) Nietzschean Perfectionism: maybe the best things in life—objective goods that only psychologically complex “persons” get to experience—are just more important than creature comforts (even to the point of discounting the significance of agony?). The agony-discounting implication seems implausibly extreme, but I’d give the view a minority seat at the table in my “moral parliament”.[1] Not enough to carry the day.

(2) Strong longtermism: since almost all expected value lies in the far future, a reasonable heuristic for maximizing EV (note: not the same thing as an account of one’s fundamental moral concerns) is to not count near-term benefits at all, and instead prioritize those actions that appear the most promising for creating sustained “flow-through” or “ripple” effects that will continue to “pay it forward”, so to speak.

Assessing potential ripple effects

Global health seems much more promising than animal welfare in this respect.[2] If you help an animal (especially if the help in question is preventing their existence), they aren’t going to pay it forward. A person might. Probably not in any especially intentional way, but I assume that an additional healthy person in a minimally functional society will have positive externalities, some of which may—like economic growth—continue to compound over time. (If there are dysfunctional societies in which this is not true, the ripple effect heuristic would no longer prioritize trying to save lives there.)
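As a toy illustration of the compounding point (the functional form and numbers here are assumptions for exposition, not estimates): suppose helping one person adds a small one-time increment $b$ to a society's productive capacity, and that capacity thereafter grows with the economy at rate $g$. Then the annual benefit at year $t$ is roughly $b(1+g)^t$, and the cumulative benefit over a horizon of $T$ years is

$$
\sum_{t=0}^{T} b\,(1+g)^{t} \;=\; b\,\frac{(1+g)^{T+1}-1}{g},
$$

which grows without bound as the horizon extends. A benefit that pays out once and then stops has no such compounding term.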

So if we ask which cause area is such that marginal funding is most likely to positively affect the future beyond our immediate lifetimes, the answer is surely global health over animal welfare.

But that may not be the right question to ask. We might instead ask which has the highest expected value, which is not necessarily the same as the highest likelihood of value, once high-impact long-shots are taken into account.

Animal welfare efforts (esp. potentially transformative ones like lab-grown meat) may turn out to have beneficial second-order effects via their effect on human conscience—reducing the risk of future dystopias in which humanity continues to cause suffering at industrial scale. I view this as unlikely: I assume that marginal animal welfare funding mostly just serves to accelerate developments that will otherwise happen a bit later. But the timing could conceivably matter to the long-term if transformative AI is both (i) near, and (ii) results in value lock-in. Given the stakes, even a low credence in this conjunction shouldn’t be dismissed.

[Image: A cow making ripples in outer space]

For global health & development funding to have a similar chance of transformative effect, I suspect it would need to be combined with talent scouting to boost the chances that a “missing genius” can reach their full potential.

That said, there’s a reasonable viewpoint on which we should not bother aiming at super-speculative transformative effects because we’re just too clueless to assess such matters with any (even wildly approximate) accuracy. On that view, longtermists should stick to more robustly reliable “ripple effects”, as we plausibly get from broadly helping people in ordinary (capacity-building) ways.

Three “worldview” perspectives worth considering

I’ve previously suggested that there are three EA “worldviews” (or broad strategic visions) worth taking into account:

(1) Pure suffering reduction.
(2) Reliable global capacity growth (i.e., long-term ripple effects).
(3) High-impact long-shots.

Animal welfare clearly wins by the lights of pure suffering reduction. Global health clearly wins by the lights of reliable global capacity growth. The most promising high-impact long-shots are probably going to be explicitly longtermist or x-risk projects, but between animal welfare and ordinary global health charities, I think there’s actually a reasonable case for judging animal welfare to have the greater potential for transformative impact on current margins (as explained above).

I’m personally very open to high-impact long-shots, especially when they align well with more immediate values (like suffering reduction), so I think there’s a strong case for prioritizing transformative animal welfare causes here—just on the off chance that it indirectly improves our values during an especially transformative period of history.[3] But there’s huge uncertainty in such judgments, so I think someone could also make a reasonable case for prioritizing reliable global capacity growth, and hence global health over animal welfare.[4]

 

  1. ^

    To help pump the perfectionist intuition: suppose that zillions of insects experience mild discomfort, on net, over their lifetimes. We’re given the option to blow up the world. It would seem incredible to allow any amount of mild discomfort to trump all complex goods and vindicate choosing the apocalypse here. (I’m not suggesting that we wholeheartedly endorse this intuition; but maybe we should give at least some non-trivial weight to a striving/life-affirming view that powerfully resists the void across a wider range of empirical contingencies than Benthamite utilitarianism allows.)

  2. ^

    I owe the basic idea to Nick Beckstead’s dissertation.

  3. ^

    But again, one could likely find even better candidate long-shots outside of both global health and animal welfare.

  4. ^

    It might even seem a bit perverse to prioritize animal welfare due to valuing the “high-impact long-shots” funding bucket, if the most promising causes in that bucket lie outside of both animal welfare and global health. If we imagine the question bracketing the “long-shots” bucket, and just inviting us to trade off between the first two, then I would really want to direct more funds into “reliable global capacity growth” over “pure suffering reduction”. So interpreting the question that way could also lead one to prioritize global health.


Comments

I think there is something to this. Besides economic growth, additional humans today could mean more humans (or beings descended from us, directly or artificially) across the far future, through their descendants.

I would be interested in further exploration of possible ripple effects of animal welfare work, too. For the most part, I expect far future indirect effects of animal welfare work to go through events that shape the distribution of values and attitudes of humans, our descendants and AIs. Some ideas:

  1. Animal welfare work affects people's values, attitudes and institutions, and engages people. There's moral circle expansion and capacity building. The capacity here is the labour, knowledge and resources of a community of people sensitive to the welfare of nonhuman animals, and often nonhuman beings more generally. Animal advocacy work grows the capacity of the animal advocacy community. Effective animal advocacy (EAA) work grows the capacity of the EAA and EA communities. Perhaps the case for far future effects is weaker here than for economic growth, though.
  2. More speculatively, the values and practices of future space colonies may disproportionately reflect the values of early space colonizers from whom they inherit their institutions, attitudes and/or genetic dispositions (which in turn influence their attitudes). Ensuring early space colonizers are more animal-friendly, by changing their attitudes or ensuring hard or soft selection in a way related to their attitudes, could be very important. For example, requiring the food of early space colonizers to be plant-based will cause those with dispositions that lead them to use animals for food to self-select out of space colonization. Those dispositions would then be less common across the far future, if space colonizers have more children on average. The Earth-bound will have a maximum population size, but colonizers may not or could have a far larger one, and may need to grow their populations above replacement long-term for successful space exploration and colonization.
  3. And, of course, potential AI value lock-in.

Upvoted for sharing an interesting framing!

Although once you start accounting for ripple effects, it becomes very suspicious if someone claims that the best way to improve the future is to work on global poverty or donate to animal welfare and they aren't proposing a specific intervention that is especially likely to ripple in a positive way.

I'd guess that basically any GHD charity that helps young people (whether saving lives from malaria or improving health and life prospects during developmentally important years) has positive ripple effects. I'd love to see more evaluation of which are especially good prospects here, but I'm not aware of any such research upon which to base such a judgment.

For animal welfare, I highlighted lab-grown meat as having the greatest potential for transformative impact IMO -- but note that I'm no expert here!

I'm not entirely convinced that either is "the best way to improve the future", but the debate week limits us to picking between those two cause areas. Given unrestricted options, I'd probably pick different long-shots; but I still think GHD is well worth supporting from the perspective of (what I call) reliable global capacity growth, alongside things like basic research and lobbying for "progress" (pro-innovation policies and institutions).

Thanks for the post, Richard.

> (2) Strong longtermism: since almost all expected value lies in the far future, a reasonable heuristic for maximizing EV (note: not the same thing as an account of one’s fundamental moral concerns) is to not count near-term benefits at all, and instead prioritize those actions that appear the most promising for creating sustained “flow-through” or “ripple” effects that will continue to “pay it forward”, so to speak.

As far as I can tell, the (posterior) counterfactual impact of interventions whose effects can be accurately measured, like ones in global health and development, decays to 0 as time goes by, and can be modelled as increasing the value of the world for a few years or decades. If strong longtermism were true, wouldn't we expect some interventions to have a constant or increasing effect over time?

Economic growth and population size both seem to have persisting effects. If you limit attention to just what can be "accurately measured" (by some narrow conception that rules out the above), your final judgment will be badly distorted by measurability bias.
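To make the contrast concrete, here is a toy model (purely illustrative; the functional forms are assumptions, not anyone's considered estimate). If an intervention's counterfactual effect decays exponentially, $v(t) = v_0 e^{-\lambda t}$, its total impact is bounded:

$$
\int_0^\infty v_0\,e^{-\lambda t}\,dt \;=\; \frac{v_0}{\lambda}.
$$

But if even a small component $v_1$ of the effect persists (say, via a lasting shift in growth or population), the total over a horizon $T$ includes a term $v_1 T$ that grows without bound. The live question is whether any such non-decaying component exists, not whether the measurable part decays.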

This reminded me of this older post: https://forum.effectivealtruism.org/posts/omoZDu8ScNbot6kXS/beware-surprising-and-suspicious-convergence

I feel like, while ripple effects from health/animal welfare interventions are certainly something to consider, I wouldn't base too much of my decision on them, because there are likely more effective methods to achieve those impacts. For example, if the case for health is reducing suffering plus ripple effects on economic/technological growth, I would suspect that doing animal interventions (for suffering) and tech/growth interventions (for tech/growth) would do a better job at achieving both outcomes than making a single intervention that you hope will solve both.
