
I've been thinking recently about what expanding my moral circle would mean concretely, in terms of the actions it would logically force me to take, and I've really run up against a wall in trying to be anything but a speciesist from a utilitarian perspective.

I assume a long-termist perspective, based largely on the arguments touched upon in this video by Hilary Greaves and the papers she cites. Broadly, it is really, really hard to predict the consequences of our actions over the short term, and especially over the long term. So if we care about the long-term future, then we must focus our efforts on interventions whose effects on the further future are more predictable, like those reducing existential risks.

I assume:

  1. this basic long-termist argument for mitigating existential risks
  2. the expansion of my moral circle to include at least certain animals (e.g. monkeys or domesticated animals)
  3. that the highest risks of human extinction are anthropogenic

Thinking about the highest risks of human extinction based on Toby Ord's estimates in The Precipice (his chances of existential catastrophe within the next 100 years):

  • AI - 1/10
  • engineered pandemics - 1/30
  • climate change/nuclear war - 1/1000

Many outcomes of AI, nuclear war, and climate change seem very likely to also pose extinction risks for most animals, especially those which tend to be given priority in the expansion of one's moral circle (e.g. monkeys and domesticated animals).

I believe there is more uncertainty for engineered pandemics.  Only 61% of all human diseases are zoonotic in origin, though 75% of new diseases discovered in the last decade are zoonotic.  It seems unlikely that even an engineered pandemic (unless it was specifically designed to destroy all life on Earth) would affect all animals.  So maybe the risk to animals from engineered pandemics is more like 1/100, 1/1000, or even less.

Even after discounting the pandemic risk to animals in this way, anthropogenic animal extinction risks still likely dwarf non-anthropogenic animal extinction risks.
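To make that "dwarf" claim concrete, here is a minimal back-of-envelope sketch in Python. The anthropogenic figures are the per-century estimates discussed above (with the pandemic risk to animals discounted to 1/1000, as suggested); the non-anthropogenic figure of 1/10,000 per century is purely an assumed placeholder for illustration, not a number from the post or from Ord.

```python
# Rough per-century extinction risks to animals, using the figures above.
# Summing small probabilities is used here as a crude approximation.
anthropogenic_risks_to_animals = {
    "AI": 1 / 10,
    "engineered pandemics (discounted for animals)": 1 / 1000,
    "climate change / nuclear war": 1 / 1000,
}

# ASSUMPTION: placeholder natural (non-anthropogenic) risk of a comparable
# animal mass extinction per century, chosen only for illustration.
non_anthropogenic_risk = 1 / 10_000

total_anthropogenic = sum(anthropogenic_risks_to_animals.values())
print(f"anthropogenic total: ~{total_anthropogenic:.3f} per century")
print(f"non-anthropogenic:   ~{non_anthropogenic_risk:.4f} per century")
print(f"ratio:               ~{total_anthropogenic / non_anthropogenic_risk:.0f}x")
```

Under these assumptions the anthropogenic total is dominated by the AI term and comes out roughly three orders of magnitude above the assumed natural baseline; the exact ratio matters far less than the fact that the gap survives even aggressive discounting of the individual terms.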

So, assuming the expansion of my moral circle to include at least certain animals, the source of the dominant extinction risks to those animals is clearly us, humans. It seems to follow that getting rid of humans would be the best thing to do over the long term.

But I, like the Avengers in their fight against Thanos, do not believe that getting rid of humans, those most likely to cause extinction (or suffering, in the Avengers' case), is the answer. Thus, I am clearly overvaluing humans to a very, very large degree over the long term. Why am I wrong?

Some arguments I thought of to counter this conclusion, but which didn't seem very strong to me:

A. The relative value of a human compared to an animal is so high that keeping humanity around is worth it, even at the cost of 4 billion more years of animal life without humanity. I think the quantities of life here are difficult to conceptualize, but it seems unlikely, from a utilitarian perspective that puts human lives and animal lives in the same moral circle, that humanity's existence is worth it compared to the billions of years of animal life that would continue without us.

B. The opportunity cost is worth it to have humanity try to protect life on Earth from natural existential risks and extend animal life past whatever natural risks Earth may encounter. This doesn't seem reasonable based on the current order-of-magnitude differences between anthropogenic and non-anthropogenic extinction risks.

Additionally, all of Ord's largest existential risks emerged in the past couple hundred years; nukes, engineered pandemics, and AI in the last hundred. From this historical evidence, it seems likely that continued human existence will result in greater, not less, animal existential risk.

C. The opportunity cost is worth it to have humanity try to extend animal life past the 4-billion-year mark. This argument seems stronger because it creates infinite potential for future animal life. But there is a pretty big 'if' in whether we (humans) will make it that far and solve the uninhabitable-Earth problem. Is this risk worth billions of years of animal life? I don't think so.

Additionally, if we include potential non-Earth animal life in the mix, then our solving the uninhabitable-Earth problem would likely extend those same anthropogenic risks to all of that non-Earth animal life.

D. We expand our moral circle to include animals that are more likely to survive anthropogenic existential risks, and weight their lives highly enough over 4 billion years to counterbalance all of the animal life unlikely to survive those risks. This argument can be combined with the prior three to strengthen them. But (as with argument C) I still think that most animal life continuing for up to 4 billion years beats very limited animal life for 4 billion years plus the slim potential of infinite human and animal life.

E. This argument isn't practical because humans cannot realistically be wiped out. I agree, but I don't think that the impracticality of the thought experiment invalidates the merits of the argument.

Comments

If you take moral uncertainty even slightly seriously, you should probably avoid doing things which would be horrifically evil according to a whole load of worldviews you don't subscribe to, even if according to your preferred worldview it would be fine.

Strongly agree with alexrjl here. 

And even if you assume consequentialism to be true and set moral uncertainty aside, I believe this is the sort of thing where the empirical uncertainty is so deep, and the potential for profound harm so great, that we should seriously err on the side of not doing things that intuitively seem terribly wrong, since commonsense morality is a decent (if not perfect) starting point for determining the net consequences of actions.  Not sure I'm making this point very clearly, but the general reasoning is discussed in this essay: Ethical Injunctions.

More generally I would say that – with all due respect to OP – this is an example of a risk associated with longtermist reasoning, whereby terrible things can seem alluring when astronomical stakes are involved. I think we, as a community, should be extremely careful about that.

(for what it's worth, I don't actually think utilitarianism leads to the conclusions in the post, but I think other commenters have discussed this, and I think the general point in my first comment is more important)

You assume here that other wild animals have net-positive lives. It is also possible from a utilitarian viewpoint that their lives are net-negative, or that their lives are neutral since they lack consciousness. I don't think there is any way, even in principle, of knowing which is true. I do feel comfortable saying, however, that humans are more intrinsically valuable than other animals, and also have a higher potential to live a good life.

It is definitely possible to reach the utilitarian conclusion that the extinction of humanity would be good because of our impact on other animals, but I don't think a utilitarian has to reach that conclusion. I think it is one of several issues where a utilitarian has to make some arbitrary choice of auxiliary hypothesis if they want to get a clear utilitarian answer.

Personally I am not a utilitarian. I value animals to some extent: I would love to see an end to factory farming, and in the future want us to spend some resources helping wild animals (in ways that seem unlikely to do harm whether you think their lives are net-positive or net-negative). But I am not willing to value them above humanity.

I think most utilitarians don't care about the extinction of some species per se, but instead care more about something like how it affects the total amount of good and bad experiences that are had. From that perspective, a few billion years of continued existence of animals on Earth is probably way less exciting, given that there is one species, humanity, that is probably headed to become or give rise to a space-faring civilization with the potential to vastly exceed any utopian imaginings. Additionally, given that animals in nature probably live far less enjoyable lives than most people imagine, I personally don't feel good about the idea of dragging out the current state of nature for longer than necessary.

Some additional arguments:

  • This one, arguing that humanity's long-term future will be good
  • These, arguing that we should be nice/cooperative with others' value systems
  • In practice, violence is often extremely counterproductive; it often fails and then brings about massive cultural/political/military backlash against whatever the perpetrator of violence stood for.

Thank you @alexrjl - I really appreciate the succinctness of your post.

To @jtm: yeah, I agree with the first part, and towards the second, that's what I was trying to get at in the post, i.e. there are significant risks in taking things too far or thinking too big-picture without appropriate grounding.

@rogerackroyd - I like your point about utilitarians not having to reach the conclusion I came to above; it's similar to @alexrjl's point, except within long-termist utilitarianism itself, and that small likelihood carries significant weight, which gets at what @jtm was mentioning.

Thanks to the others who commented as well!

"The argument against" is that a Thanos-ing all humanity would not save the lives of other sentient beings, it would just allow those lives to continue being, much too often, miserable: human animals are currently the only chance for all animals to escape the grips of excessive suffering. The problem here, "somethoughts", is that you, like countless of us, value life so much more than the alleviation of suffering that you pose horribly absurd problems, and with such an unexamined value in the background lurks a nihilism that represents, to be frank, an existential risk. 

Parasitic wasps may be the most diverse group of animals (beating out beetles). In some environments, a shocking fraction of prey insects are parasitized.

If you value 'life' you should probably keep humans around so we can spread life beyond Earth. The expected amount of life in the galaxy seems much higher if humans stick around. Imo the other logical position is 'blow up the sun': don't just take out the humans, take out the wasps too. The Earth is full of really horrible suffering, and if the humans die out then wasp parasitism will probably go on for hundreds of millions of additional years.

Of course, humans literally spread parasitic wasps as a form of 'natural' pest control, so maybe the life spread by humans will be unusually terrible? I suppose one could hold that 'life on Earth is net-good, but life specifically spread by humans will be net-bad'. It is worth noting that humans might create huge amounts of digital life; Robin Hanson's 'Age of Em' makes me wonder about their quality of life.

Just killing all humans but leaving the rest of the biosphere intact seems like it's 'threading the needle'. Maybe you can clarify more what you are valuing specifically.

Of course, don't do anything crazy. Give the absurdity heuristic a little respect.

I wrote some counter-arguments, why we could prefer human lives from an impartial (antispeciesist) perspective: https://stijnbruers.wordpress.com/2020/02/25/arguments-for-an-impartial-preference-for-human-lives/

Comments about moral uncertainty and wild animal suffering are valid, but I think kind of unnecessary. I don't think the argument works at all in its current form.

I think the argument is something like this:

  1. Human existence is bad for animals because humans create a much greater probability of complete animal extinction (via anthropogenic extinction risk)
  2. Animal life is more important than human life on net (because there's more of them, and perhaps because non-human animals don't pose significant extinction risk)
  3. Therefore humans should destroy themselves.

If so, the inference is invalid. At most, the premises show that the world would be better on net if humans suddenly stopped existing. But there is something quite absurd about trying to protect animals from the risks of anthropogenic extinction... via anthropogenic extinction. The more obvious thing to do would be to reduce the risks of anthropogenic extinction.

So for the argument to work, you need to believe that it's not possible to significantly reduce anthropogenic risk (implausible, I think), but that it is possible to engineer a human extinction event that is, in expectation, much less risky to animal life than an accidental human extinction event. Engineering such an extinction might well be possible, but since you only get one shot, you would surely need an implausibly high level of confidence.
