[ Question ]

Not getting carried away with reducing extinction risk?

by jackmalde, 1st Jun 2019

I get the sense that some in the EA community would focus solely on reducing extinction risk if they could have it their way. But is there a danger that, with such an extreme focus on reducing extinction risk, we end up successfully prolonging a world that may not even be desirable?

It seems at least slightly plausible that the immense suffering of wild animals could mean that the sum of utilities in the world is negative (please let me know if you find this to be a ludicrous claim).

If this is true, and if hypothetically things were to stay this way, it may not be the case that reducing extinction risk is doing the most good, even under a 'total utilitarian' population axiology.

Whilst I would like to see us flourish into the far future, I think we may have to focus on the 'flourish' part as well as the 'far future' part. It seems to me that reducing extinction risk may only be a worthwhile endeavour if it is done alongside other things such as eradicating wild animal suffering.

What do you think? Can focusing solely on extinction risk do the most good, or does it need to be done in tandem with other things that actually make the world worth prolonging?

5 Answers

If humanity wipes itself out, those wild animals are going to continue suffering forever.

If we only partially destroy civilization, we're going to set back the solution to problems like wild animal suffering until we rebuild civilization, if we ever do. (And in the meantime, we will suffer as our ancestors suffered.)

If we nuke the entire planet down to bedrock or turn the universe into paperclips, that might be a better scenario than the first one in terms of suffering, but then all of the anthropic measure is confined to the past, where it suffers, and we're foregoing the creation of an immeasurably larger measure of extremely positive experiences to balance things out.

On the other hand, if we just manage to pass through the imminent bottleneck of potential destruction and emerge victorious on the other side—where we have solved coordination and AI—we will have the capacity to solve problems like wild animal suffering, global poverty, or climate change with a snap of our fingers, so to speak.

That is to say, problems like wild animal suffering will either be solved with trivial effort a few decades from now, or we will have much, much bigger problems. Either way (and this is my personal view, not necessarily that of other "long-termists"), current work on these issues will be mostly in vain.

Since most of the responders here are defending x-risk reduction, I wanted to chime in and say that I think your argument is far from ludicrous and is in fact why I don't prioritize x-risk reduction, even as a total utilitarian.

The main reason it's difficult for me to get on board with pro-x-risk-reduction arguments is that much of the case seems to rely on projections about what might happen in the future, which seem very prone to missing important considerations. For example, saying that wild animal suffering will be trivially easy to solve once we have an aligned AI, or saying that the future is more likely to be optimized for value rather than disvalue, both seem overconfident and speculative (even if you can give some plausible-sounding arguments).

If I were more comfortable with projections about what will happen in the far future, I'm still not sure I would end up favoring x-risk reduction. Take AI x-risk: it's possible that we end up with a truly aligned AI or with a paperclip maximizer, but it's also possible that we end up with a powerful general AI whose values are not as badly misaligned as a paperclip maximizer's and instead depend in some way on the values of its creators. In this scenario, it seems crucially important to speed up the improvement of humanity's values.

I agree with Moses in that I much prefer a scenario in which everything in our light cone is turned into paperclips to one in which, say, humans are wiped out by some deadly pathogen but other life continues to exist here and elsewhere in the universe. This doesn't necessarily mean that I favor biorisk reduction over AI risk reduction, since AI risk reduction also has the favorable effect of making a remarkably good outcome (aligned AI) more likely. I don't know which one I'd favor more, all things considered.

If one doesn't have strong time discounting in favor of the present, the vast majority of the value that can be theoretically realized exists in the far future.

As a toy model, suppose the world is habitable for a billion years, but there is an extinction risk in 100 years which requires substantial effort to avert.

If resources are dedicated entirely to mitigating extinction risks, there is net -1 utility each year for 100 years but a 90% chance that the world can be at +5 utility every year afterwards once these resources are freed up for direct work. (In the extinction case, there is no more utility to be had by anyone.)

If resources are split between extinction risk and improving current subjective experience, there is net +2 utility each year for 100 years, and a 50% chance that the world survives to the positive long-term future state above. It's not hard to see that the former case has massively higher expected total utility, and this remains true under almost any numbers in the model, so long as we can expect billions of years of potential future good.
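To make the comparison concrete, here is the expected total utility of each strategy under the numbers above (a rough calculation: the labels "all-in" and "split" are just my shorthand, I ignore discounting, and I treat the habitable period as exactly one billion years):

$$
\begin{aligned}
\mathbb{E}[U_{\text{all-in}}] &= 100 \times (-1) + 0.9 \times (10^9 - 100) \times 5 \approx 4.5 \times 10^9 \\
\mathbb{E}[U_{\text{split}}] &= 100 \times (+2) + 0.5 \times (10^9 - 100) \times 5 \approx 2.5 \times 10^9
\end{aligned}
$$

That's roughly 4.5 billion versus 2.5 billion units of expected utility, and the gap only widens as the assumed habitable period grows.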

A model like this relies crucially on the idea that at some point we can stop diverting resources to global catastrophic risk, or at least do so less intensively, but I think this is an accurate assumption. We currently live in an unusually risk-prone world; it seems very plausible that pandemic risk, nuclear warfare, catastrophic climate change, unfriendly AGI, etc. are all safely dealt with in a few centuries if modern civilization endures long enough to keep working on them.

One's priorities can change over time as their marginal value shifts; ignoring other considerations for the moment doesn't preclude focusing on them once we've passed various x-risk hurdles.


Hey Jack, I think this is a great question and I dedicate a portion of my MA philosophy thesis to this. Here are some general points:

  • It is likely that the expected moral value of the future is dominated by futures in which there is optimization for moral (dis)value. Since we would expect optimization for value to be much more likely than optimization for disvalue, the expected moral value of the future seems positive (unless you adhere to strict/lexical negative utilitarianism). This claim depends on there being a large difference in value between possible worlds that are optimized for (dis)value and worlds that are subject to other pressures, like competition, and this difference is not obviously large (see the 'Price of Anarchy').
  • There seems to be substantial convergence between improving the quality of the long-term future and reducing extinction risk. Things that could bring humanity to extinction (superintelligence, a virus, nuclear winter, extreme climate change) can also be very bad for the long-term future of humanity if they do not lead to extinction, so reducing extinction risk also has very positive effects on the quality of the long-term future. Potential suffering risks from misaligned AI are one example. In addition, I think global catastrophes, if they don't lead to extinction, create a negative trajectory change in expectation. This could be because civilizational collapse puts us on a worse trajectory, but the most likely outcome is what I call general global disruption: civilization doesn't quite collapse, but things are shaken up a lot. From my thesis:
Should we expect global disruption to be (in expectation) good or bad for the value of the future? This is speculative, but Beckstead lays out some reasons to expect that global disruption would put humanity on a worse trajectory: it may reverse social progress, limit the ability to adequately regulate the development of dangerous technologies, open an opportunity for authoritarian regimes to take hold, or increase inter-state conflict (Beckstead, 2015). We can also approach the issue abstractly: disruption can be seen as injecting more noise into a previously more stable global system, increasing the probability that the world settles into a different semi-stable configuration. If there are many more undesirable configurations of the world than desirable ones, increasing randomness is more likely to lead to an undesirable state of the world. I am convinced that, unless we are currently in a particularly bad state of the world, global disruption would have a very negative effect (in expectation) on the value of the long-term future.
  • I find it unlikely that we would export wild-animal suffering beyond our solar system. It takes a lot of time to move to different solar systems, and I don't think future civilizations will require a lot of wilderness: it's a very inefficient use of resources. So I believe the amount of suffering is relatively small from that source. However, I think some competitive dynamics between digital beings could create astronomical amounts of suffering, and this could come about if we focus only on reducing extinction risk.
  • Whether you want to focus on the quality of the future also depends on your moral views. Some people weigh preventing future suffering much more heavily than enabling the creation of future happiness. For them, part of the value of reducing extinction risk is taken away, and they will have stronger reasons to focus on the quality of the future.
  • I found the post by Brauner & Grosse-Holz and the post by Beckstead most helpful. I know that Haydn Belfield (CSER) is currently working on a longer article about the long-term significance of reducing Global Catastrophic Risks.

In conclusion, I think reducing extinction risk is very positive in terms of expected value, even if one expects the future to be negative! However, depending on different parameters, there might be better options than focusing on extinction risk. Candidates include particular parts of moral circle expansion and reducing suffering risks from AI.

I can send you the current draft of my thesis in case you're interested, and will post it online once I have finished it.