We are discussing the debate statement: "On the margin[1], it is better to work on reducing the chance of our[2] extinction than increasing the value of futures where we survive[3]". You can find more information in this post.
When you vote and comment on the debate week banner, your comment will also appear here, along with a note indicating your initial vote, and your most recent vote (if your opinion has changed).
However, you can also comment here any time throughout the week. Use this thread to respond to other people's arguments, and develop your own.
If there are a lot of comments - consider sorting by “New” and interacting with posts that haven’t been voted or commented on yet.
Also - perhaps don’t downvote low-effort submissions below zero karma; we don’t want to discourage people from sharing quick takes on the banner.
1. ^ ‘on the margin’ = think about where we would get the most value out of directing the next indifferent talented person, or indifferent funder.
2. ^ ‘our’ and ‘we’ = earth-originating intelligent life (i.e. we aren’t just talking about humans, because most of the value in expected futures is probably in worlds where digital minds matter morally and are flourishing).
3. ^ Through means other than extinction risk reduction.
This is a difficult one, and both my thoughts and my justifications (especially the few sources I cite) are very incomplete.
It seems to me, for now, that existential risk reduction is likely to be net negative: both human-controlled and AI-controlled futures could contain orders of magnitude more suffering than the current world, and technological developments could also enable more intense suffering, whether in humans or in digital minds. The most salient ethical problems with the extinction of earth-originating intelligent life seem to be (a) the likelihood that biological suffering would continue on earth for millions of years (though it's not clear to me whether it would be more or less intense without intelligent life on earth), and (b) the possibility of space (and eventually earth) being colonized by aliens (though whether their values would be better or worse than ours remains an open question in my view).
Another point (which I'm unsure how to weigh in my considerations) is that certain extinction events could massively reduce suffering on earth, by preventing digital sentience or even by ending biological sentient life (the latter seems unlikely, and I've asked here how likely or unlikely EAs think it is).
However, I am very uncertain about the tractability of improving future outcomes, especially in light of recent posts by researchers at the Center on Long-Term Risk, and this one by a former researcher there, highlighting how uncertain it is that we are well-placed to improve the future. Nonetheless, I think that efforts to improve the future, like the work of the Center for Reducing Suffering, the Center on Long-Term Risk, or the Sentience Institute, advocate for important values and could have some positive flow-through effects in the medium term (though I don't necessarily think this robustly improves the longer-term future). I will note, however, that I am biased, since work related to the Center for Reducing Suffering was the primary reason I got into EA.
I am very open to changing my mind on this, but for now I'm under 50% agree, for the reasons outlined above.
Lots of uncertainties. I expect to have moved my cursor before the end of the week!