We are discussing the debate statement: "On the margin[1], it is better to work on reducing the chance of our[2] extinction than increasing the value of futures where we survive[3]". You can find more information in this post.
When you vote and comment on the debate week banner, your comment will also appear here, along with a note indicating your initial vote, and your most recent vote (if your opinion has changed).
However, you can also comment here any time throughout the week. Use this thread to respond to other people's arguments, and develop your own.
If there are a lot of comments, consider sorting by “New” and interacting with comments that haven’t been voted or commented on yet.
Also, perhaps don’t vote a submission’s karma below zero just because it is low effort; we don’t want to discourage low-effort takes on the banner.
1. ‘On the margin’ = think about where we would get the most value out of directing the next indifferent talented person or indifferent funder.
2. ‘Our’ and ‘we’ = Earth-originating intelligent life (i.e. we aren’t just talking about humans, because most of the value in expected futures is probably in worlds where digital minds matter morally and are flourishing).
3. Through means other than extinction risk reduction.
The salient question for me is how much reducing extinction risk changes the long-run experience of moral patients. One argument holds that meaningfully reducing risk would require substantial coordination, and that such coordination is likely to result in better worlds. I think it is at least as likely that reducing extinction risk results in worlds where most moral patients are used as means, without regard to their suffering.
I think an AI aligned roughly to the output of all current human coordination would be net-negative. I would shift toward thinking that addressing extinction risk is more important if factory farming stopped, humanity were taking serious steps to address wild animal suffering, all Sustainable Development Goals were met within 5 years of the initial timeline, and global inequality were reduced to something like a Gini coefficient below 0.25.