We are discussing the debate statement: "On the margin[1], it is better to work on reducing the chance of our[2] extinction than increasing the value of futures where we survive[3]". You can find more information in this post.
When you vote and comment on the debate week banner, your comment will also appear here, along with a note indicating your initial vote, and your most recent vote (if your opinion has changed).
However, you can also comment here any time throughout the week. Use this thread to respond to other people's arguments, and develop your own.
If there are a lot of comments, consider sorting by "New" and interacting with comments that haven't been voted or commented on yet.
Also, perhaps don't vote low-effort submissions below zero karma; we don't want to discourage quick takes on the banner.
1. ^ 'on the margin' = think about where we would get the most value out of directing the next indifferent talented person, or indifferent funder.
2. ^ 'our' and 'we' = earth-originating intelligent life (i.e. we aren't just talking about humans, because most of the value in expected futures is probably in worlds where digital minds matter morally and are flourishing).
3. ^ Through means other than extinction risk reduction.
43% ➔ 7% disagree

Intuitively, I don't see the point of perpetuating humanity if those lives would be full of suffering.
After reading arguments on the other side, I feel much more uncertain.
Indeed, it would be hard to fix value issues without any humans, given that we are the only species that thinks about moral questions.
My bad, I wasn't very clear when I used the term "counterargument"; "nuance" or something similar would have fit better. It doesn't dispute the fact that without humans there would be no species concerned with moral issues. It only makes the case that humans are potentially so immoral that their presence might make the future worse than one with no humans at all. That isn't really a "counterargument" to the idea that we'd need humans to fix moral issues; rather, it pushes back on the claim that this makes a positive future more likely than not (since he argues that humans may have very bad moral values, and would thus ensure a bad future).