We are discussing the debate statement: "On the margin[1], it is better to work on reducing the chance of our[2] extinction than increasing the value of futures where we survive[3]". You can find more information in this post.
When you vote and comment on the debate week banner, your comment will also appear here, along with a note indicating your initial vote and, if your opinion has changed, your most recent vote. However, you can also comment here directly at any time throughout the week. Use this thread to respond to other people's arguments and to develop your own.
If there are a lot of comments, consider sorting by “New” and interacting with comments that haven’t been voted or commented on yet.
Also, perhaps don’t vote karma below zero on low-effort submissions; we don’t want to discourage low-effort takes on the banner.
[1] ‘on the margin’ = think about where we would get the most value out of directing the next indifferent talented person, or indifferent funder.
[2] ‘our’ and ‘we’ = earth-originating intelligent life (i.e. we aren’t just talking about humans because most of the value in expected futures is probably in worlds where digital minds matter morally and are flourishing)
[3] Through means other than extinction risk reduction.
The total view is not the only view on which future good lives starting has moral value. You can also get that conclusion if you believe in (amongst other things):
- Maximizing average utility across all people who ever live, in which case future people coming into existence is good if their level of well-being is above the mean well-being of the people who came before them (see the sketch after this list).
- A view on which adding happy lives gets less and less valuable the more happy people have lived, but never reaches zero. (Possibly helpful for avoiding the repugnant conclusion.)
- A view like the previous one, on which both the total amount of utility and how fairly it is distributed matter. On such a view, more utility is always in itself better, so adding happy people is always intrinsically good, but a population with less total utility and a fairer distribution can sometimes be better than a population with more utility distributed less fairly.
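To make the first two views concrete, here is a minimal sketch. The notation is mine, and the geometric discount factor δ in the second part is an illustrative assumption, not something a diminishing-value view is committed to.

```latex
% Average view: n existing people with total welfare W (mean W/n).
% Adding one person at welfare level w changes the average by an
% amount that is positive exactly when w > W/n, i.e. a new life is
% good iff its welfare exceeds the prior mean:
\[
  \frac{W + w}{n + 1} - \frac{W}{n}
  \;=\; \frac{n w - W}{n(n+1)} \;>\; 0
  \quad\Longleftrightarrow\quad
  w > \frac{W}{n}.
\]

% Diminishing-but-never-zero view (illustrative geometric form):
% the k-th happy life at welfare u > 0 contributes delta^k * u,
% with 0 < delta < 1. Every extra life adds positive value, yet the
% total stays bounded, so vast numbers of barely-good lives cannot
% swamp a smaller, very happy population (the move behind the
% repugnant conclusion):
\[
  V(n) \;=\; \sum_{k=1}^{n} \delta^{k} u
  \;=\; u\,\delta\,\frac{1-\delta^{n}}{1-\delta}
  \;<\; \frac{u\,\delta}{1-\delta}.
\]
```

Note that what blocks the repugnant conclusion in the second sketch is not merely that the increments diminish but that they are summable, so total value is bounded; a view with diminishing but non-summable increments would not get this result.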
This isn't just nitpicking: the total view is extreme in various ways, while the mere claim that it is good for happy people to come into existence is not.
Also, even if you reject the view that creating happy people is intrinsically valuable, you might want to ensure there are happy people in the future just to satisfy the preferences of current people, most of whom probably have at least some desire for happy descendants of at least one of their family, their culture, or humanity as a whole. It is true, though, that this won't get you the view that preventing extinction is astronomically valuable.