We are discussing the debate statement: "On the margin[1], it is better to work on reducing the chance of our[2] extinction than increasing the value of futures where we survive[3]". You can find more information in this post.
When you vote and comment on the debate week banner, your comment will also appear here, along with a note indicating your initial vote, and your most recent vote (if your opinion has changed).
However, you can also comment here any time throughout the week. Use this thread to respond to other people's arguments, and develop your own.
If there are a lot of comments, consider sorting by "New" and interacting with posts that haven't been voted or commented on yet.
Also, consider not voting low-effort submissions below zero karma; we don't want to discourage people from sharing quick takes on the banner.
[1] 'On the margin' = think about where we would get the most value out of directing the next indifferent talented person, or indifferent funder.
[2] 'Our' and 'we' = earth-originating intelligent life (i.e. we aren't just talking about humans, because most of the value in expected futures is probably in worlds where digital minds matter morally and are flourishing).
[3] Through means other than extinction risk reduction.
Thanks for the considered response. You're right that the Total View is not the only view on which future good lives have moral value (though it does seem to be the main one bandied about). Perhaps I should have written "I don't subscribe to the idea that adding happy people is intrinsically good in itself", as I think that better reflects my position: I subscribe to the Person-Affecting View (PAV).
The reason I prefer the PAV is not the repugnant conclusion (which I don't actually find "repugnant") but rather the problem of existence comparativism: I don't think that, for a given person, existing can be better or worse than not existing.
Given my PAV, I agree with your last point that there is some moral value to ensuring happy people in the future, if that would satisfy the preferences of current people. But in my experience, most people seem to have very weak preferences for the continued existence of "humanity" as a whole. Most people seem very concerned about the immediate impacts on those within their moral circle (i.e. themselves and their children, maybe grandchildren), but not that much beyond that. So on that basis, I don't think reducing extinction risk will beat out increasing the value of futures where we survive.
To be clear, I don't have an objection to the extinction risk work EA endorses that is robustly good on a variety of worldviews (e.g. preventing all-out nuclear war is great on the PAV, too). But I don't have a problem with humans or digital minds going extinct per se. For example, if humans went extinct because of declining fertility rates (which I don't think is likely), I wouldn't see that as a big moral catastrophe that requires intervention.