We are discussing the debate statement: "On the margin[1], it is better to work on reducing the chance of our[2] extinction than increasing the value of futures where we survive[3]". You can find more information in this post.
When you vote and comment on the debate week banner, your comment will also appear here, along with a note indicating your initial vote, and your most recent vote (if your opinion has changed).
However, you can also comment here any time throughout the week. Use this thread to respond to other people's arguments, and develop your own.
If there are a lot of comments, consider sorting by “New” and interacting with posts that haven’t been voted or commented on yet.
Also, perhaps don’t vote low-effort submissions below zero karma; we don’t want to discourage low-effort takes on the banner.
1. ‘On the margin’ = think about where we would get the most value out of directing the next indifferent talented person, or indifferent funder.
2. ‘Our’ and ‘we’ = earth-originating intelligent life (i.e. we aren’t just talking about humans, because most of the value in expected futures is probably in worlds where digital minds matter morally and are flourishing).
3. Through means other than extinction risk reduction.
This is a question I could easily change my mind on.
The experience of digital minds seems to dominate far-future calculations: we could get a lot of value from it, a lot of disvalue, or anything in between.
If we go extinct, we get 0 value from digital minds. This seems bad, but we also avoid the futures where we create them and they suffer. It’s hard to say whether we are on track to create them to flourish or to suffer; I think there are arguments on both sides. The futures where we create digital minds may be the ones where we wanted to “use” them, which could mean they suffer. Alternatively, our moral circle has expanded over time and may continue to do so, so there is a real possibility we create them to flourish. I don’t have a clear view of which side wins here, so overall going extinct doesn’t seem obviously terrible to me.
We could instead focus on raising the moral status of digital minds, improving our ability to understand sentience and consciousness, improving societal values, and making sure AI goes well and helps us with these things. These would robustly increase the expected value of digital sentience in futures where we survive.
So because reducing extinction risk has close to zero expected value on my view, while increasing the value of futures where we survive is robustly positive in expectation, I lean towards working on the latter.
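As a toy sketch of that expected-value comparison (purely illustrative symbols, not a real estimate): suppose a surviving future is worth $+V$ with probability $p$ and $-V$ with probability $1-p$, while extinction is worth $0$. Then

$$\mathbb{E}[\text{value} \mid \text{survival}] = pV - (1-p)V = (2p-1)V.$$

If $p \approx 0.5$, this is roughly $0$, the same as extinction, so shifting probability mass from extinction to survival adds almost nothing in expectation. By contrast, an intervention that raises $p$ by $\Delta p$ adds about $2\,\Delta p\,V \cdot \Pr(\text{survival})$, which is positive whenever $\Delta p > 0$.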