We are discussing the debate statement: "On the margin[1], it is better to work on reducing the chance of our[2] extinction than increasing the value of futures where we survive[3]". You can find more information in this post.
When you vote and comment on the debate week banner, your comment will also appear here, along with a note indicating your initial vote, and your most recent vote (if your opinion has changed).
However, you can also comment here any time throughout the week. Use this thread to respond to other people's arguments, and develop your own.
If there are a lot of comments - consider sorting by “New” and interacting with posts that haven’t been voted or commented on yet.
Also - perhaps don’t downvote low-effort submissions below zero karma; we don’t want to discourage low-effort takes on the banner.
[1] ‘on the margin’ = think about where we would get the most value out of directing the next indifferent talented person, or indifferent funder.
[2] ‘our’ and ‘we’ = earth-originating intelligent life (i.e. we aren’t just talking about humans, because most of the value in expected futures is probably in worlds where digital minds matter morally and are flourishing).
[3] Through means other than extinction risk reduction.
Our lightcone is an enormous endowment: a vast amount of computation in a universe with simple physics. What these resources are spent on matters a lot.
If we get AI right (create a CEV-aligned ASI), we get most of the utility out of these resources automatically (almost tautologically, see CEV: to the extent that, after considering all the arguments and reflecting, we think we ought to value something, that is what CEV points to as an optimization target). If it takes us a long time to get AI right, we lose a literal galaxy of resources every year, but in relative terms that is approximately nothing.
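To see why delay is approximately nothing in relative terms, here is a rough back-of-envelope sketch; the specific numbers are illustrative assumptions on my part, not established figures:

```python
# Back-of-envelope: how much of the cosmic endowment does delay cost?
# All numbers below are illustrative assumptions.

reachable_galaxies = 4e9      # assumed order-of-magnitude count of galaxies we could ever reach
galaxies_lost_per_year = 1    # assumed rate at which delay forfeits resources
delay_years = 50              # an assumed long delay in getting AI right

fraction_lost = galaxies_lost_per_year * delay_years / reachable_galaxies
print(f"Fraction of the endowment lost to a {delay_years}-year delay: {fraction_lost:.1e}")
# -> on the order of 1e-8, i.e. approximately nothing in relative terms
```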
If we die because of AI, we get ~0% of the possible value (of the CEV maximum).
Increasing the chance that AI goes well is what's important. Work that marginally shifts the percentage of value we capture near that maximum seems relatively unimportant compared with work that shifts whether AI goes well at all; whether we die because of AI is the largest input. A minimal sketch of that comparison follows.
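The sketch treats expected value as P(AI goes well) times the fraction of max CEV captured conditional on success, with ~0 value if we die because of AI; the numbers are illustrative assumptions, not estimates I'm defending:

```python
# Minimal sketch: compare the headroom left for work that shifts P(AI goes well)
# against the headroom left for work that shifts value captured conditional on success.
# All numbers are illustrative assumptions.

p_goes_well = 0.5     # assumed probability AI goes well
v_conditional = 0.95  # assumed fraction of max CEV a CEV-aligned ASI captures "automatically"

ev = p_goes_well * v_conditional

headroom_from_p = (1.0 - p_goes_well) * v_conditional   # push P(goes well) toward 1
headroom_from_v = p_goes_well * (1.0 - v_conditional)   # push conditional value toward 1

print(f"Current EV:                 {ev:.3f}")               # 0.475
print(f"Headroom via P(goes well):  {headroom_from_p:.3f}")  # 0.475
print(f"Headroom via conditional v: {headroom_from_v:.3f}")  # 0.025
# On these assumptions, the survival-probability lever has ~20x the headroom.
```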
(I find a negative percentage of CEV very implausible, because it almost never makes sense to spend resources penalizing another agent's utility if that agent is smart enough to make doing so not worth it, and for other, more speculative reasons.)