We are discussing the debate statement: "On the margin[1], it is better to work on reducing the chance of our[2] extinction than increasing the value of futures where we survive[3]". You can find more information in this post.
When you vote and comment on the debate week banner, your comment will also appear here, along with a note indicating your initial vote, and your most recent vote (if your opinion has changed).
However, you can also comment here any time throughout the week. Use this thread to respond to other people's arguments, and develop your own.
If there are a lot of comments, consider sorting by "New" and interacting with comments that haven't been voted or commented on yet.
Also, perhaps don't vote karma below zero on low-effort submissions; we don't want to discourage low-effort takes on the banner.
[1] 'On the margin' = think about where we would get the most value out of directing the next indifferent talented person, or indifferent funder.
[2] 'Our' and 'we' = earth-originating intelligent life (i.e. we aren't just talking about humans, because most of the value in expected futures is probably in worlds where digital minds matter morally and are flourishing).
[3] Through means other than extinction risk reduction.
71% ➔ 50% disagree

AI NotKillEveryoneism is the first-order approximation of x-risk work.
I think we will probably manage to make enough AI alignment progress to avoid extinction. AI capabilities advancement seems to be on a relatively good path (less foomy), and AI Safety work is starting to make real progress on avoiding the worst outcomes (although a new RL paradigm or illegible/unfaithful CoT could make this scarier).
Yet gradual disempowerment risks seem extremely hard to mitigate, very important, and pretty neglected. The AI Alignment/Safety bar for good outcomes could be significantly higher than the bar for merely avoiding extinction.
Most fundamentally, human welfare currently seems highly contingent on our productivity, and decoupling the two could be very hard.
I think the bonus/extra-credit questions are part of the main test: if you don't get them right, everyone still dies, just maybe a bit more slowly.
All the doom flows through the cracks of imperfect alignment…