We are discussing the debate statement: "On the margin[1], it is better to work on reducing the chance of our[2] extinction than increasing the value of futures where we survive[3]". You can find more information in this post.
When you vote and comment on the debate week banner, your comment will also appear here, along with a note indicating your initial vote, and your most recent vote (if your opinion has changed).
However, you can also comment here any time throughout the week. Use this thread to respond to other people's arguments, and develop your own.
If there are a lot of comments, consider sorting by “New” and interacting with posts that haven’t been voted or commented on yet.
Also, perhaps don’t vote karma below zero on low-effort submissions; we don’t want to discourage quick, low-effort takes on the banner.
1. ^ ‘on the margin’ = think about where we would get the most value out of directing the next indifferent talented person, or indifferent funder.
2. ^ ‘our’ and ‘we’ = earth-originating intelligent life (i.e. we aren’t just talking about humans, because most of the value in expected futures is probably in worlds where digital minds matter morally and are flourishing).
3. ^ Through means other than extinction risk reduction.
You’re basically saying that happier machines will be more productive, and so we are likely to make them happy?
Firstly, we don’t necessarily understand consciousness well enough to know whether we are making them happy, or even whether they are conscious at all.
Also, I’m not sure that happier means more productive. More computing power, better algorithms, and more data will mean more productive; I’m open to hearing arguments for why these would also make the machine more likely to be happy.
Maybe the causality goes the other way: more productive means happier. If machines achieve their goals, they get more satisfaction. Then maybe happiness just depends on how easy the goals we give them are. If we set an AI on an intractable problem that it never solves, maybe it will suffer; but if AIs are constantly achieving things, they will be happy.
I’m not saying you’re wrong, just that there’s a lot we still don’t know, and the link between optimization and happiness isn’t straightforward to me.