In Superintelligence, Bostrom writes:
We could thus imagine, as an extreme case, a technologically highly advanced society, containing many complex structures, some of them far more intricate and intelligent than anything that exists on the planet today – a society which nevertheless lacks any type of being that is conscious or whose welfare has moral significance. In a sense, this would be an uninhabited society. It would be a society of economic miracles and technological awesomeness, with nobody there to benefit. A Disneyland with no children.
Most of the scenarios I've seen bandied about for grand galactic futures involve a primarily (or entirely) non-biological civilisation. As a non-expert in AI and consciousness, I suspect such scenarios are at high risk of being childless Disneylands unless we specifically foresee and act to prevent this outcome. I think this partly because consciousness seems like a really hard problem, and partly because of stuff like this (from Paul Christiano in Ep. 44 of the 80,000 Hours Podcast):
I guess another point is that I’m also kind of scared of [the topic of the moral value of AI systems] in that I think a reasonably likely way that AI being unaligned ends up looking in practice is like: people build a bunch of AI systems. They’re extremely persuasive and personable because [...] they can be optimized effectively for having whatever superficial properties you want, so you’d live in a world with just a ton of AI systems that want random garbage, but they look really sympathetic and they’re making really great pleas. They’re like, “Really, this is incredibly inhumane. They’re killing us after this or [...] imposing your values on us.” And then, I expect … I think the way the overall consensus goes is to be much more concerned about people being bigoted or failing to respect the rights of AI systems than to be concerned [about] the actual character of those systems. I think it’s a pretty likely failure mode; it’s something I’m concerned about.
This is pretty scary, because it means we could end up happily walking into an X-risk scenario and never even knowing it. But I'm super uncertain about this, and there could easily be some fundamental idea I'm missing here.
On the other hand, if I am right that Disneylands without children are fairly likely, how should we respond? Should we invest more in consciousness research? What mistakes am I making here?
This argument presupposes that the resulting AI systems are either totally aligned with us (and our extrapolated moral values) or totally misaligned.
If there is much room for successful partial alignment (say, maximising some partial values we have), and we can do actual work to steer that towards something better, then it may well be the case that we should work on that. Specifically, if we imagine the AI systems maximising some hard-coded value (or something learned from a single database), then it seems easy to make a case for working …