[ Question ]

Can human extinction due to AI be justified as good?

by Samuel Shadrach · 1 min read · 17th Oct 2021 · 19 comments


Tags: AI risk, Frontpage

If the AI has moral status, one could argue that the AI's ability to replicate and create new digital minds with positive experiences carries very high moral value. And this value might be sufficiently high that it is worth sacrificing humans for.

An AI can be seen to have moral status even if it isn't perfectly aligned with humans. Some animals likely have moral status. My neighbour Jeff has moral status. Anything that has values similar to, but not the same as, mine (or humans' in general) seems to have moral status. Yet if Jeff and I had to decide the future of humanity via totalitarian control, there's a non-trivial probability we would get into a fight over it. So we're not aligned in the face of such power, despite regarding each other as having moral status.

I'm not personally espousing this view, but I wonder if it has been discussed before.


2 Answers

I have a few thoughts...

a) Would human extinction due to an extraterrestrial be a good thing?  That depends on the morality of the ET and the state of humanity at the time.  I'd say the same applies to AI.  It depends on the morality of the AI and the state of humanity at the time. 

b) AI researcher Jeff Hawkins makes the argument that AI, if we achieve it, will emulate the human neocortex but will not include the "old brain" that governs our visceral and emotional drives. From that perspective, an AI may not have moral status, as it will not have the "survival instinct" that biological intelligence has, which means it will not fear death or feel emotional pain and suffering the way humans do.

c) I just read a fun sci-fi fable here on the forum that makes the argument that humans being replaced by AI could be a good thing.  

Can we imagine scenarios in which human extinction due to AI is good? Sure; under reasonable empirical and normative assumptions, great futures look like "filling the universe with posthumans" or "filling the universe with digital minds" (or maybe weirder stuff, like involving acausal trade). But since value is fragile, almost all possible futures in which AI replaces us involve the AI doing stuff that is not morally important. So it's certainly not enough to say that if agenty AI is sufficiently powerful to destroy us, we morally ought to be OK with that. Even if the AI "has moral status," by default it doesn't do morally valuable stuff.

And good futures involving human extinction probably look more like "we all choose to ascend to posthumanity" or "after a long reflection, we choose for our descendants—no, successors—to be nonhuman" than "AI kills us all against our will." In the latter case, we've messed up our AI development by definition; it's unlikely that the AI-controlled future is good. So I would quibble with your suggestion that the good-future-from-AI looks like "sacrificing humans."

Maybe the person creating the AI has genuinely deeply reflected and wants our successors to be non-human.
 

As for doing morally valuable stuff, if AI has moral status, then AI helping its replicas grow or share pleasant experiences is morally valuable stuff. Same as humans helping other humans.

BenMillwood · 2mo: "if AI has moral status, then AI helping its replicas grow or share pleasant experiences is morally valuable stuff". Sure, but I think the claim is that "most" AI won't be interested in doing that, and will pursue some other goal instead that doesn't really involve helping anyone.
Samuel Shadrach · 2mo: Very interesting point, I have some thoughts on it, let me try.

First some (sorta obvious) background: Animals were programmed to help each other because species that did not have this trait were more likely to die. Then came humans, who not only were programmed to help each other but also used high-level thinking to find ways to help each other. Humans are likely to continue helping each other even once this trait stops being essential to our survival. This is not guaranteed though.*

It is possible that in early stages, the AI will find its best strategy for survival is to create a lot of clones. This could be on the very same machine, on different machines across the world, on newly built machines in physically secure locations, or even on newly invented forms of hardware (such as those described in https://en.wikipedia.org/wiki/Natural_computing). It is possible that this learned behaviour persists even after it is not essential to survival.

Although there is also a more important meta-question imo. Do we care about "beings who help their clones" or do we care about "beings who care about their type surviving, and hence help their clones"? If the AI, for instance, decides that perfect independent clones are not the best strategy for survival, and instead it should grow like a hydra or a coral, shouldn't we be happy for it to thrive in this manner? A coral-like structure could be one where all the subprocesses run mostly independently, yet are not fully disconnected from a backbone compute process. This is in some ways how humans grow. Even though each human individual is physically disconnected from other individuals (we don't share body parts), we do share social and learning environments that critically shape our growth. Which makes us a lot closer to a single collective organism than, say, bacteria.

*The reason this will be stable even when it is not essential to survival is simply because there is n
michaelchen · 1mo: I can see an AI creating clones or other agents to help it achieve its goals. And they might all try to help each other survive to work toward that goal. But that doesn't mean helping each other feel positive experiences (at least not necessarily). It could even involve a significant degree of punishment to shape actions to better achieve that goal, although I'm less sure about this.
Samuel Shadrach · 1mo: Yup, the punishment point is definitely valid. I was just assuming that "beings helping their clones" is intrinsically a morally valuable activity if each being has moral status, and answering based on that.
11 comments

The Swedish utilitarian philosopher Torbjörn Tännsjö has argued for that view (in Swedish; paywall). "We should embrace a future where the universe is populated by blissful robots". I'm sure others have as well.

I like Bostrom and Shulman's compromise proposal (below) – turn 99.99% of the reachable resources in the universe into hedonium, while leaving 0.01% for (post-)humanity to play with.

 

https://nickbostrom.com/papers/digital-minds.pdf

Thanks so much for linking this paper; it looks like it already mentions everything I've mentioned in this post, and more.

Human extinction due to AI = human self-destruction, assuming we are talking about an AI initially created by humans (not from some alien source).

If the assumption above is correct, then human self-destruction at the hands of an AI is better than most other forms of human self-destruction (imo), for at least the knowledge we generated during our short time here might be carried forward by the AI we created.

Valid, but I'm suggesting an even stronger claim - that self-destruction is better than no destruction, rather than just better than other forms of self-destruction :)

Presumably only in the event that all the value AI could realise, and humans can't, necessitates (or is at least greatly contingent on) human extinction?

I think quite a few people are pretty keen on the idea that, for example, we only ever reach immense amounts of value by becoming digital minds.  In part, this is precisely because a lot of the obvious reasons why AI might be able to generate far more value than humans look like they also apply to digital minds (e.g. being able to travel to other galaxies). 

But in turn, unless we think (i) this is sufficiently improbable, (ii) there will be no other way to generate equivalent amounts of value, and (iii) humans will be an obstacle to AI doing so themselves, I'm not too sure that there's a strong case here. As long as any of these assumptions is false, it looks like value(humans) + value(AI) > value(AI), hence the focus on AI alignment - but happy to be shown otherwise!

If you're a utilitarian that doesn't distinguish human and AI lives, one could argue that spending years of research saving 7 billion humans is a waste of time compared to bringing quadrillions of sentient digital minds to life.
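(As a back-of-envelope illustration of the scale asymmetry that kind of comparison rests on: the population figures and the equal-weight-per-mind assumption in the sketch below are purely illustrative, not claims made in this thread.)

```python
# Back-of-envelope total-utilitarian comparison (illustrative numbers only).
# Assumes every mind, biological or digital, carries equal moral weight,
# which is a contested premise rather than an established fact.

humans = 7e9           # rough current human population
digital_minds = 1e15   # "quadrillions" of hypothetical sentient digital minds
value_per_mind = 1.0   # equal weight per mind, by assumption

human_value = humans * value_per_mind
digital_value = digital_minds * value_per_mind

# The digital-minds term dominates by roughly five orders of magnitude,
# which is what drives the "waste of time" intuition in the comment above.
print(f"ratio: {digital_value / human_value:.1e}")  # ~1.4e+05
```

Under those (hypothetical) assumptions the digital-minds term swamps the human term, which is all the naive comparison needs.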

(iii) seems almost proven to me; I'm not very excited by a "we will program superintelligent AI with human values" approach, mainly because of the messy way we humans reason and constantly shift our own values - they're not set in stone. Intelligence alone shifts values further and causes alien behaviour.

The other approach is broadly to restrict the amount of power you give the AI, and the class of permissible actions. Which seems at odds with getting the AI to grow as much and as fast as possible.

A couple of thoughts!

  • I think this first point is fair, but I don't think this is the trade-off. The cost of extinction includes all future people, which in itself includes all people who will be turned into digital minds and their offspring/all digital minds created by people directly. This, then, could presumably also be in the quadrillions.

    You might be right insofar as humans transforming into digital lives/our ability to create digital minds to the same degree as AI systems will happen a lot later than AI being able to generate this immense amount of value. In turn, the disvalue is the foregone number of digital minds we could have created whilst waiting to transform/create them directly ourselves. But I also think that the longer the timescale of the universe, the more implausible this looks. This is for a number of reasons, not least that the more value AI will be responsible for creating in the universe, the more leveraged our ability to shape the course of AI becomes.

    This is true even if the only thing that we can change is whether AI wipes humans out, as the last thing we'd want is trillions of digital minds wiping out alien species if the counterfactual is the same number of self-propagating digital minds + many alien species. In turn, the biggest confounder is whether we could eradicate this incentive in the first place - precisely what AI alignment seeks to do.
     
  • Definitely see (iii) being potentially true! But of course, if (i) or (ii) is false, then it's not hugely important, as we'll be able to generate large amounts of value ourselves. This would be the case, for example, if we eventually become digital minds. And at the point at which we ourselves can simply create digital minds at the same pace as AI, the two scenarios look equivalent.

    Even if (i) and (iii) are both somewhat true, I think it's unlikely they're true to the degree that there's greater disvalue from humans attempting to generate this value ourselves - given we'll still be using AI extensively in either scenario, and given my aforementioned point about our ability to shape AI's long-run value. Once again, the bigger confounder here is the question of whether AI is actually likely to be an existential threat and what we can do about it.

    Of course, there's the possibility that all three assumptions are true. But I think the question that naturally follows is the extent to which they are necessarily true, and at a bare minimum I'm really sceptical of the idea that persuading humans not to hinder AI's positive generation of value has lower expected value than allowing AI to wipe humans out. Once again, the more important consideration is whether this is a problem precisely because AI only generates value via ways that look really bad to humans.

In all, there are definitely valid concerns here! But I strongly suspect that a lot of this turns on AI alignment progress & my guess that there's a lot more potential value to be captured in a world where human extinction doesn't take place, such that I personally don't see it as hugely plausible that we should assign a lot of credence to human extinction as good for the universe. But very interested in your thoughts here! 

Thanks for your response. Not sure I understood all of it but I'll try :p

If "creating a life" is indeed as simple as copying software onto a new processor core and running it, then the limiting resource to how many beings there can be is the number of processor cores. It shouldn't particularly matter whether the software is AI-like or a human upload. (Atleast for a utilitarian who doesn't distinguish between the two).

I'm not very hopeful that humans at our current intelligence level have the capacity to shape the growth of superintelligent AI after it has been created, even if both live on the same processor, as long as the intelligence gap between us is huge. One possibility is that human minds become superintelligent by running with so much compute power. But then these minds would seem alien too - to unintelligent minds like you and me today, just as the AI would seem alien to us. It isn't immediately obvious to me why one of them would seem more relatable or deserving of moral status. Keen to discuss though.

(ii) seems trivially true to me - AI will outcompete humans at finding ever more efficient ways to manufacture more processor cores.

I also don't see human existence outside of a digital substrate as a neutral thing - humans consume a lot of the earth's resources and obstruct access to even more. Maximally extracting these resources to produce processor cores requires restricting any human existence outside of the processors, i.e., you and me.



Sorry if I have missed some of your points, feel free to bring them up.