If past generations of animals had stopped humans from evolving to become as intelligent as they are now, or had ensured that humans valued the welfare and rights of animals, they could have prevented the widespread factory farming, deforestation, and climate change that cause severe animal suffering and violate the rights of animals today.

Animals today can’t stop humans because humans are far too powerful now.

Similarly, if humans prevent the emergence of superintelligence, or ensure that it values our welfare and rights, we can prevent scenarios in which a superintelligence that does not value our welfare and rights holds power over us.

If an unaligned superintelligence emerges, it will be too powerful for us to stop.

Comments

I think there is a false equivalency being advanced here.

Animals did not create human beings. Human beings are, by every measure, animals; like every other species, they have a few unique caveats.

The potential dystopian future you reference, although possible, is unlikely. I believe it is more likely that an individual, an institution, or a group of people and institutions will use such tech as an extension of their power and coercive force.

In the relevant sense of "create," humans will not create the AIs that disempower humanity. Let me explain.

If we built AI using ordinary software ("Good Old Fashioned AI"), then it would be a big file of code, every line of which would have been put there on purpose by someone & probably annotated with an explanation of why it was there and how it works. And at higher levels of abstraction, the system would also be interpretable/transparent, because the higher-level structure of the system would also have been deliberately chosen by some human or group of humans.

But instead we're going to use deep learning / artificial neural networks. These are basically meta-programs that perform a search over possible object-level programs (circuits) until they find one that performs well in the training environment. So the trained neural net -- the circuit that pops out at the end of the training process -- is a tangled spaghetti mess of complex, uber-efficient structure that is very good at scoring highly in the training environment. But that's all we know about it; we don't know how or why it works.
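
To make the contrast concrete, here is a minimal illustrative sketch (not anyone's actual training code; the layer sizes, learning rate, seed, and step count are all invented for the example): a hand-written XOR function whose every line can be audited, next to a tiny neural network whose weights are found by gradient-descent search until the same behavior emerges.

```python
import numpy as np

# "Good Old Fashioned AI" style: every line is deliberate and auditable.
def xor_by_hand(a: int, b: int) -> int:
    return a ^ b  # exclusive-or; we know exactly why this works

# Deep-learning style: search over weights until XOR-like behavior emerges.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))  # found by search, not design
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for _ in range(10_000):               # gradient descent is the "search process"
    h = sigmoid(X @ W1 + b1)          # hidden layer
    out = sigmoid(h @ W2 + b2)        # output layer
    d2 = (out - y) * out * (1 - out)  # backpropagated error signals
    d1 = (d2 @ W2.T) * h * (1 - h)
    W2 -= h.T @ d2
    b2 -= d2.sum(axis=0, keepdims=True)
    W1 -= X.T @ d1
    b1 -= d1.sum(axis=0, keepdims=True)

print(out.round(2).ravel())  # approximately [0, 1, 1, 0]: it behaves like XOR...
print(W1)                    # ...but the learned "program" is a blob of numbers
```

The hand-written version explains itself; the trained version merely works, and everything we know about it comes from observing its behavior in the training environment.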

If this situation is still approximately true when we get to AGI -- if we create AGI by searching for it rather than by building it -- then we really won't have much control over how it behaves once it gets powerful enough to disempower us, because we won't know what it's thinking or why it does what it does.

 
