
Note: This is "Draft version 1.1" of this paper. If I learn that a later draft has been released, I'll edit this post to include it. If you learn of a later draft, please let me know!


Written by Carl Shulman and Nick Bostrom.

Abstract

The minds of biological creatures occupy a small corner of a much larger space of possible minds that could be created once we master the technology of artificial intelligence. Yet many of our moral intuitions and practices are based on assumptions about human nature that need not hold for digital minds. This points to the need for moral reflection as we approach the era of advanced machine intelligence. 

Here we focus on one set of issues, which arise from the prospect of digital “utility monsters”. These may be mass-produced minds with moral statuses and interests similar to those of human beings or other morally considerable animals, such that collectively their moral claims outweigh those of the incumbent populations. Alternatively, it may become easy to create individual digital minds with much stronger individual interests and claims to resources than humans.

Disrespecting these claims could produce a moral catastrophe of immense proportions, while a naïve way of respecting them could be disastrous for humanity. A sensible approach requires reforms of our moral norms and institutions, along with advance planning regarding what kinds of digital minds we bring into existence.

Read the rest of the paper

Comments

The latest version is 1.8: digital-minds.pdf
