In a previous post we explained our reasons for favoring the term “artificial sentience.” However, while this term captures what we truly care about — artificial entities with the capacity for positive and/or negative experiences — it may be too vague to support judgments about sentience in specific artificial entities.[1] This raises a potentially serious problem, since our judgments about which entities to grant moral consideration depend on whether, and to what extent, we consider them sentient.

One approach to this problem is to say that we do not currently need to know which artificial entities are sentient, only that some future artificial entities will be, and that they may be excluded from society’s moral circle. We should therefore encourage the expansion of the moral circle to include artificial entities even though we do not know exactly where the boundary should be drawn; working out which specific artificial entities are and are not sentient can be deferred. Still, further clarity on this boundary can be useful, given the growing complexity of artificial entities and the increasing attention to their treatment as a social and moral issue. In this post, we operationalize the term “artificial sentience” and outline an initial, tentative framework for assessing sentience in artificial entities.
