
AI Wellbeing

This post is an executive summary of a research paper by Simon Goldstein, associate professor at the Dianoia Institute of Philosophy at ACU, and Cameron Domenico Kirk-Giannini, assistant professor at Rutgers University. This research was supported by the Center for AI Safety Philosophy Fellowship.

 

We recognize one another as beings for whom things can go well or badly, beings whose lives may be better or worse according to the balance they strike between goods and ills, pleasures and pains, desires satisfied and frustrated. In our more broad-minded moments, we are willing to extend the concept of wellbeing to nonhuman animals as well, treating them as independent bearers of value whose interests we must consider in moral deliberation. But most people, and perhaps even most philosophers, would reject the idea that fully artificial systems, designed by human engineers and realized on computer hardware, may similarly demand our moral consideration. Even many who accept the possibility that humanoid androids in the distant future will have wellbeing would resist the idea that the same could be true of today’s AI.

Perhaps because the creation of artificial systems with wellbeing is assumed to be so far off, little philosophical attention has been devoted to the question of what such systems would have to be like. In this post, we suggest a surprising answer to this question: when one integrates leading theories of mental states like belief, desire, and pleasure with leading theories of wellbeing, one is confronted with the possibility that the technology already exists to create AI systems with wellbeing. We argue that a new type of AI – the artificial language agent – has wellbeing. Artificial language agents augment large language models with the capacity to observe, remember, and form plans. We also argue that the possession of wellbeing by language agents does not depend on them being phenomenally conscious. Far from a topic for speculative fiction or future generations of philosophers, then, AI wellbeing is a pressing issue. This post is a condensed version of our argument. To read the full version, click here.

1. Artificial Language Agents

Artificial language agents (or simply language agents) are our focus because they support the strongest case for wellbeing among existing AIs. Language agents are built by wrapping a large language model (LLM) in an architecture that supports long-term planning. An LLM is an artificial neural network designed to generate coherent text responses to text inputs (ChatGPT is the most famous example). The LLM at the center of a language agent is its cerebral cortex: it performs most of the agent’s cognitive processing tasks. In addition to the LLM, however, a language agent has files that record its beliefs, desires, plans, and observations as sentences of natural language. The language agent uses the LLM to form a plan of action based on its beliefs and desires. In this way, the cognitive architecture of language agents is familiar from folk psychology. 

For concreteness, consider the language agents built in 2023 by a team of researchers at Stanford and Google. Like video game characters, these agents live in a simulated world called ‘Smallville’, which they can observe and interact with via natural-language descriptions of what they see and how they act. Each agent is given a text backstory that defines their occupation, relationships, and goals. As they navigate the world of Smallville, their experiences are added to a “memory stream” in the form of natural language statements. Because each agent’s memory stream is long, agents use their LLM to assign importance scores to their memories and to determine which memories are relevant to their situation. Then the agents reflect: they query the LLM to make important generalizations about their values, relationships, and other higher-level representations. Finally, they plan: they feed important memories from each day into the LLM, which generates a plan for the next day. Plans determine how an agent acts, but can be revised on the fly on the basis of events that occur during the day. In this way, language agents engage in practical reasoning, deciding how to promote their goals given their beliefs.
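To make the architecture concrete, here is a minimal sketch in Python of the observe-retrieve-reflect-plan loop described above. It is our own illustrative simplification, not the Stanford implementation: the `llm` function is a placeholder for whatever language model the agent is built around, and the importance scoring and retrieval heuristics are stand-ins for the real system's more sophisticated mechanisms.

```python
from dataclasses import dataclass, field
from typing import List


def llm(prompt: str) -> str:
    """Stand-in for a call to a large language model.

    A real agent would query an LLM here; this placeholder just keeps the
    sketch self-contained and runnable.
    """
    return "5"


@dataclass
class Memory:
    text: str          # natural-language record of an observation or reflection
    importance: float  # importance score assigned by the LLM


@dataclass
class LanguageAgent:
    backstory: str                      # occupation, relationships, goals
    memory_stream: List[Memory] = field(default_factory=list)
    plan: str = ""

    def observe(self, observation: str) -> None:
        # Record the observation and ask the LLM to rate its importance.
        raw_score = llm(f"Rate the importance of this memory from 1 to 10: {observation}")
        try:
            score = float(raw_score)
        except ValueError:
            score = 1.0
        self.memory_stream.append(Memory(observation, score))

    def retrieve(self, k: int = 5) -> List[str]:
        # Simplified retrieval: take the k most important memories.
        # (The Stanford agents also weight recency and relevance.)
        ranked = sorted(self.memory_stream, key=lambda m: m.importance, reverse=True)
        return [m.text for m in ranked[:k]]

    def reflect(self) -> None:
        # Ask the LLM for a higher-level generalization and store it as a new memory.
        relevant = "\n".join(self.retrieve())
        insight = llm(f"Given these memories, state an important generalization:\n{relevant}")
        self.memory_stream.append(Memory(insight, importance=8.0))

    def plan_day(self) -> None:
        # Feed the backstory and important memories into the LLM to produce tomorrow's plan.
        context = self.backstory + "\n" + "\n".join(self.retrieve())
        self.plan = llm(f"Given this context, write a plan for tomorrow:\n{context}")
```

On this picture, the memory stream plays the role of the agent’s stock of observations and reflections, and the plan records its intentions; this is the structure we appeal to in the arguments below.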

2. Belief and Desire

The conclusion that language agents have beliefs and desires follows from many of the most popular theories of belief and desire, including versions of dispositionalism, interpretationism, and representationalism.

According to the dispositionalist, to believe or desire that something is the case is to possess a suitable suite of dispositions. According to ‘narrow’ dispositionalism, the relevant dispositions are behavioral and cognitive; ‘wide’ dispositionalism also includes dispositions to have phenomenal experiences. While wide dispositionalism is coherent, we set it aside here because it has been defended less frequently than narrow dispositionalism. 

Consider belief. In the case of language agents, the best candidate for the state of believing a proposition is the state of having a sentence expressing that proposition written in the memory stream. This state is accompanied by the right kinds of verbal and nonverbal behavioral dispositions to count as a belief, and, given the functional architecture of the system, also the right kinds of cognitive dispositions. Similar remarks apply to desire.

According to the interpretationist, what it is to have beliefs and desires is for one’s behavior (verbal and nonverbal) to be interpretable as rational given those beliefs and desires. There is no in-principle problem with applying the methods of radical interpretation to the linguistic and nonlinguistic behavior of a language agent to determine what it believes and desires. 

According to the representationalist, to believe or desire something is to have a mental representation with the appropriate causal powers and content. Representationalism deserves special emphasis because “probably the majority of contemporary philosophers of mind adhere to some form of representationalism about belief” (Schwitzgebel). 

It is hard to resist the conclusion that language agents have beliefs and desires in the representationalist sense. The Stanford language agents, for example, have memories which consist of text files containing natural language sentences specifying what they have observed and what they want. Natural language sentences clearly have content, and the fact that a given sentence is in a given agent’s memory plays a direct causal role in shaping its behavior. 

Many representationalists have argued that human cognition should be explained by positing a “language of thought.” Language agents also have a language of thought: their language of thought is English!

An example may help to show the force of our arguments. One of Stanford’s language agents had an initial description that included the goal of planning a Valentine’s Day party. This goal was entered into the agent’s planning module. The result was a complex pattern of behavior. The agent met with every resident of Smallville, inviting them to the party and asking them what kinds of activities they would like to include. The feedback was incorporated into the party planning. 

To us, this kind of complex behavior clearly manifests a disposition to act in ways that would tend to bring about a successful Valentine’s Day party given the agent’s observations about the world around it. Moreover, the agent is ripe for interpretationist analysis: its behavior would be very difficult to explain without referencing the goal of organizing a Valentine’s Day party. And, of course, the agent’s initial description contained a sentence with the content that its goal was to plan a Valentine’s Day party. So, whether one is attracted to narrow dispositionalism, interpretationism, or representationalism, we believe the kind of complex behavior exhibited by language agents is best explained by crediting them with beliefs and desires.

3. Wellbeing

What makes someone’s life go better or worse for them? There are three main theories of wellbeing: hedonism, desire satisfactionism, and objective list theories. According to hedonism, an individual’s wellbeing is determined by the balance of pleasure and pain in their life. According to desire satisfactionism, an individual’s wellbeing is determined by the extent to which their desires are satisfied. According to objective list theories, an individual’s wellbeing is determined by their possession of objectively valuable things, including knowledge, reasoning, and achievements. 

On hedonism, to determine whether language agents have wellbeing, we must determine whether they feel pleasure and pain. This in turn depends on the nature of pleasure and pain.

There are two main theories of pleasure and pain. According to phenomenal theories, pleasures are phenomenal states. For example, one phenomenal theory of pleasure is the distinctive feeling theory. The distinctive feeling theory says that there is a particular phenomenal experience of pleasure that is common to all pleasant activities. We see little reason why language agents would have representations with this kind of structure. So if this theory of pleasure were correct, then hedonism would predict that language agents do not have wellbeing. 

The main alternatives to phenomenal theories of pleasure are attitudinal theories. In fact, most philosophers of wellbeing favor attitudinal over phenomenal theories of pleasure (Bramble). One attitudinal theory is the desire-based theory: experiences are pleasant when they are desired. This kind of theory is motivated by the heterogeneity of pleasure: a wide range of disparate experiences are pleasant, including the warm relaxation of soaking in a hot tub, the taste of chocolate cake, and the challenge of completing a crossword. While these experiences differ in intrinsic character, all of them are pleasant when desired.

If pleasures are desired experiences and AIs can have desires, it follows that AIs can have pleasure if they can have experiences. In this context, we are attracted to a proposal defended by Schroeder: an agent has a pleasurable experience when they perceive the world as being a certain way and desire the world to be that way. Even if language agents don’t presently have such representations, it would be possible to modify their architecture to incorporate them. So some versions of hedonism are compatible with the idea that language agents could have wellbeing.
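To illustrate the kind of architectural modification we have in mind, here is a toy sketch (our own illustrative gloss on a Schroeder-style attitudinal view, not his formal proposal) in which a pleasure-like signal is computed from the match between the states an agent perceives as obtaining and the states it desires to obtain:

```python
from typing import Set


def pleasure_signal(perceived: Set[str], desired: Set[str]) -> float:
    """Toy attitudinal-pleasure measure: the fraction of the agent's desires
    that it currently perceives as satisfied. Both inputs are sets of
    natural-language descriptions of states of affairs."""
    if not desired:
        return 0.0
    return len(perceived & desired) / len(desired)


# Example: the agent perceives the party as underway and desires both that it
# be underway and that everyone attend, so half its desires register as satisfied.
print(pleasure_signal(
    {"the Valentine's Day party is underway"},
    {"the Valentine's Day party is underway", "everyone attends"},
))  # prints 0.5
```

Whether a state like this would amount to pleasure in the sense relevant to hedonism is, of course, part of what is at issue; the sketch only shows that nothing in the architecture rules such representations out.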

We turn now from hedonism to desire satisfaction theories. According to desire satisfaction theories, your life goes well to the extent that your desires are satisfied. We’ve already argued that language agents have desires. If that argument is right, then desire satisfaction theories seem to imply that language agents can have wellbeing.

According to objective list theories of wellbeing, a person’s life is good for them to the extent that it instantiates objective goods. Common components of objective list theories include friendship, art, reasoning, knowledge, and achievements. For reasons of space, we won’t address these theories in detail here. But the general moral is that once you admit that language agents possess beliefs and desires, it is hard not to grant them access to a wide range of activities that make for an objectively good life. Achievements, knowledge, artistic practices, and friendship are all caught up in the process of making plans on the basis of beliefs and desires. 

Generalizing: if language agents have beliefs and desires, then on most leading theories of wellbeing, they can have wellbeing, and so how they fare matters morally.

4. Is Consciousness Necessary for Wellbeing?

We’ve argued that language agents have wellbeing. But there is a simple challenge to this proposal. First, language agents may not be phenomenally conscious — there may be nothing it feels like to be a language agent. Second, some philosophers accept:

The Consciousness Requirement. Phenomenal consciousness is necessary for having wellbeing.  

The Consciousness Requirement might be motivated in either of two ways. First, it might be held that every welfare good itself requires phenomenal consciousness (this view is known as experientialism). Second, it might be held that though some welfare goods can be possessed by beings that lack phenomenal consciousness, phenomenal consciousness is nevertheless a further precondition on having wellbeing at all, so that such beings are still precluded from having wellbeing.

We are not convinced. First, we consider it a live question whether language agents are or are not phenomenally conscious (see Chalmers for recent discussion). Much depends on what phenomenal consciousness is. Some theories of consciousness appeal to higher-order representations: you are conscious if you have appropriately structured mental states that represent other mental states. Sufficiently sophisticated language agents, and potentially many other artificial systems, will satisfy this condition. Other theories of consciousness appeal to a ‘global workspace’: an agent’s mental state is conscious when it is broadcast to a range of that agent’s cognitive systems. According to this theory, language agents will be conscious once their architecture includes representations that are broadcast widely. The memory stream of Stanford’s language agents may already satisfy this condition. If language agents are conscious, then the Consciousness Requirement does not pose a problem for our claim that they have wellbeing.

Second, we are not convinced of the Consciousness Requirement itself. We deny that consciousness is required for possessing every welfare good, and we deny that consciousness is required in order to have wellbeing.

With respect to the first issue, we build on a recent argument by Bradford, who notes that experientialism about welfare is rejected by the majority of philosophers of welfare. Cases of deception and hallucination suggest that your life can be very bad even when your experiences are very good. This has motivated desire satisfaction and objective list theories of wellbeing, which often allow that some welfare goods can be possessed independently of one’s experience. For example, desires can be satisfied, beliefs can amount to knowledge, and achievements can be attained, all independently of experience.

Rejecting experientialism puts pressure on the Consciousness Requirement. If wellbeing can increase or decrease without conscious experience, why would consciousness be required for having wellbeing? After all, it seems natural to hold that the theory of wellbeing and the theory of welfare goods should fit together in a straightforward way:

Simple Connection. An individual can have wellbeing just in case it is capable of possessing one or more welfare goods.

Rejecting experientialism but maintaining Simple Connection yields a view incompatible with the Consciousness Requirement: the falsity of experientialism entails that some welfare goods can be possessed by non-conscious beings, and Simple Connection then guarantees that such non-conscious beings can have wellbeing.

Advocates of the Consciousness Requirement who are not experientialists must reject Simple Connection and hold that consciousness is required to have wellbeing even if it is not required to possess particular welfare goods. We offer two arguments against this view.

First, leading theories of the nature of consciousness are implausible candidates for necessary conditions on wellbeing. For example, it is implausible that higher-order representations are required for wellbeing. Imagine an agent who has first-order beliefs and desires, but does not have higher-order representations. Why should this kind of agent not have wellbeing? Suppose that desire satisfaction contributes to wellbeing. Granted, since they don’t represent their beliefs and desires, such agents won’t themselves have opinions about whether their desires are satisfied. But the desires are still satisfied. Or consider global workspace theories of consciousness. Why should an agent’s degree of cognitive integration be relevant to whether their life can go better or worse?

Second, we think we can construct chains of cases where adding the relevant bit of consciousness would make no difference to wellbeing. Imagine an agent with the body and dispositional profile of an ordinary human being, but who is a ‘phenomenal zombie’ without any phenomenal experiences. Whether or not its desires are satisfied or its life instantiates various objective goods, defenders of the Consciousness Requirement must deny that this agent has wellbeing. But now imagine that this agent has a single persistent phenomenal experience of a homogeneous white visual field. Adding consciousness to the phenomenal zombie has no intuitive effect on wellbeing: if its satisfied desires, achievements, and so forth did not contribute to its wellbeing before, the homogeneous white field should make no difference. Nor is it enough for the consciousness itself to be something valuable: imagine that the phenomenal zombie always has a persistent phenomenal experience of mild pleasure. In our judgment, this should equally have no effect on whether the agent’s satisfied desires or possession of objective goods contribute to its wellbeing. Sprinkling pleasure on top of the functional profile of a human does not make the crucial difference. These observations suggest that whatever consciousness adds to wellbeing must be connected to individual welfare goods, rather than being some extra condition required for wellbeing: rejecting Simple Connection is not well motivated. Thus the friend of the Consciousness Requirement cannot easily avoid the problems with experientialism by falling back on the idea that consciousness is a necessary condition for having wellbeing.

We’ve argued that there are good reasons to think that some AIs today have wellbeing. But our arguments are not conclusive. Still, we think that in the face of these arguments, it is reasonable to assign significant probability to the thesis that some AIs have wellbeing.

In the face of this moral uncertainty, how should we act? We propose extreme caution. Wellbeing is one of the core concepts of ethical theory. If AIs can have wellbeing, then they can be harmed, and this harm matters morally. Even if the probability that AIs have wellbeing is relatively low, we must think carefully before lowering the wellbeing of an AI without producing an offsetting benefit. 
