A lot of effective altruists are interested in consciousness, because it is an inherently interesting topic, and because it matters for cause prioritization and thinking about the long term future. But the curious reader is confronted with an enormous and intricate academic literature, both philosophical and scientific.
A lot of this intricacy is unavoidable and desirable, because consciousness is a genuinely perplexing and challenging topic. But it can be overwhelming. The purpose of this post is to give readers one conceptual tool for navigating this large literature and for thinking about consciousness - the distinction between the “hard problem” of consciousness, and the “pretty hard problem” of consciousness. This distinction has been helpful for me personally, as I do research for the Future of Humanity Institute on consciousness in artificial intelligence.
The hard problem: Why are physical states associated with conscious experience? For example, why are certain neural firings associated with (for example) the conscious experience of red, rather than with some other experience, or no experience at all?
The pretty hard problem: Which physical states are associated with conscious experience?
This post explains what is meant by “consciousness” in these contexts, and then explores how the two problems can (mostly) be separated, and a few ways they intersect. I also argue that effective altruists who are illusionists about consciousness - that is, who deny that consciousness exists in the first place - do avoid the hard problem but still face difficult questions that are closely related to the pretty hard problem.
Consciousness: what is at issue?
This section clarifies what is meant by “consciousness” or “conscious experience” in these questions. If you are already familiar with the term “phenomenal consciousness” and how it’s used, this section can be skipped.
It’s natural for people to wonder which complex systems have subjective experiences--like pain, or the experience of seeing red--and which do not. Consider the contrast between a human and a laptop: in both cases, there is complex information processing, but in only one case is this associated with consciousness. Michael Graziano (2017) describes this contrast:
You can connect a computer to a camera and program it to process visual information—color, shape, size, and so on. The human brain does the same, but in addition, we report a subjective experience of those visual properties. This subjective experience is not always present. A great deal of visual information enters the eyes, is processed by the brain and even influences our behavior through priming effects, without ever arriving in awareness. Flash something green in the corner of vision and ask people to name the first color that comes to mind, and they may be more likely to say “green” without even knowing why. But some proportion of the time we also claim, “I have a subjective visual experience. I see that thing with my conscious mind. Seeing feels like something.”
In both cases, there is complicated information processing that brings about some behavior or implements some function. But only in the human case is the information processing sometimes associated with subjective experience. In one popular locution, there is something that it is like to be a human seeing green. Here is David Chalmers (1995) on other states that there is “something it is like” to be in:
When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect. As Nagel (1974) has put it, there is something it is like to be a conscious organism. This subjective aspect is [conscious] experience. When we see, for example, we experience visual sensations: the felt quality of redness, the experience of dark and light, the quality of depth in a visual field. Other experiences go along with perception in different modalities: the sound of a clarinet, the smell of mothballs. Then there are bodily sensations, from pains to orgasms; mental images that are conjured up internally; the felt quality of emotion, and the experience of a stream of conscious thought. What unites all of these states is that there is something it is like to be in them.
Because “consciousness” can refer to many things in different contexts--self-awareness, free will, higher cognition--philosophers often use the term phenomenal consciousness to refer to this “what it is like”, subjective-experience phenomenon. And “the phenomenal” is a locution used to refer to phenomenal consciousness-involving states and properties more generally. In what follows, by “consciousness” I will mean “phenomenal consciousness” and by “experience” I will mean “phenomenally conscious experience”.
In this terminology, some of your brain states are conscious, like the ones described above. But not all of them: there’s not something it’s like for you when your brain controls your organs or regulates hormones.
So states can be referred to as conscious or not conscious; so can creatures or systems as a whole. Systems that are conscious include humans and (almost certainly) pigs; systems that are not include motor engines and clocks; and for many systems, like fish and bees and future advanced AI systems, we are not entirely sure.
So a natural way of thinking of the problem of AI consciousness, or animal consciousness, or digital upload consciousness, is that we are unsure which of these systems have consciousness, and what their experiences (if any) are like.
The hard problem and the pretty hard problem
The hard problem of consciousness
One of many reasons it can be hard to dive into consciousness questions is that there is a large, intricate, and centuries-old philosophical debate about the fundamental nature of the relationship between the physical world and consciousness. The “hard problem of consciousness” is the question of why physical things and physical processes (“the physical”) are sometimes associated with consciousness. Is consciousness ultimately reducible to the physical? Can physical explanations explain consciousness, or do they leave something out?
Many people have the intuitive sense, and many philosophers have argued, that consciousness does not fit easily within the physical realm and is not amenable to purely physical explanation. Some philosophers maintain that purely physical explanations, no matter how detailed and sophisticated, are simply not the kind of thing that could explain why some brain states are accompanied by these qualitative experiences (e.g. the redness of red), rather than some other experiences (e.g. the greenness of green), or by no experience at all. It is in this philosophical literature that people deploy thought experiments such as Mary the super-scientist, color spectrum inversion, or p-zombies, which are meant to draw our attention to the alleged gap between physical explanations and consciousness. I won’t rehash the arguments or these thought experiments here, but note that this literature concerns the question of the metaphysical relationship between the physical and the phenomenal. It is as responses to the hard problem that we get these oft-debated metaphysical views:
Physicalism: fundamentally there are only physical properties / things / processes. Consciousness just is identical to, or grounded in, the physical.
Dualism: fundamentally there are both physical and phenomenal properties / things / processes; consciousness is distinct from the physical.
Panpsychism: the intrinsic nature of matter is phenomenal - so the basic building blocks of reality are, in some sense, both physical and phenomenal.
The pretty hard problem of consciousness
Fortunately for the interested reader, these positions and the millennia-old and intricate disputes between them can be (largely) set aside when we ask, “Which physical states are associated with consciousness, and which are not?” This question is what Scott Aaronson (2014) has dubbed the “Pretty Hard Problem”; David Chalmers notes its distinctness from the hard problem:
An answer to the Pretty Hard Problem so construed will be a universal psychophysical principle, one which assigns a state of consciousness (possibly a null state) to any physical state...it’s still easier than the original hard problem at least in the sense that it needn’t tell us why consciousness exists in the first place, and it can be neutral on some of the philosophical issues that divide solutions to the hard problem.
Most philosophers and scientists should accept that the Pretty Hard Problem is at least a meaningful problem with better and worse answers. As long as one accepts that consciousness is real, one should accept that there are facts (no matter how hard to discover) about which systems have which sort of consciousness….
One way to see how the hard problem and the pretty hard problem are indeed distinct questions is to note that the pretty hard problem arises as a further question for all of the metaphysical positions we saw above.
Physicalism’s version of the pretty hard problem:
Which physical states or processes are identical to, or ground, consciousness?
Dualism’s version of the pretty hard problem:
Which physical states or processes are correlated with consciousness? Dualists usually think that a physical brain (or some kind of physical system) is required for consciousness in the actual world. They just think that brain states or processes correlate with consciousness, which is a separate property or phenomenon linked to the physical via “bridge” laws of nature which specify the relationship between the physical and the phenomenal.
Panpsychism’s version of the pretty hard problem:
Which combinations of conscious building blocks combine into conscious systems? One might have thought that panpsychism dictates a stance on the pretty hard problem: doesn’t panpsychism entail that every system - humans, trees, nematodes, GPT-3 - is conscious? Not necessarily - in fact, most panpsychists think that the fundamental building blocks of reality are conscious, but not that every aggregate of these building blocks is itself conscious. On this view, tables are composed of conscious building blocks, but tables qua tables are not conscious. In contrast, you are composed of conscious building blocks that combine into your human consciousness. So panpsychists have their own pretty hard problem - they want to know in which physical systems this “combination” into new aggregates of consciousness occurs.
It’s possible that we could come to an answer to the pretty hard problem, and philosophers could still dispute different answers to the hard problem.
Scientific theories of consciousness
Scientific theories of consciousness are best thought of as (in the first instance) answers to the pretty hard problem, not to the hard problem. It’s in the scientific literature on consciousness that one finds theories of consciousness like: global workspace theory, higher-order thought theories, biological theories, predictive processing and Bayesian theories of consciousness, the attention schema theory, midbrain-based theories, Integrated Information Theory, and so on. This is where one will find experiments trying to tease out which brain regions are active when humans consciously versus unconsciously detect a dot flashed on a screen, or whether children born lacking most of their cerebral cortex are conscious, or whether bees are susceptible to some of the same (possibly) consciousness-affecting visual effects as humans.
Ways that the hard problem and the pretty hard problem do intersect
With all that said, there are ways that the hard problem and the pretty hard problem intersect. Here are a few:
Does consciousness have sharp boundaries, or vague boundaries? If your stance on the hard problem is that consciousness is fundamentally a physical phenomenon, you are more likely to think that consciousness admits of vagueness, since many of the proposed physical or computational bases of consciousness (“global information broadcast”, “bodily self-modeling”, “internal self-monitoring”) will also admit of vagueness.
Does consciousness require complex cognition? In answering the hard problem, panpsychists (unlike dualists and physicalists) have already taken a decisive stance on this question - they think an electron can have some sort of simple experience, even though they do not think that an electron can entertain complex thoughts. In this sense, panpsychists are more open to solutions to the pretty hard problem that involve very ‘simple’ forms of consciousness - though not exactly for the reason people often think that they are open to it (see above, on combination).
Does phenomenal consciousness exist in the first place? This is obviously one very key way that they intersect! One reaction to the difficulties of the hard problem is to not answer it but reject the question: to deny that phenomenal consciousness exists in the first place. This might seem like a surprising view, since arguably phenomenal consciousness is the very thing that we are the most familiar with and sure of. But even this can be denied. And if your response to the hard problem is to reject it by denying that consciousness exists, then the pretty hard problem as formulated will also not arise.
This leads us to one last point.
Illusionists about consciousness still have some pretty hard problems
This post has claimed that there is a pretty hard problem of knowing which AIs or animals are conscious, a question which can admit of different answers, no matter how hard those answers are to arrive at. But in my experience, some EAs (often those of a LessWrong rationalist bent) are suspicious of even admitting that there is a pretty hard problem - they hesitate to acknowledge questions about which systems are conscious. I suspect that this suspicion arises because they think that to concede that there is a pretty hard problem will automatically commit them to a dubious position on the hard problem, and/or set them up for some sleight of hand involving sophistical thought experiments.
My opinion is that there is a real question here about phenomenal consciousness, and that it doesn’t involve any of those commitments. There is a thin and “innocent” notion of consciousness--the bare fact that we have subjective experience. You do not have to think that consciousness is especially special or spooky or strange, or think that thought experiments about consciousness are a useful methodology, in order to wonder whether chickens are conscious and what their experiences are like.
Still, one can deny the existence of phenomenal consciousness in even this thin sense, and indeed some smart and thoughtful people do. This position is known as “strong illusionism” (henceforth just “illusionism”). Illusionism does dissolve the hard problem, and technically speaking the pretty hard problem as well. But effective altruists who are illusionists should recognize that illusionism still leaves unanswered very important and difficult questions that are closely related to the pretty hard problem.
If you don’t like to talk of “consciousness”, you can still acknowledge that “pain” exists - even if it is not associated with consciousness as we normally think it is. The same is true of any of the mental states which according to the illusionist we falsely take to be conscious. Presumably illusionists still care about many of these states and think that they are good or bad: suffering, pain, nausea, discomfort, joy, satisfaction. The illusionist still has a pretty hard problem for any of these states - what physical systems can have these states? How widely distributed are they in the animal kingdom? Could GPT-3 experience discomfort? Could GPT-11? When and why?
These are still perfectly meaningful questions, and not ones that we are close to having good answers to at this point. Illusionism might arguably set us on a better track to address them than realism about consciousness, but we are still far from knowing the answer.
Thanks to Sophie Rose, Dan Chenoweth, and Hedda Hassel Mørch for feedback on drafts of this post.
Not exhaustive. I have excluded idealism, the view that fundamentally there are only mental properties / things / processes. And one response to the hard problem is to deny that consciousness exists, thereby dissolving the problem. This is the subject of the last section. ↩︎
Instead of calling this problem Pretty Hard, Ned Block (2002) independently dubbed this the Harder Problem. For fairness I propose that we split the difference and call it the “Quite Hard Problem”. For US readers this will mean “Very Hard Problem”, and for UK readers it will mean “Pretty Hard Problem”. ↩︎
To someone with an empiricist sensibility, the fact that these metaphysical disputes could continue even if we had a good empirical theory of consciousness will raise suspicions about the meaningfulness of the dispute. ↩︎
It is technically open to a panpsychist to think that ‘combination’ only occurs when there is complex cognition, however. ↩︎
Thanks for writing this summary! This all seems really important and really hard to figure out. What approaches/methods do researchers use to suggest answers to these kinds of questions? Can you give some examples of recent progress?
[Replying separately with comments on progress on the pretty hard problem; the hard problem; and the meta-problem of consciousness]
The meta-problem of consciousness is distinct from both a) the hard problem: roughly, the fundamental relationship between the physical and the phenomenal; and b) the pretty hard problem: roughly, knowing which systems are phenomenally conscious.
The meta-problem is c) explaining "why we think consciousness poses a hard problem, or in other terms, the problem of explaining why we think consciousness is hard to explain" (6)
The meta-problem has a very interesting relationship to the hard problem. To see what this relationship is, we need a distinction between the “hard problem” of explaining consciousness and what Chalmers calls the ‘easy’ problems of explaining “various objective behavioural or cognitive functions such as learning, memory, perceptual integration, and verbal report”.
(Much like ‘pretty hard’, ‘easy’ is tongue-in-cheek - the easy problems are tremendously difficult, and thousands of brilliant people with expensive fancy machines are constantly hard at work on them.)
Ease of the easy problems: "the easy problems are easy because we have a standard paradigm for explaining them. To explain a function, we just need to find an appropriate neural or computational mechanism that performs that function. We know how to do this at least in principle."
Hardness of the hard problem: "Even after we have explained all the objective functions that we like, there may still remain a further question: why is all this functioning accompanied by conscious experience?...the standard methods in the cognitive sciences have difficulty in gaining purchase on the hard problem."
The meta-problem is interesting because it is deeply related to the hard problem, but it is strictly speaking an ‘easy’ problem: it is about explaining certain cognitive and behavioral functions. For example: thinking “I am currently seeing purple and it seems strange to me that this experience could simply be explained in terms of physics” or “It sure seems like Mary in the black and white room lacks knowledge of what it’s like to see red”; or sitting down and writing “boy, consciousness sure is puzzling, I bet I can get funding to work on this.”
Chalmers hopes that cognitive science can gain traction on the meta-problem by explaining how these cognitive functions and behaviors come about in ‘topic neutral’ terms that don’t commit to any particular metaphysical theory of consciousness. And then, if we have a solution to the meta-problem, this might shed light on the hard problem.
One particularly intriguing connection is that it seems like a) a solution to the meta-problem should at least be possible, and b) if it is, then it gives us a really good reason not to trust our beliefs about consciousness!
Part of my aforementioned growing interest in illusionism is that I think this argument is pretty good. Chalmers came up with it and elaborated it - even though he is not an illusionist - and I like his elaboration of it more than his replies!
That's a great question. I'll reply separately with my takes on progress on a) the pretty hard problem, b) the hard problem, and c) something called the meta-problem of consciousness.
 With apologies for introducing yet another 'problem' to distinguish between, when I've already introduced two! (Perhaps you can put these three problems into Anki?)
Progress on the pretty hard problem
This is my attempt to explain Jonathan Birch's recent proposal for studying invertebrate consciousness. Let me know if it makes rough sense!
The problem with studying animal consciousness is that it is hard to know how much we can extrapolate from what we know about what suffices for human consciousness. Let's grant that we know from experiments on humans that you will be conscious of a visual perception if you have a neural system for broadcasting information to multiple sub-systems in the brain (this is the Global Workspace Theory mentioned above), and that the visual perception is broadcast. Great, now we know that this sophisticated human Global Workspace suffices for consciousness. But how much of that is necessary? How much simpler could the Global Workspace be and still result in consciousness?
When we try to take a theory of consciousness "off the shelf" and apply it to animals, we face a choice of how strict to be. We could say that the Global Workspace must be as complicated as the human case. Then no animals count as conscious. We could say that the Global Workspace can be very simple. Then maybe even simple programs count as conscious. To know how strict or liberal to be in applying the theory, we need to know what animals are conscious. Which is the very question!
Some people try to get around this by proposing tests for consciousness that avoid the need for theory--the Turing Test would be an example of this in the AI case. But these usually end up sneaking theory in through the back door.
Here's Birch's proposal for getting around this impasse:

1. Identify a cluster of abilities that, in humans, co-vary with conscious perception. It's a cluster because it seems like “the abilities will come and go together, co-varying in a way that depends on whether or not a stimulus is consciously perceived” (8). Empirically, we have evidence that some abilities in the cluster include: trace conditioning, rapid reversal learning, and cross-modal learning.

2. Look for these clusters of abilities in animals.

3. See if things which are able to make perceptions unconscious in humans--flashing them quickly and so forth--seem to 'knock out' that cluster in animals. If we can make the clusters come and go like this, it's a pretty reasonable inference that the cause of this is consciousness coming and going.
As I understand it, Birch (a philosopher) is currently working with scientists to flash stuff at bees and so forth. I think Birch's research proposal is a great conceptual advance and I find the empirical research itself very exciting and am curious to see what comes out of it.
Progress on the hard problem
I am much less sure of how to think about this than about the pretty hard problem. This is in part because in general, I'm pretty confused about how philosophical methodology works, what it can achieve, and the extent to which there is progress in philosophy. This uncertainty is not in spite of, but probably because of doing a PhD in philosophy! I have considerable uncertainty about these background issues.
One claim that I would hang my hat on is that the elaboration of (plausible) philosophical positions in greater detail, and more detailed scrutiny of them, is a kind of progress. And in this regard, I think the last 25 years have seen a lot of progress on the hard problem. The possible solution space has been sketched more clearly, and arguments elaborated. One particularly interesting trend is the elaboration of the more 'extreme' solutions to the hard problem: panpsychism and illusionism. Panpsychism solves the hard problem by making consciousness fundamental and widespread; illusionism dissolves the hard problem by denying the existence of consciousness.
Funnily enough, panpsychists and illusionists actually agree on a lot - they are both skeptical of programs that seek to identify consciousness with some physical, computational, or neural property; they both think that if consciousness exists, then it has some strange-sounding relation to the physical. For illusionists, this (putative) anomalousness of consciousness is part of why they conclude it must not exist. For panpsychists, this (putative) anomalousness of consciousness is part of why they are led to embrace a position that strikes many as radical. You can think of this situation by analogy: theologically conservative religious believers and hardcore atheists are often united in their criticisms of theologically liberal religious believers. Panpsychists and illusionists are likewise united in their criticisms of 'moderate' solutions to the hard problem.
I think the elaboration of these positions is progress. And this situation also forces non-panpsychist consciousness realists, who reject the 'extremism' of both illusionism and panpsychism, to respond and elaborate their views in a stronger way.
For my part, reading the recent literature on illusionism has made me far more sympathetic to it as a position than I was before. (At first glance, illusionism can just sound like an immediate non-starter. Cartoon sketch of an objection: How could consciousness be an 'illusion'? Illusions are mismatches between appearance and reality, and with consciousness the appearance is the reality. Illusionists can respond to this objection - but that's a subject for another day.) If I continue to be sympathetic to illusionism, then I can say: the growing elaboration and appeal of illusionism in the last decade represents progress.
But I think there is at least a 40% chance that my mind will have changed significantly regarding illusionism within the next three months.