
EDIT (Oct 1, 2022): I now think (a) I'm overextending some of these theories of consciousness, and so am mistaken about some of them in practice (e.g. GWT is not meant to be a general theory of consciousness or to explain what makes something conscious, and is better thought of as describing neural correlates of consciousness in humans, although this also counts against the theories), and (b) I'm perhaps sometimes misinterpreting theories (possibly most of the "(Other) Higher-order processes" section).


Summary: The necessary features for consciousness in prominent physical theories of consciousness, insofar as they are actually described in terms of physical processes, do not exclude panpsychism, the possibility that consciousness is ubiquitous in nature, including in things that aren't typically considered alive. I’m not claiming panpsychism is true, although this finding significantly increases my credence in it, and those other theories could still be useful as approximations to judge degrees of consciousness. Overall, I'm skeptical that further progress in theories of consciousness will give us plausible descriptions of physical processes necessary for consciousness that don't arbitrarily exclude panpsychism, whether or not panpsychism is true.

The proposed necessary features I will look at are information integration, attention, recurrent processes, and some higher-order processes. These are the main features I've come across, but this list may not be exhaustive.

I conclude with a short section on processes that matter morally.

Some good discussion prompted by this post ended up on Facebook here.

Disclaimer and level of confidence: I am not an expert in neuroscience, consciousness or philosophy of mind, and have done approximately no formal study on these topics. This article was written based on 1-2 weeks of research. I'm fairly confident in the claim that theories of consciousness can't justifiably rule out panpsychism (there are other experts claiming this, too; see the quotes in the Related work section), but not confident in my characterizations of these theories, which I understand mostly only at a high level.

Related work

Similar points are made by Brian Tomasik here. For more of Brian's writings on consciousness and panpsychism, see his section of articles here.


In this paper, it's argued that each of several proposed precise requirements for consciousness can be met by a neural network with just a handful of neurons. The authors call this the "small network argument". (Thanks also to Brian for sharing this.) I quote:

For example, two neurons, mutually interconnected, make up a recurrent system. Hence, these two neurons must create consciousness if recurrence is sufficient for consciousness (e.g. Lamme, 2006). Minimal models of winner-take-all computations require only three “competing” neurons which are fully connected to three presynaptic input neurons, plus potentially a single neuron controlling vigilance (Grossberg, 1999). Hence, such a network of seven neurons is sufficient to develop resonant states allowing learning (Grossberg, 1999) and working memory (Taylor, 1998). Analogously, if neural oscillations or synchrony are the main characteristics of consciousness, then, a group of three interconnected neurons firing in synchrony is conscious. Similarly, a thermostat, typically modelled as a single control loop between a temperature sensor (‘perception’) and an on-off switch for a heater (‘action’), is a classical example of a perception-action device. It can be formulated as a two-neuron feedforward network with a sensory neuron connecting onto an output neuron controlling the heater switch.
(...)
Still, attention can be integrated within a small network just by adding one extra input arising from a second group of neurons (e.g. Hamker, 2004), containing potentially a very small number of cells.
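
To make the small network argument concrete, here is a minimal sketch (my own illustration, not code from the paper) of two systems from the quote: a pair of mutually interconnected neurons as a recurrent system, and a thermostat as a two-neuron feedforward perception-action device. All weights, the update rule and the setpoint are made-up values for illustration.

```python
import numpy as np

# Two mutually interconnected neurons: a minimal recurrent system.
# Weights, update rule and initial state are arbitrary choices for illustration.
w_ab, w_ba = 0.9, -0.8
a, b = 1.0, 0.0
for t in range(5):
    # Simultaneous update: A's next state depends on B, and vice versa,
    # so the causal graph contains the directed cycle A -> B -> A.
    a, b = np.tanh(w_ba * b), np.tanh(w_ab * a)
    print(f"t={t}: a={a:.3f}, b={b:.3f}")

# A thermostat as a two-neuron feedforward perception-action device:
# sensory neuron (temperature) -> output neuron (heater switch).
def thermostat(temperature, setpoint=20.0):
    sensory = temperature - setpoint  # 'perception'
    heater_on = sensory < 0.0         # 'action': heat when below the setpoint
    return heater_on

print(thermostat(18.5))  # True: the heater switches on
```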

In this paper, the same point is made, and it's further concluded that popular theories like IIT, RPT and GNWT "endorse panpsychism" under a slightly limited form "where all animals and possibly even plants would be conscious, or at least express the unconscious/conscious dichotomy." The author writes:

Current models of consciousness all suffer from the same problem: at their core, they are fairly simple, too simple maybe. The distinction between feedforward and recurrent processing already exists between two reciprocally connected neurons. Add a third and we can distinguish between ‘local’ and ‘global’ recurrent processing. From a functional perspective, processes like integration, feature binding, global access, attention, report, working memory, metacognition and many others can be modelled with a limited set of mechanisms (or lines of Matlab code). More importantly, it is getting increasingly clear that versions of these functions exist throughout the animal kingdom, and maybe even in plants.

In my view, these don't go far enough in their conclusions. Why shouldn't an electron and its position count as a neuron and its activity? With that, we get a fuller panpsychism.


For descriptions of and discussion about specific physical theories of consciousness, see:


Information integration

1. Any time what happens at one location depends causally and separately on each of at least two other locations (regardless of how those two locations depend on each other), that is a kind of information integration, in my view. This is widespread. For example, an electron's position depends causally on multiple things.
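As a toy sketch of this minimal notion of integration (my own illustration, with a made-up causal graph): represent causal dependence as a directed graph and call a node "integrating" whenever it depends causally on at least two distinct other nodes.

```python
import networkx as nx

# Toy causal graph: C depends causally on both A and B; D depends only on C.
g = nx.DiGraph()
g.add_edges_from([("A", "C"), ("B", "C"), ("C", "D")])

def integrates(graph, node):
    # A node 'integrates information' in the minimal sense above if it
    # has at least two distinct causal parents.
    return graph.in_degree(node) >= 2

print(integrates(g, "C"))  # True: C integrates A and B
print(integrates(g, "D"))  # False: D has a single causal parent
```

By this criterion, almost any physical variable integrates information, since almost everything depends on more than one cause, which is the point.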

EDIT: It seems Integrated Information Theory depends on more than just information integration as I describe it above, although the theory is considered panpsychist. Rather, integration is a kind of mutual dependence, so that causal cycles are necessary, and IIT focuses on irreducible systems, in which each part affects each other part; see here. See the next section on recurrent processes.


Attention and recurrent processes

0. Bottom-up attention is just information integration, which is feedforward, i.e. there are no directed loops, so no neuron can feed into itself, including through other neurons. A causal relationship like A → B → A is recurrent, where X → Y indicates that X causes Y, or that Y depends causally on X.

1. Top-down/selective attention is “global” recurrent processing and reduces to local recurrent processing, because “global” is meaningless without specifying which system it is global to. See the reduction from GWT to RPT here. See the description of GWT (GNWT) in the abstract here.

2. Recurrent processing reduces to feedforward processing over time, because the causal graph is feedforward when nodes are labelled by (neuron, time) pairs. Think of unfolded/unrolled recurrent neural networks in AI. For example, "Neuron A fires, causing B to fire, causing A to fire again, causing B to fire again" is the same as "A1 fires, causing B1 to fire, causing A2 to fire, causing B2 to fire", which is feedforward and not recurrent.
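
Here is a sketch of that unrolling (the weights and dynamics are arbitrary assumptions of mine): the same trajectory can be computed either by a recurrent loop over two neurons A and B, or by a single feedforward pass through time-indexed copies (A1, B1), (A2, B2), ..., in which no node ever feeds into itself.

```python
import numpy as np

w_ab, w_ba = 0.7, 0.5  # arbitrary weights for A -> B and B -> A
steps = 4

# Recurrent view: the same two neurons feed into each other over time.
a, b = 1.0, 0.0
recurrent_trace = []
for _ in range(steps):
    a, b = np.tanh(w_ba * b), np.tanh(w_ab * a)
    recurrent_trace.append((a, b))

# Feedforward (unrolled) view: time-indexed copies A_t, B_t, each computed
# exactly once from the previous layer, so the causal graph has no cycles.
layers = [(1.0, 0.0)]
for _ in range(steps):
    a_prev, b_prev = layers[-1]
    layers.append((np.tanh(w_ba * b_prev), np.tanh(w_ab * a_prev)))

# The two descriptions yield identical trajectories.
assert np.allclose(recurrent_trace, layers[1:])
```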

In practice, all recurrent processing necessary for consciousness has finite depth, or else you would never be conscious, and the difference between depth 3 (enough for a cycle) and any higher depth is a matter of degree, not kind. Being unbounded in principle shouldn’t matter if the depth is always finite in fact, because that would mean events that never happen determine whether or not a process is conscious.

• Maybe the “same” neuron should be used in the cycle, but this is a substantive metaphysical claim: it requires identity to be preserved over time in a way that matters, and it's also unfalsifiable, since we could approximate recurrent behaviour arbitrarily closely with purely feedforward processes. This seems pretty unlikely to me, though perhaps not extremely unlikely. Indeed, feedforward networks are universal approximators, so any function a network with recurrence can implement, a feedforward network can approximate, at least in terms of inputs and outputs. This is the "unfolding argument" in this paper. To me, the stronger argument is that every network is metaphysically a feedforward one, including all of its inner workings and intermediate processes and states, assuming identity doesn't matter in a strict sense.
• Maybe the feedforward processing should work in a certain way to suitably simulate recurrent processing. This seems like a matter of degree, not kind, and feedforward processing with depth at least 3 should basically always simulate some recurrent process with nonzero accuracy, i.e. simulate some A → B → A to some degree. EDIT: Maybe A → B → A shouldn't be called recurrent, since it only has one instance of each of A → B and B → A, so we should look at A → B → A → B or A → B → A → B → A; the latter has two of each arrow.

3. Recurrent processing is ubiquitous anyway. An electron influences other particles, which in turn influence the electron. (Credit to Brian Tomasik)


(Other) Higher-order processes

See also Brian's writing here, from which some of these arguments are taken.

1. Requiring a higher-order relationship of degree > 2 is arbitrary and should be reduced to degree 2, if the kind of required higher-order relationship is the same.

2. The line between brain and outside is arbitrary, so second-order theories reduce to particular first-order ones. For a mental state (brain process) X to be experienced consciously, higher-order theories require some Y, which takes output from X as input, with certain relationships between X and Y. But why should X be in the brain itself? If neurons Y relate to changes in the outside world X with the same kind of relationship, then the outside world is experienced consciously. So second-order reduces to a kind of first-order. Relatedly, see the generic objection to higher-order theories here, which roughly states that Y being "aware" of a rock X doesn't make X a conscious rock.

3. If X causes Y, then Y predicts X to nonzero degree. Under one characterization of higher-order processes (e.g. see here), we require Y to predict future “brain” states/processes X with input from X for X to be experienced consciously (if we also require attention for some kind of reflectivity, see the first section). How accurately does Y have to predict X, and how should we measure this? If the requirement were perfect prediction, we wouldn’t be conscious. The line seems arbitrary, so this is a matter of degree, not kind, and basically any Y connected to X should predict X to nonzero degree.

• We could say Y predicts X if X does not receive input from Y, and the correlation between Y at t1 and X at t2 is nonzero, or Y at t1 and X at t2 are not statistically independent. However, this too seems pretty ubiquitous: if neuron X fires because it receives some sensory input, and neuron Y fires because X does, there's a better chance than average that X will continue to receive similar input and fire again, so Y firing predicts X firing again, and often does so reasonably well (see the toy simulation after this list).
• (More plausible than the stipulation that X does not receive input from Y is that there's a dependence or correlation between Y at t1 and X at t2 even if we hold constant the information flowing from Y to X.)
• Maybe instead, with times t1 < t2 < t3, X acts at t1, Y receives input from X and reacts at t2, X acts again at t3, and Y at t2 should correlate with some measure of the difference between X at t1 and X at t3, so that Y predicts changes in X. But even this can often happen by coincidence, and I don't see a non-arbitrary and principled way to distinguish coincidence from non-coincidence. I assign probability ~0 to the claim that Y does not predict changes in X at all if X's behaviour causes Y's, since that would require a perfect probabilistic balancing of events. Furthermore, there are multiple ways to measure change in X, it's unlikely any particular one is "the right way", and it's extremely unlikely this perfect balance would be achieved for multiple nonequivalent measures at the same time.
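
Here is a toy simulation of how cheap this kind of "prediction" is (the dynamics are entirely made up, not taken from any of the cited papers): X fires in response to slowly varying sensory input, Y merely echoes X with a delay and never feeds back into X, yet Y's state still correlates strongly with X's future firing.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 10_000

# X fires in response to autocorrelated ('slowly varying') sensory input.
sensory = 0.1 * np.cumsum(rng.normal(size=T))
x = (sensory > 0).astype(float)

# Y passively echoes X one step later; there is no feedback from Y to X.
y = np.roll(x, 1)
y[0] = 0.0

# Does Y at time t 'predict' X at time t+1 to nonzero degree?
corr = np.corrcoef(y[:-1], x[1:])[0, 1]
print(f"corr(Y_t, X_(t+1)) = {corr:.2f}")  # strongly nonzero
```

Y does nothing but passively follow X, yet it satisfies a "predicts to nonzero degree" criterion almost automatically, because X's input persists over time.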

4. Even if we require higher orders, these will happen ubiquitously by chance, because these higher-order relationships are everywhere, as in 3. One particle affects another, which affects another, which affects another, and so on. (Credit to Brian Tomasik again.)

5. Learning isn't necessary. Maybe we require Y to be trainable to get better at predicting X (or to improve the degree of their higher-order relationship). However, it doesn’t seem like we should require the system to be continuously trained, so if we disconnect the part that updates Y (e.g. by anterograde amnesia or by disconnecting the reinforcement learning system), why would X no longer be consciously experienced? See “Do non-RL agents matter?” here.


Remark: If Y = brain relates to X = environment in a higher-order way and to no other systems, and we require higher-order relationships, then any suffering isn’t happening in Y alone, but in X and Y together. If there’s suffering in X and Y, it’s more like Y, the brain, is conscious of pain in X, the environment, according to higher-order theories. This is still compatible with panpsychism, but seems like a morally important difference from the picture without the higher-order requirement, if only as a matter of degree. It is also very weird.


What about specific combinations of these processes?

For example: sensory inputs feeding into top-down and bottom-up attention, feeding into working memory, a self-model and output. Essentially, this is a graph embedding problem: how well does this abstract organization of processes, this "pattern", apply to a specific physical process? I think if each of the processes in the pattern can be found ubiquitously in nature, the pattern will have to be very complex, perhaps intractably and unjustifiably complex, to not be found ubiquitously in nature as well. It seems unlikely we'll be able to require specific numbers of neurons in subnetworks implementing one feature, e.g. in an attention subnetwork. Sure, we could use the smallest number ever observed to support introspective report so far, but we won't be able to prove that it can't be done with fewer, and this number will continue to decrease over time. It is not enough to knock out neurons in an existing brain to rule out the possibility that it could be done with fewer; you'd have to test different organizations of these neurons. I doubt that this will ever be feasible in a way that gives us reliable observations, e.g. reports we'll trust.
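Here is a sketch of the graph embedding framing (my own illustration; both the "pattern" and the "physical process" graphs below are made up): checking whether an abstract organization of processes occurs within a physical causal graph is subgraph matching, and a simple pattern tends to embed almost anywhere in a large enough graph.

```python
import networkx as nx
from networkx.algorithms import isomorphism

# Abstract 'pattern': sensory input -> attention -> working memory -> output.
pattern = nx.DiGraph([("input", "attention"),
                      ("attention", "memory"),
                      ("memory", "output")])

# Toy 'physical process': a random directed graph standing in for a web of
# causally interacting components (particles, neurons, ...).
physical = nx.gnp_random_graph(50, 0.1, directed=True, seed=0)

# Does the pattern embed anywhere in the physical process?
matcher = isomorphism.DiGraphMatcher(physical, pattern)
print(matcher.subgraph_is_isomorphic())  # very likely True: simple patterns
                                         # embed almost everywhere
```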

Still, this seems like it might be a more promising route for progress, if we don't think specific individual kinds of low-level processes are enough. Looking into more global or holistic features like brain waves might also be promising.

A major concern is overfitting our theory to our observations in humans. We want a general theory of consciousness, not a theory of consciousness in human brains.


Processes that matter morally

1. As far as I know, there’s no (plausible) account of what suffering, pleasure, preference and preference satisfaction, etc. are in terms of basic physical processes, to distinguish them from other conscious processes. I expect any such account to face objections and reductions similar to those above.

2. I don’t think morally relevant processes depend in principle on learning, since you could remove the learning/updates. See “Do non-RL agents matter?” here.

3. However, systems that were tuned by learning or are more similar to systems tuned by learning, in my view, matter more in expectation.

Comments (28)

The arguments in Reasons to doubt that suffering is ontologically prevalent by @Magnus Vinding have made me more skeptical of panpsychism, especially this excerpt:

Counterexamples: People who do not experience pain or suffering

One argument against the notion that suffering is ontologically prevalent is that we seem to have counterexamples in people who do not experience pain or suffering. For example, various genetic conditions seemingly lead to a complete absence of pain and/or suffering. This, I submit, has significant implications for our views of the ontological prevalence (or non-prevalence) of suffering.

After all, the brains of these individuals include countless subatomic particles, basic biological processes, diverse instances of information processing, and so on, suggesting that none of these are in themselves sufficient to generate pain or suffering.

One might object that the brains of such people could be experiencing suffering — perhaps even intense suffering — that these people are just not able to consciously access. Yet even if we were to grant this claim, it does not change the basic argument that generic processes at the level of subatomic particles, basic biology, etc. do not seem sufficient to create suffering. For the processes that these people do consciously access presumably still entail at least some (indeed probably countless) subatomic particles, basic biological processes, electrochemical signals, different types of biological cells, diverse instances of information processing, and so on. This gives us reason to doubt all views that see suffering as an inherent or generic feature of processes at any of these (quite many) respective levels.

Of course, this argument is not limited to people who are congenitally unable to experience suffering; it applies to anyone who is just momentarily free from noticeable — let alone significant — pain or suffering. Any experiential moment that is free from significant suffering is meaningful evidence against highly expansive views of the ontological prevalence of significant suffering.

FWIW, I don't see that piece as making a case against panpsychism, but rather against something like "pansufferingism" or "pansentienceism". In my view, these arguments against the ontological prevalence of suffering are compatible with the panpsychist view that (extremely simple) consciousness / "phenomenality" is ontologically prevalent (cf. this old post on "Thinking of consciousness as waves").

Good point.

I think we can extend your argument to one against pan-experience-of-X-ism, for (almost?) any given X, no matter how specific or broad, with your other example for X being "wanting to go to a Taylor Swift concert so as to share the event with your Instagram followers". This is distinct from panpsychism, which only (?) requires that mental contents or experiences of something in general be widespread, not that any given (specific or kind of) mental content X be widespread.

[anonymous]

Kudos btw for writing this. Consciousness is a topic where it can be really hard to make progress and I worry that people aren't posting enough about it for fear of saying something wrong.

Cool post. :) I'm not sure if I understand the argument correctly, but what would you say to someone who cites the "fallacy of division"? For example, even though recurrent processes are made of feedforward ones, that doesn't mean the purported consciousness of the recurrent processes also applies to the feedforward parts. My guess is that you'd reply that wholes can sometimes be different from the sum of their parts, but in these cases, there's no reason to think there's a discontinuity anywhere, i.e., no reason to think there's a difference in kind rather than degree as the parts are arranged.

Consider a table made of five pieces of wood: four legs and a top. Suppose we create the table just by stacking the top on the four legs, without any nails or glue, to keep things simple. Is the difference between the table versus an individual piece of wood a difference in degree or kind? I'm personally not sure, but I think many people would call it a difference in kind.

I think an alternate route to panpsychism is to argue that the electron has not just information integration but also the other properties you mentioned. It has "recurrent processing" because it can influence something else in its environment (say, a neighboring electron), which can then influence the original electron. We can get higher-order levels by looking at one electron influencing another, which influences another, and so on. The thing about Y predicting X would apply to electrons as well as neurons.

The table analogy to this argument is to note that an individual piece of wood has many of the same properties as a table: you can put things on it, eat food from it, move it around your house as furniture, knock on it to make noise, etc.

I'm not sure if I understand the argument correctly, but what would you say to someone who cites the "fallacy of division"? For example, even though recurrent processes are made of feedforward ones, that doesn't mean the purported consciousness of the recurrent processes also applies to the feedforward parts. My guess is that you'd reply that wholes can sometimes be different from the sum of their parts, but in these cases, there's no reason to think there's a discontinuity anywhere, i.e., no reason to think there's a difference in kind rather than degree as the parts are arranged.

I basically agree. I think there are no good lines to draw anywhere, so it seems to me to be a difference of degree. I'd guess we can propose minimal isolated systems that are not conscious, perhaps an isolated electron, but that kind of isolation seems rare (maybe impossible?) in the real world.

That being said, I don't think the physical theories have picked out precise properties of "wholes" that don't apply to small ubiquitous systems, just to lesser degrees.

Consider a table made of five pieces of wood: four legs and a top. Suppose we create the table just by stacking the top on the four legs, without any nails or glue, to keep things simple. Is the difference between the table versus an individual piece of wood a difference in degree or kind? I'm personally not sure, but I think many people would call it a difference in kind.

I think people either don't have a precise definition in mind when they think of tables, or if they do, have something in mind that would specifically rule this out. Or they'll revise their definition when presented with such an example: "Oh, but the legs have to be attached!" Of course, what do they mean by legs?

I think an alternate route to panpsychism is to argue that the electron has not just information integration but also the other properties you mentioned. It has "recurrent processing" because it can influence something else in its environment (say, a neighboring electron), which can then influence the original electron. We can get higher-order levels by looking at one electron influencing another, which influences another, and so on. The thing about Y predicting X would apply to electrons as well as neurons.

Agreed. Good point.

I agree that IIT doesn't seem falsifiable since there's no way to confirm something isn't conscious, and that's an important objection, because there probably isn't consciousness without information integration. At least with the other theories I looked at, we could in principle have some confidence that recurrence or attention or predicting lower-order mental states probably isn't necessary, even though there are no sharp lines between processes that are doing these things and those that aren't, and the ones that do to nonzero degree seem ubiquitous. But these processes can only really be ruled out as necessary if they are not necessary for eventual report.

Do I need to be able to eventually report (even just to myself) that I experienced something to have actually experienced it? This also seems unfalsifiable. So processes required for eventual report (the ones necessarily used during experiences that are eventually reported, but not necessarily the ones used during the report itself) can't be ruled out as unnecessary, and I'm concerned that the more complex theories of consciousness are approaching theories of reportability (in humans), not necessarily theories of consciousness. No-report paradigms only get around this through the unfalsifiable assumption that reflexive behaviours correlated with report (under certain experimental conditions) actually indicate consciousness in the absence of report.

So, IIT accepts basically everything as conscious, while reportability requirements can rule out basically everything except humans (and maybe some "higher" animals) under specific conditions (EDIT: actually, I'm not sure about this), both are unfalsifiable, and basically all other physical theories with academic supporters fall between them (maybe with a few extra elements that are falsifiable), and therefore also include unfalsifiable elements. Choosing between them seems like a matter of intuition, not science. Suppose we identified all of the features necessary for reportability. Academics would still be arguing over which ones among these are necessary for consciousness. Some would claim all of them are, others would still support panpsychist theories, and there doesn't seem to be any principled way to decide. They'd just fit their theories to their intuitions of which things are conscious, but those intuitions aren't reliable data, so this seems backwards.

One skeptical response might be that reportability is required for consciousness. But another skeptical response is that if you try to make things precise, you can't rule out panpsychism non-arbitrarily, as I illustrate in this post.

Slightly weaker than report and similar to reportability, sometimes "access" is considered necessary (consciousness is access consciousness, according to Dennett). But access seems to be based on attention or global workspaces, and imprecisely defined processes that are accessing them, and I argue in this post that attention and global workspaces can be reduced to ubiquitous processes, and my guess is that the imprecisely defined processes accessing them aren't necessary (for the same reasons as report) or attempts to define them in precise physical terms will also either draw arbitrary lines or lead to reduction to panpsychism anyway.

Here are some definitions of access consciousness:

Access consciousness: conscious states that can be reported by virtue of high-level cognitive functions such as memory, attention and decision making.

https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613(11)00125-2

A perceptual state is access-conscious, roughly speaking, if its content - what is represented by the perceptual state - is processed via that information-processing function, that is, if its content gets to the Executive System, whereby it can be used to control reasoning and behavior.
(...)
A state is access-conscious (A-conscious) if, in virtue of one's having the state, a representation of its content is (1) inferentially promiscuous (Stich 1978), that is, poised for use as a premise in reasoning, (2) poised for rational control of action, and (3) poised for rational control of speech. (I will speak of both states and their contents as A-conscious.) These three conditions are together sufficient, but not all necessary. I regard (3) as not necessary (and not independent of the others), because I want to allow that nonlinguistic animals, for example chimps, have A-conscious states. I see A-consciousness as a cluster concept, in which (3) - roughly, reportability - is the element of the cluster with the smallest weight, though (3) is often the best practical guide to A-consciousness.

http://www.nyu.edu/gsas/dept/philo/faculty/block/papers/1995_Function.pdf

There's still an ongoing debate as to whether or not the prefrontal cortex is necessary for consciousness in humans, with some claiming that it's only necessary for report in humans:

https://plato.stanford.edu/entries/consciousness-neuroscience/#FronPost

https://www.jneurosci.org/content/37/40/9603

https://www.jneurosci.org/content/37/40/9593.full

https://onlinelibrary.wiley.com/doi/abs/10.1111/mila.12264

Whether or not it is necessary for consciousness in humans could decide whether or not many nonhuman animals are conscious, assuming the kinds of processes happening in the prefrontal cortex are somehow fairly unique and necessary for consciousness generally, not just in humans (although I think attempts to capture their unique properties physically will probably fail to rule out panpsychism non-arbitrarily, like in this post).

I also think these objections will apply to panpsychism generally and any precise physical requirements that don't draw arbitrary lines. In particular, they apply to the other proposed requirements Luke describes in 6.2. Combining precise physical requirements in specific ways, e.g. attention feeds into working memory, which feeds into a process that models/predicts its own behaviour/attention, won't really solve the problem, if each of those requirements is so ubiquitous in nature to nonzero degree, under attempts to make them precise, that specific combinations of them will happen to be, too.

[anonymous]

I like your description of how complex physical processes like global attention / GWT reduce to simple ones like feedforward nets.

But I don't see how this implies that e.g. GWT reduces to panpsychism. E.g. to describe a recurrent net as a feedforward net you need a ridiculous number of parameters (with the same parameter values in each layer). So that doesn't imply that the universe is full of recurrent nets (even if it were full of feedforward nets which it isn't).

To draw a caricature of your argument as I understand it: It turns out computers can be reduced to logic gates. Therefore, everything is a computer.

Or another caricature: Recurrent nets are a special case of {any arrangement of atoms}. Therefore any arrangement of atoms is an RNN.

edit: missing word

I like your description of how complex physical processes like global attention / GWT to simple ones like feedforward nets.

I think you missed a word. :P

E.g. to describe a recurrent net as a feedforward net you need a ridiculous number of parameters (with the same parameter values in each layer).

That's true, but there's no good line to draw for the number of iterations, so this seems more a matter of degree than kind. (I also don't see why the parameter values should be the same, but maybe this could be important. I wrote that I found this unlikely, but not extremely unlikely.)

So that doesn't imply that the universe is full of recurrent nets (even if it were full of feedforward nets which it isn't).

I do think the universe is full of both, see Brian's comment. E.g. an electron influences other particles which in turn influence the electron again.

To draw a caricature of your argument as I understand it: It turns out computers can be reduced to logic gates. Therefore, everything is a computer.

Basically. The claims these theories are making are that certain kinds of physical processes are required (perhaps in certain ways), but these processes are ubiquitous (so will often be "accidentally" arranged in those certain ways, too), although to much lower degrees. It's like "Computers are physical systems made up of logic gates. Logic gates are everywhere, so computers are everywhere." Their necessary conditions that can be explained in physical terms are too easy to meet.

Or another caricature: Recurrent nets are a special case of {any arrangement of atoms}. Therefore any arrangement of atoms is an RNN.

This would of course be invalid logic on its own. I'd say that ubiquitous feedforward processes simulate recurrent ones to shallow recurrence depth.

On further reflection, though, I think A → B → A → B or A → B → A → B → A may be better called recurrent than just A → B → A, since the latter only includes one of each of A → B and B → A.

(I do think most actual local (in the special/general relativity sense) arrangements of atoms have recurrence in them, though, as long as the atoms' relative positions aren't completely fixed. I expect feedback in their movements due to mutual influence.)

[anonymous]

I agree that physical theories of consciousness are panpsychist if they say that every recurrent net is conscious (or that everything that can be described as GWT is conscious). The main caveats for me are:

Does anyone really claim that every recurrent net is conscious? It seems so implausible. E.g. if I initialize my net with random parameters, it just computes garbage. Or if I have a net with 1 parameter it seems too simple. Or if the number of iterations is 1 (as you say), it's just a trivial case of recurrence. Or if it doesn't do any interesting task, such as prediction...

(Also, most recurrent nets in nature would be gerrymandered. I could imagine there are enough that aren't though, such as potentially your examples).

NB, recurrence doesn't necessarily imply recurrent processing (the term from recurrent processing theory). The 'processing' part could hide a bunch of complexity?

Does anyone really claim that every recurrent net is conscious? It seems so implausible.

I think IIT supporters would claim this. I don't think most theories or their supporters claim to be panpsychist, but I think if you look at their physical requirements abstractly, they are panpsychist. Actually, Lamme, who came up with Recurrent Processing Theory, claims that it, IIT and GNWT endorse panpsychism here, and it seems that he really did intend for two neurons to be enough for recurrent processing:

Current models of consciousness all suffer from the same problem: at their core, they are fairly simple, too simple maybe. The distinction between feedforward and recurrent processing already exists between two reciprocally connected neurons. Add a third and we can distinguish between ‘local’ and ‘global’ recurrent processing. From a functional perspective, processes like integration, feature binding, global access, attention, report, working memory, metacognition and many others can be modelled with a limited set of mechanisms (or lines of Matlab code). More importantly, it is getting increasingly clear that versions of these functions exist throughout the animal kingdom, and maybe even in plants.

1. In a more limited form applying to basically all animals and possibly plants, too, but I think his view of what should count as a network or processing might be too narrow, e.g. why shouldn't an electron and its position count as a neuron and its firing?

Are linear regressions conscious?

I don't think the theories I looked at can conclude they're not without making arbitrary distinctions in matters of degree rather than kind. Most of the theories themselves make such arbitrary distinctions in my view; maybe all of them except IIT?

Panpsychism still seems like a flavor of eliminativism to me. What do we gain by saying an electron is conscious too? Novel predictions?

Relevant:

By saying an electron is conscious too (although I doubt an isolated electron on its own should be considered conscious, since there may be no physical process there), we may need to expand our set of moral patients considerably. It's possible an electron is conscious and doesn't experience anything like suffering, pleasure or preferences (see this post), but then we also don't (currently, AFAIK) know how to draw lines between suffering and non-suffering conscious processes.

I arrived here from Jay Shooster's discussion about the EA community's attitude to eating animals.

I wasn't aware of the current scientific consensus about consciousness; this article was a good primer on the state of the field for me in terms of which theories are preferred. I do like your argument, though, and I think it's an interesting challenge or way to approach thinking about consciousness in machines. I've typed out/deleted this reply several times as it does make me re-evaluate what I think about panpsychism. I believe I like your approach and think it is useful for thinking about consciousness at least in machines, but am not sure that "panpsychism" as a theory adds much.

Psychological or neurological theories of consciousness are implicitly premised on studying human or non-human animal systems. Thus, though they reckon with the cognitive building blocks of consciousness, there's less examination of just how reductive your system could get and still have consciousness. Whether you're taking a GWT, HOT, or IIT approach, your neural system is made up of millions of neurons arranged into a number of complex components. You might still think there needs to be some level of complexity within your system to approach a level of valenced conscious experience anything like that with which you and I are familiar. Even if there's no arbitrary "complexity cut-off", for "processes that matter morally" do we care about elemental systems that might have, quantitatively, a tiny, tiny fraction of the conscious experience of humans and other living beings?

To be a bit more concrete about it (and I suspect you agree with me on this point): when it comes to thinking about which animals have valenced conscious experience and thus matter morally, I don't think panpsychism has much to add - do you? To the extent that GWT, HOT, or IIT ends up being confirmed through observation, we can then proceed to work out how much of each of those experiences each species of animal has, without worrying how widely that extends out to non-living matter.

And then proceeding squarely on to the question of non-living matter. Even if it's true that neurological consciousness theories reduce to panpsychism, we can still observe that most non-living systems fail to have anything but the most basic similarity to the sorts of systems we know for a fact are conscious. Consciousness in more complex machines might be one of the toughest ethical challenges for our century or perhaps the next one, but I suspect when we deal with it, it might be through approaches like this, which attempt to identify building blocks of consciousness and see how machines could have them in some sort of substantive way rather than in a minimal form. Again, whether or not an electron or positron "has consciousness" doesn't seem relevant to that question.

Having said that, I can see value in reducing down neurological theories to their simplest building blocks as you've attempted here. That approach really might allow us to start to articulate operational definitions for consciousness we might use in studying machine consciousness.

You might still think there needs to be some level of complexity within your system to approach a level of valenced conscious experience anything like that with which you and I are familiar. Even if there's no arbitrary "complexity cut-off", for "processes that matter morally" do we care about elemental systems that might have, quantitatively, a tiny, tiny fraction of the conscious experience of humans and other living beings?

I think we couldn't justify not assigning them some value with such an approach, even if it's so little we can ignore it (although it could add up).

To be a bit more concrete about it (and I suspect you agree with me on this point): when it comes to thinking about which animals have valenced conscious experience and thus matter morally, I don't think panpsychism has much to add - do you? To the extent that GWT, HOT, or IIT ends up being confirmed through observation, we can then proceed to work out how much of each of those experiences each species of animal has, without worrying how widely that extends out to non-living matter.

I agree, and I think this could be a good approach.

My reading leading up to this post and the post itself were prompted by what seemed to be unjustifiable confidence in almost all nonhuman animals not being conscious. Maybe a more charitable interpretation or a steelman of these positions is just that almost all nonhuman animals have only extremely low levels of consciousness compared to humans (although I'd disagree with this).

It's worth checking out this very much ongoing twitter thread with Lamme about related issues.

https://mobile.twitter.com/VictorLamme/status/1258855709623693325

Maybe one good place to draw lines is whether the system does "better than chance" at implementing a function in some way that's correlated with inputs, but it's not clear that rules out panpsychism.

Thanks for writing the post!

Since you write:

... I’m not claiming panpsychism is true, although this significantly increases my credence in it ...

I'm curious what your relative credence is in non-materialist, "idealistic" physicalism, if you're familiar with it? One contemporary account I'm most familiar with is David Pearce's "physicalistic idealism" (an "experimentally testable conjecture" that "reality is fundamentally experiential and that the natural world is exhaustively described by the equations of physics and their solutions") (see also Pearce's popular explanation of his views in a Quora post). Donald Hoffman's "conscious realism" would be another example (I haven't looked deeply into his work).

One can argue that idealistic physicalism is more parsimonious (by being a monistic physicalism) and thus more likely to be true(r) than panpsychism (which assumes property dualism). Panpsychism, on the other hand, may be more intuitive and more familiar to researchers these days, which may explain why it's discussed more(?) these days compared to non-materialist physicalism.

I'm curious what is your relative credence in non-materialist, "idealistic" physicalism if you're familiar with it?

I'm not familiar enough with it to have much of a view, and I only skimmed. Correct me if I'm misunderstanding, but my guess is that basically classical/non-quantum phenomena can be sufficient for consciousness, since the quantum stuff going on in our heads doesn't seem that critical and could be individually replaced with "classical" interactions while preserving everything else in the brain as well as our behaviour. I would say substrate doesn't matter and we can abstract away a lot of details, but some features of how interactions happen might matter (I'm not a computational functionalist).

panpsychism (which assumes property dualism)

I guess this is a matter of definitions, but I don't think this is true, and as far as I can tell, non-materialist physicalism is also compatible with what many would recognize as panpsychism. I would call a theory panpsychist if it calls things like rocks, my desk, etc. conscious. My post here doesn't assume dualism in panpsychism, and is compatible with illusionism, e.g. it applies to attention schema theory.

Thanks for the reply.

... my guess is that basically classical/non-quantum phenomena can be sufficient for consciousness, since the quantum stuff going on in our heads doesn't seem that critical and could be individually replaced with "classical" interactions while preserving everything else in the brain as well as our behaviour.

I'm not sure how to understand your "sufficient", since to our best knowledge the world is quantum, and classical physics is only an approximation. (Quoting Pearce: "Why expect a false theory of the world, i.e. classical physics, to yield a true account of consciousness?".)

One reason Pearce needs quantum phenomena is the so-called binding problem of consciousness. For on Pearce's account, "phenomenal binding is classically impossible." IIRC the phenomenal binding is also what drives David Chalmers to dualism.

I would say substrate doesn't matter ...

It doesn't matter indeed on a physicalistic idealist account. But currently, as far as we know, only brains support phenomenal binding (as opposed to being mere "psychotic noise"), for the reason of a huge evolutionary advantage (to the replicating genes).

... non-materialist physicalism is also compatible with what many would recognize as panpsychism ...

Good point. Thanks :)

I'm not sure how to understand your "sufficient", since to our best knowledge the world is quantum, and the classical physics is only an approximation.

 

I agree, but I think exclusively quantum phenomena like superposition aren't necessary in our account of consciousness; that's a detail we can abstract away. I think we could make all the important phenomena happen on a macroscopic scale where classical physics can adequately describe what's happening, e.g. use macroscopic balls instead of particles for signals.
