
Introduction

I have a theory regarding the nature of consciousness that I'd like to get feedback on.  If it's correct, it would have large implications for effective altruism.  It would be great if someone could explain to me why I'm wrong in a way that I'll understand and agree with, or if I manage to explain it well enough that other people will understand it and agree with it.  It's more likely that neither of those things will happen, but I would still be interested in getting feedback.  I haven't found any well-established theory of consciousness that is very similar to mine, especially regarding the existence of what I call 'the Indicator'.

TLDR/Summary

  1. I think consciousness is mostly just an input channel (I'll call it an 'Experiencer'), that experiences different things (mental events) depending on the physical state of the brain.
  2. But humans also have an indicator (I'll call it the Indicator) in their genetic code that indicates to the human (and indirectly to the Experiencer) that the Experiencer is there and that the human is, in fact, that Experiencer.
  3. We shouldn't mess around with genetics very much, because we won't know if what we create has consciousness or not, even if it acts like it has consciousness.
  4. I don't think anything mechanical like AI will ever have consciousness, because the Indicator is only present in biological entities, as far as we know.

The Experiencer

I think consciousness is basically like an input channel.  We are not actually the humans that we think of ourselves as, and not even the brains inside those humans.  We are actually sort of 'Experiencers' that experience different things based on the physical state of the brains we are associated with.  Rather than our brains thinking and feeling, we, the Experiencers, do that.  We experience different mental events, like thoughts and feelings, and even the sense that we are actively doing something, depending on the physical state of the brains we are associated with.  Our brains alone are basically just biological computers, and we, the Experiencers, are the only things that make our brains more than that.  More specifically, we, the Experiencers, are the ones that experience whatever the human each of us is associated with would say it is experiencing, if asked, assuming the human answered truthfully and accurately.

I also think we, the Experiencers, are not necessarily physical entities.  Or at least there isn't anything to suggest that we are physical entities, and there isn't any reason to necessarily think that consciousness is a physical phenomenon.

We tend to think of ourselves as being humans, or at least as being human brains, but we aren't.  We are something else (Experiencers) that is merely associated with our brains.  We, the Experiencers, are things that have subjective experience and phenomenal consciousness, meaning there is something it is like to be us, whereas brains themselves are just physical objects.  So there isn't anything that it's like to be a human brain, the same way there isn't anything that it's like to be a rock.

All of this approximately lines up with a pretty well-established theory of consciousness called Cartesian Dualism (the view that mind and body are distinct things).  But the next part about "the Indicator" is the part of my theory that I haven't seen represented in any established theories of consciousness.

The thing is, if consciousness (a.k.a. the Experiencer) only functions like an input channel, there would be no explanation for why it is something that humans talk about, and are obviously aware of.  If we Experiencers were just accepting input based on the physical state of the brain, humans would have no awareness of us Experiencers.

The Indicator

I think there must be an indicator (I'll call it 'the Indicator') that is part of the physical behavior of human brains.  We, the Experiencers, are different things from the brains we are attached to, and our brains' awareness that we exist is yet another different thing, a physical thing.  The Indicator indicates to us (both the brain and the Experiencer) not only that each of us is an Experiencer, but also what we Experiencers are currently experiencing.  Since the Indicator is part of the behavior of human brains, that would suggest that it is in the genetic code of humans.
I don't know how the Indicator would have ended up being part of our genetic code.  I have some theories, but it isn't completely relevant to what I'm trying to explain right now, so I'll skip that part in this post.

This also means that, if my theory is correct, you should be able to notice the Indicator in yourself.  However, it is very difficult to notice the Indicator, because the Indicator is an indication of the entirety of your consciousness.  The Indicator not only indicates to you that you are an Experiencer, but it indicates to you everything that you are currently experiencing.  So everything you experience is part of the Indicator.  This makes the Indicator difficult to notice, because it's so intrinsic.  It's just always there, so it fades into the background.  We don't think of the Indicator as being a thing, because it seems like it's just everything.

This makes it nearly impossible for me to really describe it to you, such that you will notice it.  Instead, the best I can do is explain around it, and hope that that helps you notice it yourself.  As Darth Vader would say, "Search your feelings, you know it to be true."

I think some older well-known philosophical questions were trying to accomplish the same thing.  For example: "What's the sound of one hand clapping?" and "If a tree falls in a forest and no one is around to hear it, does it make a sound?"  Taken literally, it's pretty obvious that one hand can't clap, so there is no sound of one hand clapping.  And if a tree falls in a forest, of course it would make a sound even if no one is around to hear it.  But these two questions are trying to help us understand that it is important that there exists at least one Experiencer in the world that could possibly hear a tree falling.  And that if there were a physical world (one hand) but no Experiencers (the other hand), then it wouldn't really matter that there's a physical world, because there wouldn't be anyone to experience it (you need both hands to clap).

I think, if things were a bit different, our brains could, theoretically, be completely unaware of having us Experiencers associated with them.  The brain that is associated with one of us that happens to be aware of its associated Experiencer, call it brain #1, could ask another brain that is completely unaware of its associated Experiencer, call it brain #2, what it is currently experiencing, and brain #2 would have no idea what brain #1 is talking about.

Implications for Effective Altruism

If my theory is correct, there would be a few major implications for effective altruism, and I'll go into those next.

Genetic Engineering

The first major implication deals with genetic engineering.  The problem is that there is nothing to suggest that the Indicator is more than an indicator.  If we were to modify the genes of an organism that doesn't have the Indicator, such that the resulting organism has the Indicator, it would be very likely that that organism would have consciousness.  But if we then modified that organism's genes so that it only had some part of the Indicator, not the entirety of it, we would have no way of knowing where the cut-off point is between an organism that has enough of the Indicator to probably have consciousness and an organism that doesn't.  So, if we want to know which organisms have consciousness, we should probably avoid genetically engineering any aspect of an organism that is involved with the Indicator.  And, similarly, we should avoid genetically engineering an organism such that it has anything similar to the Indicator.

Breeding

There are also similar implications regarding the breeding of humans and animals.  I think it would be safe to assume that, when looking at two almost physically identical objects, either both of them have consciousness or neither does; if one has consciousness then the other must also.  Similarly, if two humans have pretty much the same DNA, then, for the most part, either both have consciousness or neither does.  And, as I said above, if we were to genetically engineer the Indicator part of a human beyond a certain point, we would realistically not know if the human has consciousness or not.  But, for any pattern of DNA that has that result, sufficiently advanced technology could probably create a being with the same DNA through careful breeding or embryo selection rather than genetic engineering.  This suggests that it would also be a bad idea to breed humans such that they only have an incomplete Indicator, and that it would be a bad idea to breed something that has no Indicator into something that has the Indicator.

Machine Consciousness

Another important implication of this is that there's no reason to think that machines will ever have consciousness.  The Indicator is the only indication we have that Experiencers exist, and it is part of the human genome, a purely biological thing.  So there's no reason to think that any kind of machine would have consciousness, regardless of how complex it is or how similar its behavior is to a human's.

Animal Consciousness

My theory also has implications regarding animal consciousness.  The way we can tell that a human has consciousness is basically by asking them (although in reality it's more complicated than that).  Unfortunately, we can't communicate with animals effectively enough to ask them whether they have consciousness.  However, if we could figure out very specifically which physical aspects of the human brain are associated with the Indicator, the aspects that result in a human saying that they have consciousness, we might be able to figure out whether an animal has consciousness by checking whether the animal's brain is physically similar in that way.

Advancing Technology

More generally, all of this presents the problem that, as technology advances, it could get much harder to tell which beings have consciousness and which don't.  Right now it's pretty easy to tell that everyone we encounter who is clearly human has consciousness, but that might eventually get a lot more difficult.  On the other hand, we'll probably also get a better idea of whether animals have consciousness as technology advances.

Similar Established Theories

I've made some attempts at resolving my theory of consciousness with established philosophy, and I'll describe those next.  I initially noticed that I had these specific beliefs regarding the nature of consciousness back in approximately 2015, and I decided to look into it more.  I started off looking around online, but I didn't find much.

Then I thought that maybe I should talk to a philosophy professor about it.  I looked around online for a philosophy professor, and discovered that I was already regularly playing board games with someone who happened to be a philosophy professor at a local university.  I talked to him about it and he pointed me towards some things I could read online, so I did that.
When I tried to describe my theory to him, his response was "yeah, that sounds pretty crazy".

I found a few established philosophical concepts that seemed to line up with my theory, at least as far as consciousness being like an input channel, but I didn't find any established theories of consciousness that included the idea of the Indicator, or any of the implications I think the existence of the Indicator has.

For example, 'epiphenomenalism' is the theory that mental events are caused by physical events in the brain, but that mental events don't impact physical events.  And the 'Cartesian Theater' is a metaphor for consciousness that involves a tiny human called a homunculus sitting in something like a theater inside the person's brain, experiencing whatever the human is experiencing.  Both of those theories line up with my concept of the Experiencer to some extent.

Previous Feedback

While investigating this, I have also described my theory to various people over the years.  Most of the time it seems like they don't understand what I'm trying to describe, probably because ideas related to consciousness are so abstract, and because the Indicator is always there, which makes it more difficult to notice.  At least one person thought that my theory didn't actually describe anything more than Descartes' cogito ("I think, therefore I am").  Once, when I described my theory to someone who was religious, it seemed like she might have understood it and agreed with it, but just thought my description of the 'Experiencer' was an incomplete description of a soul.  I haven't encountered anyone who agrees with my explanation of the Indicator and that it is probably part of the human genetic code.
