
First, basics: I'm a first-year Informatics student. At The University of Edinburgh, where I study, Informatics broadly encompasses Computer Science, AI, and Cognitive Science. I started this programme intending to go into AI safety research later, and because of good personal fit and all that. I know it's a long time in the future and my plan will likely change, but it's good to have plans, right?

I subscribe to the belief that we should maximise the "positive conscious experience" of all beings. Additionally, over the past few months I've grown more and more intrigued by the riddle consciousness poses. My intention has subtly changed from becoming an AI safety researcher to becoming a consciousness researcher by way of AI/Cognitive Science.

Here's my conundrum: researching consciousness does make sense as a way to verify the very basis of my EA beliefs. However, it has practically no direct altruistic impact. I also have only a very narrow view of its pressingness/tractability/replaceability etc., as it is not widely discussed; for example, it has no career profile on 80,000 Hours. All my information basically comes from the people at the Qualia Research Institute, who are really excited about the issue (which admittedly is quite infectious).

So what I'm saying is I need more views on this! What do you think? How important is solidifying the concept of consciousness for EA? If I don't do it, would someone else do it instead? What are your thoughts on a career in this field?

Thanks if anyone actually read this :)))) And even more thanks for any replies!

Answers

I currently think consciousness research is less important/tractable/neglected than AI safety, AI governance, and a few other things. The main reason is that it seems to me to be something we can "punt to the future" or "defer to more capable successors" to a large extent. However, I might be wrong about this. I haven't talked to QRI at sufficient length to truly evaluate their arguments. (See this exchange, which is about all I've got.)

Your link redirects to something else, presumably not what you intended.

kokotajlod (3y): Oops, thanks!

I am a 3rd-year PhD student in consciousness neuroscience. After studying this field for 3 years, I tend to think that better understanding consciousness looks less important than standard EA cause areas.

Understanding consciousness is probably not very neglected. Although the field of consciousness science is relatively young and probably still small relative to other academic fields, it is growing and has established labs such as the Sackler Centre for Consciousness Science, the tlab, Stan Dehaene's lab, Giulio Tononi's lab, and more. Consciousness is a fascinating problem that attracts many intellectuals. There is a conference on the science of consciousness organised every year that probably gathers hundreds of academics: https://assc24.forms-wizard.co.il/ (unsure about the number of participants).

Although I appreciate the enthusiasm of QRI and the original ideas they discuss, I am personally concerned about a potential general lack of scientific rigor induced by the structure of QRI, though I would need to engage more with QRI's content. Consciousness (noted C below) is a difficult problem that quite likely requires collaboration among a good number of academics with solid norms of scientific rigor (i.e. doing better than the current replication crisis).

In terms of the importance of the cause, it is plausible that there is a lot of variation in the architecture and phenomenology of conscious processing, so it is unclear how easily results from current, mostly human-centric consciousness science would transfer to other species or AIs. On the other hand, this suggests that understanding consciousness in specific species might be more neglected (though perhaps having reliable behavioral markers of C would already go a long way toward understanding moral patienthood). In any case, I have a difficult time making the case for why understanding consciousness is a particularly important problem relative to other standard EA causes.

Some lines of interest that, if specified further, could potentially strengthen the case for studying consciousness:

  • If C is necessary for general intelligence, then better understanding C might help us better understand general AI and suggest interesting new directions for AI safety.
  • Building conscious AI (in the form of brain emulations or other architectures) could possibly help us create a large number of valuable artificial beings. Wildly speculative indulgence: being able to simulate humans and their descendants could be a great way to make the human species more robust to most existing existential risks (if it is easy to create artificial humans that can live in simulations, then humanity could become much more resilient).

Overall, I am quite skeptical that, on the margin, consciousness science is the best field for an undergrad in informatics compared to AI safety or other priority cause areas.

Building conscious AI (in the form of brain emulations or other architectures) could possibly help us create a large number of valuable artificial beings. Wildly speculative indulgence: being able to simulate humans and their descendants could be a great way to make the human species more robust to most existing existential risks (if it is easy to create artificial humans that can live in simulations, then humanity could become much more resilient).

That would pose a huge risk of creating astronomical suffering too. For example, if someone decided to run a conscious simulation of natural history on Earth, that would be a nightmare for those who work on reducing s-risks.

Thanks, your perspective on this is really helpful! Especially the points you made about consciousness research not being very neglected. On the other hand, AI research can no longer really be described as neglected either. Maybe the intersection of the two is the way to go; as you said, C might be crucial to AGI.

george (2y): This is why I'm pursuing Cognitive Science.

From a longtermist point of view, if you think humanity will soon be in a position to create or spread consciousness on a huge scale, and has a decent chance of making a hard-to-reverse decision (e.g. sending out autonomous self-replicating conscious entities, either animal or artificial), then knowing more about consciousness soon might be very important.

In the shorter term, knowing more will help animal advocates prioritize between species.

Besides QRI, check out Rethink Priorities and ASENT.

I'm not sure why your answer is so full of repetition, but I will definitely check those orgs out, thanks!

MichaelStJules (3y): Whoops, fixed.

QRI = the Qualia Research Institute

https://qualiaresearchinstitute.org

The "meta-problem of consciousness" is "What is the exact chain of events in the brain that leads people to self-report that they're conscious?". The idea is (1) This is not a philosophy question, it's a mundane neuroscience / CogSci question, yet (2) Answering this question would certainly be a big step towards understanding consciousness itself, and moreover (3) This kind of algorithm-level analysis seems to me to be essential for drawing conclusions about the consciousness of different algorithms, like those of animal brains and AIs.

(For example, a complete accounting of the chain of events that leads me to self-report "I am wearing a wristwatch" involves, among other things, a description of the fact that I am in fact wearing a wristwatch, and of what a wristwatch is. By the same token, a complete accounting of the chain of events that leads me to self-report "I am conscious" ought to involve the fact that I am conscious, and what consciousness is, if indeed consciousness is anything at all. Unless you believe in p-zombies I guess, and likewise believe that your own personal experience of being conscious has no causal connection whatsoever to the words that you say when you talk about your conscious experience, which seems rather ludicrous to me, although to be fair there are reasonable people who believe that.)

My impression is that the meta-problem of consciousness is rather neglected in neuroscience / CogSci, although I think Graziano is heading in the right direction. For example, Dehaene has a whole book about consciousness, and nowhere in that book will you see a sentence that ends "...and then the brain emits motor commands to speak the words 'I just don't get it, why does being human feel like anything at all?'." or anything remotely like that. I don't see anything like that from QRI either, although someone can correct me if I missed it. (Graziano does have sentences like that.)

Ditto with the "meta-problem of suffering", incidentally. (Is that even a term? You know what I mean.) It's not obvious, but when I wrote this post I was mainly trying to work towards a theory of the meta-problem of suffering, as a path to understand what suffering is and how to tell whether future AIs will be suffering. I think that particular post was wrong in some details, but hopefully you can see the kind of thing I'm talking about. Conveniently, there's a lot of overlap between solving the meta-problem of suffering and understanding brain motivational systems more generally, which I think may be directly relevant and important for AI Alignment.

Regarding QRI's take on the causal importance of consciousness: yes, it is one of the core problems being addressed.

Perhaps see: Breaking Down the Problem of Consciousness, and Raising the Table Stakes for Successful Theories of Consciousness.

Regarding the meta-problem, see: Qualia Formalism in the Water Supply: Reflections on The Science of Consciousness 2018

I did not know about the meta-problem of consciousness before. I will have to think about this, thank you!

I don't see anything like that from QRI either, although someone can correct me if I missed it.

In Principia Qualia (p. 65-66), Mike Johnson posits:

What is happening when we talk about our qualia? 

If ‘downward causation’ isn’t real, then how are our qualia causing us to act? I suggest that we should look for solutions which describe why we have the sensory illusion of qualia having causal power, without actually adding another causal entity to the universe.

I believe this is much more feasible than it seems if we carefully examine the exact sense ...

Steven Byrnes (3y): OK, if I understand correctly, the report suggests that qualia may diverge from qualia reports—like, some intervention could change the former without the latter. This just seems really weird to me. Like, how could we possibly know that?

Let's say I put on a helmet with a button, and when you press the button, my qualia radically change, but my qualia reports stay the same. Alice points to me and says "his qualia were synchronized with his qualia reports, but pressing the button messed that up". Then Bob points to me and says "his qualia were out-of-sync with his qualia reports, but when you pressed the button, you fixed it". How can we tell who's right? And meanwhile here I am, wearing this helmet, looking at both of them, and saying "Umm, hey Alice & Bob, I'm standing right here, and I'm telling you, I swear, I feel exactly the same. This helmet does nothing whatsoever to my qualia. Trust me! I promise!" And of course Alice & Bob give me a look like I'm a complete moron, and they yell at me in synchrony "...You mean, 'does nothing whatsoever to my qualia reports'!!"

How can we decide who's right? Me, Alice, or Bob? Isn't it fundamentally impossible? If every human's qualia reports are wildly out of sync with their qualia, and always have been for all of history, how could we tell? Sorry if I'm misunderstanding or if this is in the report somewhere.
Linch (3y): (I have not read the report in question.) There are some examples of situations/interventions where I'm reasonably confident that the intervention changes qualia reports more than it changes qualia.

The first that jumps to mind is meditation: in the relatively small number of studies I've seen, meditation dramatically changes how people think they perceive time (time feels slower, a minute feels longer, etc.), but without noticeable effects on things like reaction speed or cognitive processing of various tasks. This to me is moderate evidence that the subjective experience of the subjective experience of time has changed, but not (or at least not as much) the actual subjective experience of time. Anecdotally, I hear similar reports for recreational drug use (time feels slower but reaction speed doesn't go up; if anything it goes down).

This is relevant to altruists because (under many consequentialist ethical theories) extending the subjective experience of time for pleasurable experiences seems like a clear win, but the case for extending the subjective experience of the subjective experience of time is much weaker.
Steven Byrnes (3y): Interesting... I guess I would have assumed that, if someone says their subjective experience of time has changed, then their time-related qualia have changed, kinda by definition. If meanwhile their reaction time hasn't changed, well, that's interesting but I'm not sure I care... (I'm not really sure of the definitions here.)
Linch (3y): Let me put it a different way. Suppose we simulate Bob's experiences on a computer. From a utilitarian lens, if you can run Bob on a computational substrate that goes 100x faster, there's a strong theoretical case that FastBob is 100x as valuable per minute run (or 100x as disvaluable if Bob is suffering). But if you trick simulated Bob into thinking that he's 100x faster (or if you otherwise distort the output channel so that it lies to you about the speed), then it is much harder to argue that FakeFastBob is indeed 100x faster/more valuable.
Steven Byrnes (3y): Oh, I think I see. If someone declares that it feels like time is passing slower for them (now that they're enlightened or whatever), I would accept that as a sincere description of some aspect of their experience. And insofar as qualia exist, I would say that their qualia have changed somehow. But it wouldn't even occur to me to conclude that this person's time is now more valuable per second in a utilitarian calculus, in proportion to how much they say their time slowed down, or that the change in their qualia is exactly, literally time-stretching.

I treat descriptions of subjective experience as a kind of perception, in the same category as someone describing what they're seeing or hearing. If someone sincerely tells me they saw a UFO last night, well, that's their lived experience and I respect that, but no they didn't. By the same token, if someone says their experience of time has slowed down, I would accept that something in their consciously-accessible brain has changed, and that the way they perceive that change is as they describe, but it wouldn't even cross my mind that the actual change in their brain is similar to that description.

As for inter-person utilitarian calculus and utility monsters, beats me; everything about that is confusing to me and way above my pay grade :-P
Linch (3y): Right, I guess the higher-level thing I'm getting at is that while introspective access is arguably the best tool we have to access subjective experience in ourselves right now, and stated experiences are arguably the best tool for us to see it in others (well, at least humans), we shouldn't confuse stated experiences with subjective experience itself.

To go with the perception/UFO example: if someone (who believes themself to be truthful) reports seeing a UFO, and it later turns out that they "saw" a UFO because their friend pulled a prank on them, or because of an optical illusion, then I feel relatively comfortable saying that they actually had the subjective experience of seeing a UFO. So while external reality did not actually contain a UFO, this was an accurate qualia report. In contrast, if their memory later undergoes falsification, and they misremember seeing a bird (which at the time they believed was a bird) as seeing a UFO, then they only had the subjective experience of remembering seeing a UFO, not the actual subjective experience of seeing a UFO.

Some other examples:

1. If I were to undergo surgery, I would pay more money for a painkiller that numbs my present experience of pain than for a painkiller that removes my memory of the pain (and associated trauma etc.), though I would pay nonzero dollars for the latter. This is because my memory of pain is an experience of an experience, not identical with the original experience itself.
2. Many children with congenital anosmia (being born without a sense of smell) act as if they have a sense of smell until tested. While I think it's reasonable to say that they have some smell-adjacent qualia/subjective experiences, I'd be surprised if they hallucinated qualia identical to the experiences of people with a sense of smell, and it would be inaccurate to say that their subjective experience of smell is the same as that of people with the objective ability to smell.
Steven Byrnes (3y): Thanks! I think you're emphasizing that qualia reports do not always correspond exactly to qualia and can't always be taken at face value, and I'm emphasizing that it's incoherent to say that qualia exist but there's absolutely no causal connection whatsoever going from an experienced qualia to a sincere qualia report. Both of those can be true!

The first is like saying: if someone says "I see a rock", we shouldn't immediately conclude that there was a rock in this person's field of view; it's a hypothesis we should consider, but not proven. That's totally true.

The second is like disputing the claim: "If you describe the complete chain of events leading to someone reporting 'I see a rock', nowhere in that chain of events is there ever an actual rock (with photons bouncing off it), not for anyone, ever. Oh, and there are in fact rocks in the world, and when people talk about rocks they're describing them correctly; it's just that they came to have knowledge of rocks through some path that had nothing to do with the existence of actual rocks." That's what I would disagree with.

So if you have a complete and correct description of the chain of events that leads someone to say they have qualia, and nowhere in that description is anything that looks just like our intuitive notion of qualia, I think the correct conclusion is "there is nothing in the world that looks just like our intuitive notion of qualia", not "there's a thing in the world that's just like our intuitive notion of qualia, but it's causally disconnected from our talking about it".

(I do in fact think there's nothing in the world that looks just like our intuitive notion of qualia. I think this is an area where our perceptions are not neutrally and accurately conveying what's going on; more like our perception of an optical illusion than our perception of a rock.)
Linch (3y): Hi, sorry for the very delayed reply. One thing I didn't mention in the chain of comments above is that I think interventions that change qualia reports without much changing (morally important) qualia are more plausible than the reverse: interventions that change important qualia without changing qualia reports. And indeed, I gave examples of changing qualia reports without (much) changing qualia, whereas the linked report talks more about changing qualia without substantively changing qualia reports.

I can conceive of interventions that change qualia but not qualia reports (e.g. painkillers for extreme pain that humans naturally forget/round down), but they seem more like edge cases than the examples I gave.
Steven Byrnes (3y): I agree that there are both interventions that change qualia reports without much changing (morally important) qualia and interventions that change qualia without much changing qualia reports, and that we should keep both possibilities in mind when evaluating interventions.
Comments

It seems to me that consciousness research could be categorized as "fundamental" research: while it may have a less obvious or near-term altruistic impact, without a full understanding we may miss something essential about how we work. For example, studying consciousness (what it is, how it works, and who or what "has it" to what degree) could have strong implications for animal rights discussions.

More broadly, I tend to think fundamental research is significantly underfunded and underrepresented, perhaps because its direct applications seem fuzzier, but it is still very important for formalizing and hardening our understanding of how the world works, which in turn improves our decision-making. Cognitive science in general is promising to me too, since it can help us figure out why we feel and act the way we do, which can really improve our ability to overcome our potentially negative impulses, support our positive ones, and be more rational and clear-thinking.

I'd say the same thing about astrophysics or quantum mechanics: they seem less directly relevant and don't have an 80,000 Hours profile, but people definitely still need to work on them, since they are essential to our understanding of the universe and have direct applications in improving the world and avoiding existential risks. I'm not necessarily saying they need to be on 80,000 Hours' "most pressing" list, but I certainly wouldn't want to discourage people from working in these areas if they have the skills and interest. We could let "more capable successors" deal with these issues, but in my opinion we can't let work on fundamental research go to zero, or even close to it, while we wait for those successors to arrive.

Research into the human brain and mind does not seem neglected. I am skeptical of our ability to make much progress on the question of consciousness, and in particular I don't think we will ever be able to be confident about which animals and AIs are conscious. But to whatever extent we can make progress on these questions, it seems it will come from research areas that are not neglected. Of course, if you are passionate about the area, you might think that going into it and donating part of your salary is the best decision overall.
