First, basics: I'm a first-year Informatics student. At The University of Edinburgh, where I study, Informatics broadly encompasses Computer Science, AI, and Cognitive Science. I initially started this programme intending to go into AI safety research later, because of good personal fit and so on. I know it's a long time in the future and my plan will likely change, but it's good to have plans, right?
I subscribe to the belief that we should maximise the "positive conscious experience" of all beings. Additionally, over the past few months I've grown more and more intrigued by the riddle consciousness poses. My intention has subtly shifted from becoming an AI safety researcher to becoming a consciousness researcher by way of AI/Cognitive Science.
Here's my conundrum: Researching consciousness does make sense as a way to verify the very basis of my EA beliefs. However, it seems to have practically no direct altruistic impact. I also have only a very narrow view of its pressingness/tractability/replaceability, etc., as it is not widely discussed; for instance, it has no career profile on 80,000 Hours. All my information basically comes from the people at the Qualia Research Institute, who are really excited about the issue (which, admittedly, is quite infectious).
So what I'm saying is I need more views on this! What do you think? How important is solidifying the concept of consciousness for EA? If I don't do it, would someone else do it instead? What are your thoughts on a career in this field?
Thanks if anyone actually read this :)))) And even more thanks for any replies!
The "meta-problem of consciousness" is: "What is the exact chain of events in the brain that leads people to self-report that they're conscious?". The idea is that (1) this is not a philosophy question but a mundane neuroscience/CogSci question, yet (2) answering it would certainly be a big step towards understanding consciousness itself, and moreover (3) this kind of algorithm-level analysis seems to me to be essential for drawing conclusions about the consciousness of different algorithms, like those of animal brains and AIs.
(For example, a complete accounting of the chain of events that leads me to self-report "I am wearing a wristwatch" involves, among other things, a description of the fact that I am in fact wearing a wristwatch, and of what a wristwatch is. By the same token, a complete accounting of the chain of events that leads me to self-report "I am conscious" ought to involve the fact that I am conscious, and what consciousness is, if indeed consciousness is anything at all. Unless you believe in p-zombies, I guess, and likewise believe that your own personal experience of being conscious has no causal connection whatsoever to the words you say when you talk about your conscious experience — which seems rather ludicrous to me, although, to be fair, there are reasonable people who believe that.)
My impression is that the meta-problem of consciousness is rather neglected in neuroscience / CogSci, although I think Graziano is heading in the right direction. For example, Dehaene has a whole book about consciousness, and nowhere in that book will you see a sentence that ends "...and then the brain emits motor commands to speak the words 'I just don't get it, why does being human feel like anything at all?'." or anything remotely like that. I don't see anything like that from QRI either, although someone can correct me if I missed it. (Graziano does have sentences like that.)
Ditto with the "meta-problem of suffering", incidentally. (Is that even a term? You know what I mean.) It's not obvious, but when I wrote this post I was mainly trying to work towards a theory of the meta-problem of suffering, as a path to understanding what suffering is and how to tell whether future AIs will be suffering. I think that particular post was wrong in some details, but hopefully you can see the kind of thing I'm talking about. Conveniently, there's a lot of overlap between solving the meta-problem of suffering and understanding brain motivational systems more generally, which I think may be directly relevant and important for AI alignment.
I agree that there are both interventions that change qualia reports without much changing (morally important) qualia and interventions that change qualia without much changing qualia reports, and that we should keep both these possibilities in mind when evaluating interventions.