
I've recently been reading The World Behind the World: Consciousness, Free Will, and the Limits of Science by neuroscientist Erik Hoel (it's amazing, by the way; highly recommend) and wanted to share this snippet from the end of the sixth chapter, under the section "A Theory of Consciousness Cannot Come Soon Enough":

We cannot wait too long...For we live in strange times: there are now creatures made only of language, and they make for potential substitutions. Contemporary AIs, like LaMDA, which is Google’s internal chatbot, have achieved fluency in their native element...The technology is moving so fast that questions of AI consciousness are now commonplace...AI [has] triggered a wave of discourse centered around the question: Are AIs conscious? Do they have an intrinsic perspective? It seems that, so far, most people have answered no. For how to judge with certainty whether an AI’s claim to consciousness is correct or incorrect, when it’ll say whatever we want it to? So many supposed experts immediately jumped into the fray to opine, but the problem is that we lack a scientific theory of consciousness that can differentiate between a being with actual experiences and a fake. If there were a real scientific consensus, then experts could refer back to it—but there’s not. All of which highlights how we need a good scientific theory of consciousness right now—look at what sort of moral debates we simply cannot resolve for certain without one. People are left only with their intuitions.

I've been intellectually interested in all things philosophy of mind, psychology, consciousness, and AI for a while now, and have seriously considered pursuing graduate research in those areas. The issue is that I am also a naive undergraduate student who feels compelled to do a lot of good with my life, and I have historically been unsure of the effectiveness of academic research of this sort.

This passage by Erik Hoel updated me: it seems likely that forging a theory of consciousness would in fact help make sense of all things AI (and humans, of course), and could thus contribute to AI safety work. Without such a theory, we cannot reliably determine whether AI claims to consciousness are valid.

Of course, we are far, far from building a comprehensive theory of consciousness, although I think chipping away at one, no matter how slowly, is still possible and worthwhile. But again, as always, resources are limited, and I've also been concerned about AGI timelines recently.

What I'm looking for by mentioning all of this: advice and opinions, really.

  • Do you know of anyone who is working on consciousness/AI/cog sci from a non-technical, more philosophical side? Are they in EA? Do they have any thoughts on this (like the effectiveness of their work)?
    • If so, I'd love to be put in touch with them! My email is juliana.eberschlag@gmail.com. 
  • AGI timelines: Is doing research that is highly philosophical, rigorous, and uncertain* worth it?
    • *Uncertain in the sense that I'm unsure how much current consciousness work is actually moving the needle toward a comprehensive theory of consciousness; i.e., I don't know if I'd actually make a difference, but any marginal difference may still be vastly helpful in expectation.
  • Any other thoughts? 

Thanks :)

2 Answers

Hi! Have you checked out the Qualia Research Institute?

I have this fresh in my mind as we've had some internal discussion on the topic at Convergence. My personal take is that "consciousness" is a bit of a trap subject: it bakes in a set of distinct complex questions, people talk about it differently, it's hard to peer inside the brain, and there's a slight mystification because consciousness feels a bit magical from the inside. Sub-topics include, but are not limited to: (1) higher-order thought, (2) subjective experience, (3) sensory integration, (4) self-awareness, and (5) moral patienthood.

My recommendation is to try to talk in terms of these sub-topics as much as possible, rather than the fuzzy, differently understood, and massive concept of "consciousness".

Is contributing to this work useful/effective? Well, I think it will be more useful if, when one works in this domain (or these domains), one has specific goals (more in the direction of "understand self-awareness" or "understand moral patienthood" than "understand consciousness") and pursues them for specific purposes.

My personal take is that the current "direct AI risk reduction work" that has the highest value is AI strategy and AI governance. And hence, I would reckon that "consciousness"-work that has clear bearing on AI strategy and AI governance can be impactful.
