
ben.smith

827 karma · Downtown, Eugene, OR, USA

Comments (107)

Ha, I see. Your advice might be right, but I don't think "consciousness is quantum". I wonder if you could say what you mean by that?

Of course I've heard that before. In the past, when I've heard people say it, it's been from advocates of free-will theories of consciousness trying to propose a physical basis for consciousness that preserves the indeterminacy of decision-making. Some objections I have to this view:

  1. Most importantly, as I pointed out here: consciousness is roughly orthogonal to intelligence, so your view shouldn't give you reassurance about AGI. We could have a formal definition of intelligence, and causal instantiations of it, without any qualia, any what-it's-like-to-be subjective consciousness, existing in the system. There is also conscious experience with minimal intelligence, like experiences of raw pleasure, pain, or observing the blueness of the sky. As I explain in the linked post, consciousness is also orthogonal to agency and goal-directed behavior.
  2. There's a great deal of research about consciousness. I described one account in my post, and Nick Humphrey does go out on a limb more than most researchers do, but my sense is that most neuroscientists of consciousness endorse some account roughly equivalent to Nick's. While some (not all, or even a majority) would probably concede that the hard problem remains, based on what we do know about the structure of the physical substrates underlying consciousness, it's hard to imagine what role "quantum" effects would play.
  3. It fails to add any sense of meaningful free will, because a brain that makes decisions based on random quantum fluctuations doesn't in any meaningful way have more agency than a brain that makes decisions based on pre-determined physical causal chains. While a [hypothetical] quantum-based brain does avoid being pre-determined by physical causal chains, it is now just determined by random quantum fluctuations instead.
  4. Lastly, I have to confess a bit of prejudice against this view. In the past it has been proposed so naively that it seems like people are just mashing together two phenomena that no one fully understands and proposing they're related because...? The only thing the two have in common, as far as I know, is that we don't understand them. That's not much of a reason to believe in a hypothesis that links them.
  5. Assuming your view were correct: if someone built a quantum computer, would you then be more worried about AGI? That doesn't seem so far off.

Elliot has a phenomenally magnetic personality and is consistently positive and uplifting; he's a great person to be around. His emotional stamina gives him the ability to lift up the people around him, and I think he is a big asset to this community.

TLDR: I'm looking for researcher roles in AI Alignment, ideally translating technical findings into actionable policy research


Skills & background: I have been a local EA community builder since 2019. I have a PhD in social psychology and wrote my dissertation on social/motivational neuroscience. I also have a BS in computer science and spent two years in industry as a data scientist building predictive models. In short: I'm an experienced data scientist, social scientist, and human behavioral scientist.

Location/remote: Currently located on the West Coast of the USA. Willing to relocate to the Bay Area for sufficiently high remuneration, or to Southern California or Seattle for just about any suitable role. Would relocate almost anywhere, including the US East Coast, Australasia, the UK, or China, for a highly impactful role.

Availability & type of work: I finish teaching at the University of Oregon around April and, if I haven't found something by then, will be available again in June. I'm looking for full-time work starting then, or part-time work in impactful roles for an immediate start.

Resume/CV/LinkedIn: 

Brief resume

Full academic CV

LinkedIn
Email/contact: benjsmith@gmail.com

Other notes: I don't have a strong preference for cause areas; I would be highly attracted to roles reducing AI existential risk, improving animal welfare or global health, or improving our understanding of the long-term future. I suspect my comparative advantage is in research roles (broadly defined) and in data science work; writing technical summaries for AI governance, or evals work, might be where that advantage is strongest.

But I would guess that pleasure and unpleasantness aren't always caused by the conscious sensations; rather, both can have the same unconscious perceptions as a common cause.

This sounds right. My claim is that there are all sorts of unconscious perceptions and valenced processing going on in the brain, but all of that is only experienced consciously once there's a certain kind of recurrent cortical processing of the signal, which can loosely be described as "sensation". I mean that very loosely; it can even include memories of physical events or semantic thought (which you might understand as a sort of recall of auditory processing). Without that recurrent cortical processing modeling the reward and learning process, all that midbrain dopaminergic activity probably does not get consciously perceived. Perhaps it does, indirectly, when the dopaminergic activity (or lack thereof) influences the sorts of sensations you have.

But I'm getting really speculative here. I'm an empiricist and my main contention is that there's a live issue with unknowns and researchers should figure out what sort of empirical tests might resolve some of these questions, and then collect data to test all this out.


I would say thinking of something funny is often pleasurable. Similarly, thinking of something sad can be unpleasant. And this thinking can just be inner speech (rather than visual imagination). ... Also, people can just be in good or bad moods, which could be pleasant and unpleasant, respectively, but not really consistently simultaneous with any particular sensations.


I think most of those things actually can be reduced to sensations; moods can't be, but then, are moods consciously experienced, or do they only predispose us to interpret conscious experiences more positively or negatively?

(Edit: another set of sensations you might overlook when you think about the conscious experience of mood is your bodily sensations: heart rate, skin conductivity, etc.)

But this also seems like the thing that's more morally important to look into directly. Maybe frogs' vision is blindsight, their touch and hearing are unconscious, etc., so they aren't motivated to engage in sensory play, but they might still benefit from conscious unpleasantness and aversion for more sophisticated strategies to avoid them. And they might still benefit from conscious pleasure for more sophisticated strategies to pursue pleasure.

They "might" do, sure, but what's your expectation that they in fact will experience conscious pleasantness devoid of sensations? High enough not to write it off entirely, to make it worthwhile to experiment on, and to be cautious about how we treat those organisms in the meantime--sure. I think we can agree on that.

But perhaps we've reached a sort of crux here: is it possible, or probable, that organisms could experience conscious pleasure or pain without conscious sensation? It seems like a worthwhile question. After reading Humphrey I feel like it's certainly possible, but I'd give it maybe around 0.35 probability. As I said in OP, I would value more research in this area to try to give us more certainty. 

If your probability that conscious pleasure and pain can exist without conscious sensation is, say, over 0.8 or so, I'd be curious about what leads you to believe that with confidence.

To give a concrete example, my infant daughter can spend hours bashing her five-key toy keyboard. It makes a sound every time. She knows she isn't getting any food, sleep, or any other primary reinforcer for doing this. But she gets the sensations of seeing the keys light up and a cheerful voice sounding from the keyboard's speaker each time she hits it. I suppose the primary reinforcer just is the cheery voice and the keys lighting up (she seems to be drawn to light--light bulbs, screens, etc.).

During this activity, she's playing, but also learning about cause and effect--about the reliability of the keys reacting to her touch, about what kind of touch causes the reaction, and how she can fine-tune and hone her touch to get the desired effect. I think we can agree that many of these things are transferable skills that will help her in all sorts of things in life over the next few years and beyond?

I'm sort of conflating two things that Humphrey describes separately: sensory play and sensation-seeking. In this example it's hard to separate the two. But Humphrey ties them both to consciousness, and perhaps there's still something we can learn from an activity that combines the two.

In this case, the benefits of play are clear, and I guess the further premise is that consciousness adds additional motivation for sensory play because, e.g., it makes things like seeing lights and hearing cheery voices much more vivid and hence reinforcing, and allows those things to be incorporated with other systems that enable action planning about how to get the reinforcers again, which makes play more useful.

I agree this argument is pretty weak, because we can all agree that even the most basic lifeforms can do things like approach or avoid light. Humphrey's argument is something like this: the particular neurophysiology that generates consciousness also provides the motivation and ability for play. I think I've said about as much as I can to reproduce the argument, and you'd have to go directly to Humphrey's own writing for a better understanding of it!

Yes, I see that's a reasonable thing not to be convinced about, and I'm not sure I can do justice to the full argument here. I don't have the book with me, so anything else I tell you is pulled from memory and strongly prone to error. Elsewhere in this comments section I said:

When you have sensations, play can teach you a lot about your own sensory processes, and you can subsequently use what you've learned to leverage your visual sensations to accomplish objectives. It seems odd that an organism that can learn (as almost all can) would evolve visual sensations but not a propensity to play in a way that helps it learn about those sensations.

And

Humphrey theorises that the evolutionary impulse for conscious sensations includes (1) the development of a sense of self, (2) which in turn allows for a sense of other, and theory of mind. He thinks that mere unconscious perception can't be reasoned about or used to model others because, being unconscious, it is inaccessible to the global workspace for that kind of use. In contrast, conscious sensations are accessible in the global workspace and can be used to imagine the past, the future, or what others are experiencing. The cognitive and sensory empathy this allows can enable an organism to behave socially, to engage in deceit or control, to care more effectively for another, to anticipate what a predator can and can't see, etc.

I believe the idea is something like this: sentience enables a lot more opportunity to learn about the world, and learning opportunities can be obtained through play. Not taking those opportunities if you're able is sort of like leaving free adaptive money on the table.

To me "conscious pleasure" without conscious sensation almost sounds like "the sound of one hand clapping". Can you have pure joy unconnected to a particular sensation? Maybe, but I'm sceptical. First, the closest I can imagine is calm joyful moments during meditation, or drug-induced euphoria, but in both cases I think it's at least plausible there are associated sensations. Second, to me, even the purest moments of simple joy seem to be sensations in themselves, and I don't know if there's any conscious experience without sensations.

Humphrey theorises that the evolutionary impulse for conscious sensations includes (1) the development of a sense of self, (2) which in turn allows for a sense of other, and theory of mind. He thinks that mere unconscious perception can't be reasoned about or used to model others because, being unconscious, it is inaccessible to the global workspace for that kind of use. In contrast, conscious sensations are accessible in the global workspace and can be used to imagine the past, the future, or what others are experiencing. The cognitive and sensory empathy this allows can enable an organism to behave socially, to engage in deceit or control, to care more effectively for another, to anticipate what a predator can and can't see, etc.

I would add that conscious sensation allows for more abstract processing of sensations, which enables tool use and other complex planning, like long-term planning to get the future self more pleasurable sensations. Humphrey doesn't talk about that much, perhaps because only a small subset of conscious species have been observed doing those things, so mere consciousness may not be sufficient to engage in them (some would argue you need language to do good long-term planning and complex abstraction).

Humphrey believes that mammals in general do engage in play, which he thinks all (but not only) conscious animals do, and that they also engage in sensation-seeking (e.g. sliding down slopes or moving fast through the air for no reason), which he thinks only (but not all) conscious animals do. He'd say the same about birds. And he treats the fact that the distribution of those behaviors across species lines up nicely with the species that have the neural structures he thinks generate consciousness as additional confirmation of his theory.

Animals do engage in play with unpleasant experiences; e.g., playfighting can include moderately unpleasant sensations. I suppose the benefit of those experiences being conscious might be to enable more sophisticated strategies for avoiding them in future. It isn't that Humphrey thinks play is necessary for consciousness to emerge; it's that he thinks all conscious animals are motivated to engage in play.

I feel this last answer maybe hasn't answered all your questions, but I was a bit confused by your last paragraph, which might have arisen from an understandable misunderstanding of the claim about consciousness and play.

Humphrey's argument that fish aren't conscious doesn't rest only on their not having the requisite brain structures, because, as you say, consciousness could have developed in structures of their own that are simply distinct from ours. But then, Humphrey would ask, if they have visual sensations, why are they uninterested in play? When you have sensations, play can teach you a lot about your own sensory processes, and you can subsequently use what you've learned to leverage your visual sensations to accomplish objectives. It seems odd that an organism that can learn (as almost all can) would evolve visual sensations but not a propensity to play in a way that helps it learn about those sensations.

Perhaps fish just don't benefit from learning more about their visual sensations. The sensations are adaptive, but learning about them confers no additional adaptive advantage. That seems a stretch to me, because it's hard for me to imagine sensations being adaptive without learning and experimenting with them conferring additional advantage.

You could also respond by citing examples where fish do play and are motivated to sensation-seek, as you already have, and I think if Humphrey believed your examples he would find them persuasive evidence about those organisms' consciousness.
