Ryan Dobson

Great post, Dr. Miller! I'd be curious what you think of my thoughts here!


And we view many blue-collar jobs as historically transient, soon to be automated by AI and robotics – freeing human bodies from the drudgery of actually working as bodies. (In the future, whoever used to work with their body will presumably just hang out, supported by Universal Basic Income, enjoying virtual-reality leisure time in avatar bodies, or indulging in a few physical arts and crafts, using their soft, uncalloused fingers)

This makes me recall Brave New World by Aldous Huxley, specifically "soma," the drug everyone takes to feel pleasure so they never have to feel pain. Something seems off about imagining people living their lives in virtual reality in an everlasting "pleasurable" state. I think I find this unsettling because it does not fit my conception of a good life (which is the point of Brave New World). There is something powerful and rewarding in facing reality and going through a bit of pain. Although I have not explored the AI alignment problem in depth, there seems to be some neglect of the value of pain (I'm using "pain" in a relatively broad, non-serious manner here), for instance the pain of working an 8-hour shift moving boxes in a warehouse. This is an interesting idea to consider; in The Sweet Spot, Paul Bloom reviews how there is frequently pleasure in pain. That might not be very problematic when the pain is inherently pleasurable, but the situation gets sticky when pain is only interpreted as worthwhile after the fact. Yet another layer of complexity.


EA consequentialism tends to assume that ethically relevant values (e.g. for AI alignment) are coterminous with sentience. This sentientism gets tricky enough when we consider whether non-cortical parts of our nervous system should be considered sentient, or treated as if they embody ethically relevant values. It gets even trickier when we ask whether body systems outside the nervous system, which may not be sentient in most traditional views, carry values worth considering.

I do think that the AI alignment problem becomes infinitely complex if we consider panpsychism, or dualism for that matter. What if everything has subjective experience? What if subjective experience isn't dependent on the body? Or, as you suggest, "Are non-sentient, corporeal values possible?"

Although I (mostly) accept many of the assumptions EAs make (as I see them: reductive materialism, the existence of objective truth), I agree that a wide array of beliefs and values gets neglected.

Another problem I see is that if we take reductive materialism to be correct, consciousness could be attained by AI. If consciousness can be attained by AI, we have created a system with values of its own. What do we owe to these new beings we have created? It is possible that the future is populated primarily by AI because their subjective experience is qualitatively better than humans'.


The thermostat does not need to be fully sentient (capable of experiencing pleasure or pain) to have goals. 

I partially disagree with this. As Searle points out, these machines do not have intentionality and lack a true understanding of what "they" are doing. I think I can grant that the thermostat has goals, but the thermostat does not truly understand those goals, because it lacks intentionality. 

That said, I don't think this point is central to your argument. I think we can get to the inherent value of the body through its connection with the brain, and therefore with the mind (subjective experience). Thus, the body's values matter only because the body mediates the person's subjective experience.


If this argument is correct, it means there may not be any top-down, generic, all-purpose way to achieve AI alignment until we have a much better understanding of the human body’s complex adaptations.  If Artificial General Intelligence is likely to be developed within a few decades, but if it will take more than a few decades to have a very fine-grained understanding of body values, and if body values are crucial to align with, then we will not achieve AGI alignment. We would need, at minimum, a period of Long Reflection focused on developing better evolutionary medicine models of body values, before proceeding with AGI development. 

I think this is largely pointing out that EAs have failed to consider that some people might not be comfortable with the idea of uploading their brain (mind) into a virtual-reality device. I personally find this idea absurd; I don't have any urge to be immortal. Ah, I run into so many problems trying to think this through. What do we do if people consider their own demise valuable?


If an AI system is aligned with the human brain, but it ignores the microbiome hosted within the human body, then it won’t be aligned with human interests (or the microbiome’s interests).

Perhaps I do disagree with your conclusion here... 

Following your logic above, I find it likely that AGI would be developed before we have a full understanding of the human body. But I don't agree that a full understanding of the human body is necessary for AGI to be generally aligned with the body's values. It might be possible for AGI to be aligned with the human body in a more abstract way: "The body is a vital part of subjective experience; don't destroy it." Then, theoretically, the AGI could learn everything possible about the body in order to truly align itself with that interest. (Maybe this idea is impractical from the standpoint of actually building the AGI?)


So, which should our AI systems align with – our brains’ revealed preferences for donuts, or our bodies’ revealed preferences for leafy greens?

Could an AGI transcend this choice? Leafy greens that taste like donuts? Or donuts that have the nutritional value of leafy greens? 

Regardless, I do get your point about conflicting values between the body and the brain. I had mostly been thinking of the body's and the brain's values as highly congruent with each other. Not sure what to do about the frequent incongruities.


She may have no idea how to verbally express her body’s biomechanical capabilities and vulnerabilities to the robot sparring partner. But it better get aligned with her body somehow – just as her human BJJ sparring partners do. And it better not take her stated preferences for maximum-intensity training too seriously.

"But it better get aligned with her body somehow" is a key point for me here. If the AGI has the general notion to not hurt human bodies, it might be possible that the AGI would just use caution in this situation. Or even refuse to play because it understands the risk. This is to say, there might be ways for AGI to be aligned with the body values without it having a complete understanding of the body. Although, a complete understanding would be best. 

On the one hand, I agree with the sentiment that we need to consider the body more! On the other hand, I'm not convinced that we need to completely understand the body to align AGI with it. Although it seems logically possible that understanding the body isn't necessary for AGI alignment, I'm not sure whether that holds as a practical matter.

Please provide some pushback! I don't feel strongly about any of my arguments here; I know there is a lot of background that I'm missing.