Paul J. Watson

Adjunct Associate Professor of Biology @ University of New Mexico
10 karma · Joined Mar 2023 · Working (15+ years) · drpjwatson.org

Bio

I study the evolutionary ecology of social and sexual behavior across species. My current specialty courses are "The Evolution of Religiosity and Human Coalitional Psychology" and "Field Studies in the Evolution of Animal Behavior." In all my research and teaching, my primary focus is on objectively understanding the intrapsychic designs of animals and humans, using theoretical and empirical insights from modern Darwinism critically cross-referenced with first-person results of introspective techniques drawn from select spiritual traditions. I believe such cross-referencing is a new, exciting, yet massively under-appreciated opportunity for humanity to gain richer, more objective self-knowledge, potentiating individual human development and, possibly, positively changing humanity's currently dismal trajectory.

Comments (7)

That sounds like a very frustrating management style.

Thank you for the example and the opportunity to respond to your criticism. As you said in your DM, I think we are basically on the same page. 

I think there is kind of a positive feedback loop in operation intimately relating power and choice. The opportunity to gain power is to some degree constrained by the number of choices available in the present. In that sense the number of choices available to an individual precedes changes in that person's power. 

No matter how much power a person has at any given time, they are always, often nonconsciously, scanning their socioeconomic horizons for opportunities to gain additional power, or for moves available to them that can help them avoid losing power. If a person's scanning functions do not identify viable choices of action plans to gain power, then they are stuck. But the big brain we have evolved is constantly at work developing social navigation strategies that can potentially get us unstuck. We have been selected to notice new fitness-enhancing opportunities as they arise within the dynamic social milieu we exist in, as well as to have the cognitive capacity to generate new fitness-enhancing action-plan choices. 🤠

Today I received a reply to a remark I left on YouTube about power being our fundamental value. The commenter said that I was myopically confusing power with competence and grossly oversimplifying the adaptive design of the human psyche as produced by natural selection. I wrote this:

Thanks for your kind, civil, constructive response. But I disagree with your critique. Competence is a broad component of an organism's ability to solve diverse reproductive problems. Relatively speaking, the less competent individual lacks access to resources and security; they have low power as I am using the term. I agree that in the right socioecological circumstances, ones we would hope for in our human societies, prestige-based leadership can thrive in comparison to brutish leadership based on mere physical prowess or a monopoly on violence. But competence and prestige, like all the other "good" attributes an individual may consciously and sincerely strive for, lead to power, and that is why our body/minds have evolved to strive for them. Again, power to maximize lifetime inclusive fitness, perhaps through strategies of cooperation (i.e., coopetition: cooperating to compete better, perhaps more competently). So I stand by my point: we are, ultimately, too enamored with power to be competent enough to pursue general self-improving AI. One indication is that we choose to go for such general, X-risk AI instead of limiting ourselves to developing fantastic domain-specific expert systems that enable us to solve particular important problems; AlphaFold is one example. Such systems would not be God-like; they would be nonconscious and unable to make plans to change the world in ways far out of alignment with diverse human secondary values. But no, we are so attracted to God-like power that we rush forward to create it and unleash it, consciously or unconsciously hoping, yearning, that the power will transfer to us or our in-group, at least long enough for us to reap the high inclusive-fitness benefits of successfully executing a high-risk / high-reward strategy. Anyway, in brief, competence is a cross-culturally laudable way to gain and maintain power, as long as the more powerful people in your society don't decide to block you for fear of losing a bit of their own.

I would say that one of the many benefits of power is having plenty of choices, especially when those choices involve different ways to get fitness-enhancing needs filled. Choices are great because of the shifting cost-benefit ratios of the various action plans an individual might come up with to meet such a need. With choices, we can select the most effective and efficient way to fill the need given our current circumstances. One of the basic fears concerning self-improving, general-purpose artificial intelligence is that it may reduce our choices, rapidly, potentially to zero.

Thinking about it, one of the signs that a superintelligence might be becoming misaligned with us is that it gives us fewer good choices. Thank you for the comment.

I agree that morals are not genetically inherited, and I did not mean to imply that. Morals are learned because, given the vicissitudes of human life and the dynamism of what it takes to successfully cooperate in large groups, there never would have been stable selection to favor even general morals like, say, the Golden Rule. In human life, everyone must learn their in-group's morals. They also have to learn the styles and parameters of the moral deliberation processes that are acceptable within their group.

I do think that the cognitive capacity for moral deliberation, which must involve the collaboration of many parts of the brain, has a heritable genetic foundation. How complex that foundation is remains an empirical question, but it is probably quite complex. In the coming years, however, I think it is reasonable to expect that domain-specific, expert-system AI will be able to help us identify the key genes and their variants (alleles), as well as gene-gene interactions, that influence the development of a species-typical moral deliberation style, including the non-random ways our moral deliberation system responds to various socioecological circumstances.

In the same way, such domain-specific AI will allow us to understand the complex genetic basis of many diseases and other traits we would more predictably want to modify because they bear on fitness enhancement: health and longevity, beauty, intelligence, and so on.

Moreover, such expert systems could help us devise moral enhancement strategies based on genetic engineering (and, if we are lucky, on less invasive epigenetic engineering) that are most likely to be effective and efficient, with minimal onerous side effects. We are not starting from ground zero! There is good evidence that certain psychedelic substances can produce chemically induced moral enhancement. One good starting point would be to look at the mechanisms behind that.

Who knows? In the end it may require less genetic modification than we initially would have guessed to adequately increase humans' capacity for, say, compassion. All in the name not only of reducing suffering, but of making us a sustainable species, which I would say, at present, we are not.

I also agree with you that this action plan is going to turn into a nearly intractable moral can of worms, with much of the argumentation, perhaps somewhat cryptically, being about power. But consider that in this phase of our technological adolescence, if the human species fails to take the ongoing evolution of our mind-brain out of the hands of natural selection, we will remain sophisticated great apes, deeply and mostly unconsciously infected with a yearning for power and a lust for engaging in high-risk / high-reward power projects largely at others' expense.

It is so difficult to agree on such things that I suspect the routine extraterrestrial failure to develop plans to take the evolution of mind out of the hands of natural selection, during the fairly narrow time window in which that possibility exists for a species like ours, is the reason we encounter radio silence as we seek signs of intelligent extraterrestrial life. As naturally selected power addicts, they couldn't figure out a workable plan either.

Finally, although people with a classical social science background typically disagree, we cannot rely on culture to produce sufficient moral enhancement to save our species, even though cultural evolution can occur very rapidly in the absence of any genetic change. That observation, while true, does not mean that cultural evolution occurs in a way that is free from our genetic heritage. Culture always strongly echoes that heritage, including the genetic basis of our intrapsychic design.

I appreciate the comment! - Paul

There is a lot of discussion in connection with the alignment problem about what human values are and which of them a human-competitive, and then superior, AGI should align with. As an evolutionary behavioral biologist, I would like to offer that we are like all other animals. That is, we have one fundamental value, of which all our complex and diverse secondary values are socioecologically optimized personal or subcultural means of achievement: power, the fundamental naturally selected drive to gain, maintain, and increase access to resources and security.

I ask: why don't we choose to create many domain-specific expert systems to solve our diverse existing scientific and X-risk problems instead of going for AGI? I suggest it is because natural selection has built us to readily pursue high-risk / high-reward strategies to gain power. Throughout our evolutionary history, even if such strategies only occasionally produced short periods in which some individuals achieved great power, those periods offered enough opportunities for stellar reproductive success that, on average, the strategy was favored, and it is helplessly adopted today by, say, AGI creators and their enablers.
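To make the expected-value logic behind that claim concrete, here is a minimal toy sketch of my own (the strategies, probabilities, and payoff values are made-up illustrative parameters, not anything measured or taken from the comment above). It simply shows how a strategy that usually fails but occasionally pays off enormously can still have a higher average payoff than a safe one, which is the sense in which selection can favor it.

```python
# Toy comparison of a "safe" strategy vs. a high-risk / high-reward strategy.
# All numbers are hypothetical; the point is only the averaging logic.
import random

def safe_strategy():
    # Modest, reliable payoff (hypothetical units of lifetime reproductive success).
    return 2.0

def high_risk_strategy(p_success=0.05, jackpot=50.0, failure=0.5):
    # Rarely succeeds, but success brings an outsized payoff.
    return jackpot if random.random() < p_success else failure

def mean_payoff(strategy, trials=100_000):
    # Estimate the long-run average payoff by simulation.
    return sum(strategy() for _ in range(trials)) / trials

if __name__ == "__main__":
    random.seed(0)
    print("safe strategy mean payoff:     ", mean_payoff(safe_strategy))
    print("high-risk strategy mean payoff:", mean_payoff(high_risk_strategy))
    # With these made-up parameters, 0.05 * 50 + 0.95 * 0.5 = 2.975 > 2.0,
    # so the risky strategy wins on average even though most individuals
    # adopting it end up worse off than the safe players.
```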

We choose AGI, and we are fixated on its capacities (power), because we are semi- or non-consciously in pursuit of our own power. Name your "alternative" set of human values. Without fail they all translate into the pursuit of power. This includes the AGI advocate excuse that we, the good actors, must win the inevitable AGI arms race against the bad actors out there in some well-appointed axis-of-evil cave. That too is all about maintaining and gaining power.

We should cancel AGI programs for a long time, perhaps forever, and devote our efforts to developing domain-specific, non-X-risk expert systems that will solve difficult problems instead of creating new ones through what seems to me guaranteed non-alignment.

People worry about nonconsciously building biases into AGI. An explicit, behind-the-scenes (facade), or nonconscious lust for power is the most dangerous and likely bias or value to be programmed in.

It would be fun to have the various powerful LLMs react to the FLI Open Letter. Just in case they are at all sentient, maybe each should be asked to share its opinion. An astute prompter may already be able to elicit responses revealing that our own desire for superpowers (there is never enough power, because reproductive fitness is a relative measure of success; there is never enough fitness) has "naturally" begun to infect them and become their nascent fundamental value.

By the way, if it has not done so already, AGI will figure out very soon that every normal human being's fundamental value is power.

I am deeply impressed by the amount of ground this essay covers so thoughtfully. I have a few remarks. They pertain to Miller's focal topic as well as to avoiding a massive popular backlash against both general AI and limited, expert-system AI, a backlash that would make current resistance to science and human "expertise" in general look pretty innocuous. I close with a remark on the alignment of AI with animal interests.

I offer everyone an a priori apology if this comment seems pathologically wordy.

I think that AI alignment with the interests of the body is quite essential to achieving alignment with human minds; probably necessary but not sufficient. Cross-culturally, regardless of superficial and (anyway) dynamic differences in values among cultures, humans generally have a hard time being happy and content if they are worried about their bodily well-being. Sicknesses of all kinds lead to intrusive thoughts and emotions amounting to existential dread for most people, even the likes of Warren Zevon.

The point is that I know that I, and probably most people across cultures, would be delighted to have a human doctor or nurse walk into an exam or hospital room with a pleasant-looking robot that we (the patient) truly perceived, correctly so, to possess general diagnostic super-intelligence based on deep knowledge of the healthy functioning of every organ and physiological system in the human body. Personally, I've never had the experience that any doctor of mine, including renowned specialists, had much of a clue about any aspect of my biology. I'd also feel better right now if I knew there was an expert system that was going to be in charge of my palliative care, which I'll probably need sooner rather than later, a system that would customize my care to minimize my physical pain and allow me to die consciously, without irresistible distraction from physical suffering. Get to work on that, please.

Such a diagnostic AI system, like a deeply respected human shamanic healer treating a devout in-group religious follower, would even be capable of generating a supernormal placebo effect (current Western medicine and its associated health-insurance systems most often produce strong nocebo effects, ugh), which it seems clear would be based on nonconscious mental processes in the patient. (I think one of the important albeit secondary adaptive functions of religions is to produce supernormal placebo effects; I have a hypothesis about why placebo effects exist and why religious healers in spiritual alignment with their patients are especially good at evoking them, a topic for a future essay.) The existence of placebo effects, and of their opposite, is good evidence that AI alignment with the body is somewhat equivalent to alignment with the mind.

"Truly perceived" is important. That is one reason I recommend that a relaxed and competent human health professional accompany the visiting AI system. Even though the AI itself may speak to the patient, it is important to have this super-expert system, perhaps limited in its ability to engage emotionally with the patient (like Hugh Laurie's character, "House"), be gazed upon with admiration and a bit of awe by the human partner during any interaction with the patient. The human then at least competently pretends to understand the diagnosis and promises the patient to promptly implement the recommended treatment. They can also help answer questions the patient may have throughout the encounter. An appropriate religious professional could also be added to the team, as needed, as long as they too show deep respect for the AI system.

I think a big part of my point is that when an AI consequentially aligns with our bodies, it thereby engenders a powerful "pre-reflective" intimacy with the person. This will help preempt reflective objections to the existence and activities of any AI system. And this will work cross-culturally, with practically everyone, to ameliorate the alignment problem, at least as humans perceive it. It will promote AI adoption.

Stepping back a moment: as humans evolved the cognitive capacities to cooperate in large groups while preserving significant degrees of individual sovereignty (e.g., unlike social insects), and then promptly began to co-evolve capacities for the quintessentially human cross-cultural way of life I'll call "complex contractual reciprocity" (CCR), a term better unpacked elsewhere, we also had to co-evolve a strong hunger for externally sourced, maximally authoritative moral systems, preferably ones perceived as "sacred." (Enter a long history of natural selection for multiple cognitive traits favoring religiosity.) If it's not from a sacred external source, but amounts to some person's or subculture's opinion, then argument, instability, and the risk of chaos are going to be on everyone's minds. Durable, high degrees of moral alignment within groups (whose boundaries can, under competent leadership, adaptively expand and contract) facilitate maximally productive CCR, and that is almost synonymous with high, on average, individual lifetime inclusive fitness within groups.

AI expert systems, especially when accompanied by caring compassionate human partners, can be made to look like highly authoritative, externally-sourced fountains of sacred knowledge related to fundamental aspects of our well-being. Operationally here, sacred means minimally questionable. As humans we instinctively need the culturally-supplied contractual boilerplate, our group's moral system (all about alignment), and other forms of knowledge intimately linked to our well-being to be minimally questionable. If a person feels like an AI system is BOTH morally aligned with them and their in-group, and can take care of their health practically like a god, then from the human standpoint, alignment doubts will be greatly ameliorated.

Finally, a side note, which I'll keep brief. Having studied animals in nature and in the lab for decades, I'm convinced that they suffer. This includes invertebrates. However, I don't think that even dogs reflect on their suffering. (Using meditative techniques, humans can get access to what it means to have a pre-reflective yet very real experience.) Anyway, for AI to ever become aligned with animals, I think it's going to require that the AI align with their whole bodies, not just their nervous systems or particular ganglia therein. Again, because with animals the AI faces the challenge of ameliorating pre-reflective suffering. (I'd say most human suffering, because of the functional design of human consciousness, is on the reflective level.) So, by designing AI systems that can achieve alignment with humans in mind and body, I think we may simultaneously generate AI that is much more capable of tethering to the welfare of diverse animals.

Best wishes to all, PJW

I agree with Miller's response to mic (6 months ago). Is it even possible for us to stop avoiding the "hard problem" of human nature?

Also, any given agent's or interest group's priorities and agendas will always be dynamic, compounding the problem of maintaining multiple mutually satisfactory alignments. Natural selection has designed us to exhibit complex, contingent responsiveness to both subtle and dramatic environmental contingencies.

In addition, the humans providing the feedback, even if THEY can find sustainable alignment amongst themselves (remember they are all reproductive competitors, and deeply programmed by natural selection to act accordingly, intentionally or not), will change over time, possibly a very short time, by being exposed to such power. They will be corrupted by that power, and in due time corrupted absolutely.

Finally, an important Darwinian sub-theory has to do with the problem of nonconscious self-deception. I have to wonder whether even our discussing the possibility of properly managing substantive AI systems is just a way of obscuring from ourselves the utter impossibility, given human nature, of managing them properly, morally, and wisely. Are all these conversations a way (a kind of competition) to convince ourselves and others that we or our (perceived) allies deserve the power to program and maintain these systems?