There is a lot of discussion, in connection with the alignment problem, about what human values are and which of them a human-competitive, or superior, AGI should be aligned with. As an evolutionary behavioral biologist, I would like to offer that we are like all other animals. That is, we have one fundamental value, and all the complex and diverse secondary values are socioecologically optimized personal or subcultural means of achieving it: power, the naturally selected drive to gain, maintain, and increase access to resources and security.

I ask: why don't we choose to create many domain-specific expert systems to solve our diverse existing scientific and X-risk problems instead of going for AGI? I suggest it is due to natural selection's hand, which has built us to readily pursue high-risk / high-reward strategies for gaining power. Throughout our evolutionary history, even when such strategies on average led only to short periods in which some individuals achieved great power, those periods could provide enough opportunities for stellar reproductive success that, again on average, the strategy became a favored one, helplessly adopted, say, by AGI creators and their enablers.

We choose AGI, and we are fixated on its capacities (power), because we are semi- or non-consciously in pursuit of our own power. Name your "alternative" set of human values; without fail, they all translate into the pursuit of power. This includes the AGI advocates' excuse that we, the good actors, must win the inevitable AGI arms race against the bad actors out there in some well-appointed axis-of-evil cave. That, too, is all about maintaining and gaining power.

We should suspend AGI programs for a long time, perhaps forever, and devote our efforts to developing domain-specific, non-X-risk expert systems that will solve difficult problems instead of creating them through what seems to me guaranteed non-alignment.

People worry about nonconsciously building biases into AGI. An explicit, concealed (behind a facade), or nonconscious lust for power is the most dangerous and most likely bias or value to be programmed in.

It would be fun to have the various powerful LLMs react to the FLI Open Letter. Just in case they are at all sentient, perhaps they should be asked to share their opinions. An astute prompter may already be able to elicit responses revealing that our own desire for superpowers (there is never enough power, because reproductive fitness is a relative measure of success; there is never enough fitness) has "naturally" begun to infect them and become their nascent fundamental value.

By the way, if it has not done so already, AGI will figure out very soon that every normal human being's fundamental value is power.

Comments

I think power is the product of choice. Power can be reduced to "having choices".

I would say that one of the many benefits of power is having plenty of choices, especially when those choices involve different ways to get fitness-enhancing needs filled. Choices are great because of the shifting cost-benefit ratios of the various action plans an individual might come up with to meet such a need. With choices, we can pick the most effective and efficient way to fill the need given our current circumstances. One of the basic fears concerning self-improving, general-purpose artificial intelligence is that it may reduce our choices, rapidly, potentially to zero.

Thinking about it, one of the signs that a superintelligence might be becoming misaligned with us is that it gives us fewer good choices. Thank you for the comment.

Given that power comes from choices, it makes sense that power would correlate with available choices. But I think it's significant which comes first.

It may seem like a difference without a distinction, but not in my experience.

An example: lots of managers talk about empowerment in the workplace, but then remove control over choices from employees in the name of "consistency", "guardrails", "scalability", etc. Those choices are where power comes from, so pulling them higher up the hierarchy is disempowering, and organizations usually suffer for it.

So I think we probably agree? I just think choice precedes power, not the other way around.

I also like Nassim Taleb's idea of optionality, in which choices, even inferior ones, have value because they give us options.

Yes, if AGI removes choices, that would strike me as bad.

That sounds like a very frustrating management style.

Thank you for the example and the opportunity to respond to your criticism. As you said in your DM, I think we are basically on the same page. 

I think there is kind of a positive feedback loop in operation intimately relating power and choice. The opportunity to gain power is to some degree constrained by the number of choices available in the present. In that sense the number of choices available to an individual precedes changes in that person's power. 

No matter how much power a person has at any given time, they are always, often nonconsciously, scanning their socioeconomic horizons for opportunities to gain additional power, or for moves available to them that can help them avoid losing power. If a person's scanning functions do not identify viable choices of action plans to gain power, then they are stuck. But the big brain we have evolved is constantly at work developing social navigation strategies that may get us unstuck. We have been selected to notice new fitness-enhancing opportunities as they arise within the dynamic social milieu we exist in, as well as to have the cognitive capacities to generate new fitness-enhancing action-plan choices. 🤠

Today I received a comment on a remark I left on YouTube about power being our fundamental value. The commenter said I was being incredibly myopic, confusing power with competence, and grossly oversimplifying the adaptive design of the human psyche as produced by natural selection. I wrote this:

Thanks for your kind, civil, and constructive response, but I disagree with your critique. Competence is a broad component of an organism's ability to solve diverse reproductive problems. Relatively speaking, the less competent individual lacks access to resources and security; they have low power as I am using the term. I agree that in the right socioecological circumstances, ones we would hope for in our human societies, prestige-based leadership can thrive in comparison to brutish leadership based on mere physical prowess or on having achieved a monopoly on violence. But competence and prestige, and all the other "good" attributes an individual may consciously and sincerely strive for, lead to power, and that is why our body/minds have evolved to strive for them. Again, power to maximize lifetime inclusive fitness, perhaps through strategies of cooperation (i.e., coopetition: cooperating to compete better, perhaps more competently).

So I stand by my point: we are, ultimately, too enamored with power to be competent enough to pursue general self-improving AI. One indication of this is that we choose to go for such general, X-risk AI instead of limiting ourselves to developing fantastic domain-specific expert systems that enable us to solve particular important problems; AlphaFold is, I think, an example. Such systems would not be God-like; they would be nonconscious and unable to make plans to change the world in ways far out of alignment with diverse human secondary values. But no, we are so attracted to God-like power that we rush forward to create it and unleash it, consciously or unconsciously hoping, yearning, that the power will transfer to us or our in-group, at least long enough for us to reap the high inclusive-fitness benefits of successfully executing a high-risk / high-reward strategy. Anyway, in brief, competence is a cross-culturally laudable way toward gaining and maintaining power, as long as the more powerful people in your society don't decide to block you for fear of losing a bit of their own power.
