Kyrtin

It is baked into a few different papers I’ve written since 2020. I think part of the disconnect here is that the fundamental purpose of the system of ethics I’m talking about isn’t to alter future behavior. The purpose isn’t to optimize human behavior, but rather to measure its ethical value.

A system that can’t measure the ethical value of actions, positive or negative, can’t react appropriately to them. A system that can measure them and chooses not to react appropriately isn’t ethical, and if it reacts only to some fraction according to bias, then it is unstable and unethical. 

“Power,” as most people use the term, is trivial, and collective intelligence always wins against a standalone system, no matter how “powerful” that system may be: perspective “binds and blinds,” while collective intelligence reduces cognitive bias. But again, the purpose isn’t behavioral modification.

People will make their own choices, regardless of what attempts at behavioral modification are deployed. The mandate of ethics, as I use the term, isn’t to modify behavior, but rather to measure ethical value and react according to that value.

The present is the sum of past actions. The passage of time alone doesn’t change ethical value, positive or negative. The sum of those actions may be rewarded or punished over time, gradually moving back toward a neutral point in the process.  

The purpose of ethics also isn’t behavioral modification, though that may be a byproduct. Behavioral modification is game-theoretic, and still rests on an individual choosing to become more ethical, or not. 

Ethics as I define it also isn’t a welfare-maximization game. Positive value is increased by improving quality of life, multiplied by scale and duration, but negative value is punished with equal weight. Any system with an imbalance between the treatment of positive and negative value would be discriminatory, unstable, and unethical.

I’m also not claiming that ethics has any relationship to legal systems today. Today’s legal systems are just enforced moral systems; if they were based on ethics, they’d never consider beliefs or intentions to have any relevance. There is a small mountain of cognitive bias research documenting the problems with legal systems today, but I don’t address that in this paper.

There are a few things to unpack and clarify here:

1) I’m using the definition of Ethics where it is defined as the hypothetical point where bias has been removed from moral systems, or alternatively, the point before bias was applied to create them. Ethics is not a zero-sum game, not game-theoretic, and not a synonym for morals. Subjective variables, including beliefs, intentions, and other cognitive-bias factors, may obscure ethics under normal conditions, but they never factor into it.

An ethical system in the literal sense is like democracy in the literal sense, in that it has never actually existed before. However, not having existed before is no barrier to it being created. The barriers to the adoption of such a system may be subject to the typical game-theoretic influences of society, but those influences act on society, not on ethics.

2) In the case of “AGI”, I don’t use that term to indicate the hypothetical paper-clip maximizers. Any useful definition of AGI is mutually exclusive with a powerful optimizer, as the capacities humans demonstrate require a robust, working motivational system within a full cognitive architecture. The only such motivational system humanity has any example of is emotions, as highlighted by the research of Antonio Damasio, Lisa Feldman Barrett, Daniel Kahneman, and many others. Creating a hypothetical logic- and utility-based motivational system would be many orders of magnitude more difficult than producing a working system based on human-like emotional motivation.

Such a system was demonstrated from 2019 to 2022, operating in slow motion and without scalability, by design, for due diligence and research purposes. It demonstrated all of the necessary capacities of actual AGI, including the ability to understand and adhere to an arbitrary moral system. That capacity in particular is required to solve the hardest version of the Alignment Problem, the solution to which produces ethics.

There is every reason for any actual AGI system to apply ethics to whatever limits of feasibility exist at any given moment in time. Moral systems around the world agree quite consistently on principles of reward and punishment, even if they leave much to be desired when attempting to apply such merit in practice. Virtually every afterlife concept is built on deferring such reward and punishment to some more capable entity.

I’m also not describing a hypothetical scenario; this is recent history and current events. The research has already been completed for this much, and has been for some time. If you’ve had a diet of people conflating agent-based powerful optimizers with AGI, I recommend looking up Daniel Kahneman’s term “theory-induced blindness,” the recognition of which led to the creation of Prospect Theory and the debunking of a 200-year-old utility theory.

At present the predictable result is that base rates for investors will play out more or less normally, so several thousand wealthy investors will spend the next few billion years paying for their crimes in full, provided indefinite life extension proves possible. In any scenario where they went unpunished, humanity would face extinction, while also deserving it.