
Richard Baxter

4 karma · Joined Aug 2022

Posts: 1


Comments: 2

AI alignment is a myth; it assumes that humanity is a single homogeneous organism and that AI will be one too. Humans have competing desires and interests, and so will the AGIs they create, none of which will have independent autonomous motivations unless programmed to develop them.

Even at the level of the individual human, alignment is a relative concept. Whether a person engages in utilitarian or deontological reasoning depends on their conception of self and other (whether they would expect to be treated likewise).

Regarding LLM-specific risk: LLMs are not currently an intelligence threat. Like any technology, however, they can be deployed by malicious actors to advance arbitrary goals. One reason OpenAI are publicly trialling their models early is to help everyone, researchers included, learn to navigate the models' use cases and develop safeguards against exploitation.

Addressing external claims: limiting research is a bad strategy given population dynamics; the only secure option for liberal nations lies in the direction of knowledge.

There is no theoretical or historical evidence that Homo sapiens natal investment is independent of environment or population.