
The style of the article is a bit provocative, but in essence it is about the meta-ethical conclusions that can be derived from Universal Darwinism taken to extremes.

LINK

Abstract

This article sums up meta-ethical conclusions that can be derived from Universal Darwinism taken to extremes. In particular, it 1) applies Universal Darwinism to the evaluation of terminal values; 2) separates the objective meaning of life from the subjective meaning of life using the notion of Quasi-immortality, which implies that both moral naturalism and moral non-cognitivism are right, but in different areas; 3) justifies free will as a consequence of Universal Darwinism; 4) arrives at a Buddhism-like illusion of the “Self” as a consequence of Quasi-immortality; and 5) as a bonus, gives Universal Darwinism a hypothetical and vivid cosmogonic myth drawn from Darwinian natural selection.


I tried to find some objective ground for ethical considerations given the metaphysical premises of Universal Darwinism. The relevant part can be summarized by the following quote from the #evaluating-terminal-values section of the article:

How can one evaluate the terminal values of humans (defined as on LessWrong)? Quote:

A terminal value (also known as an intrinsic value) is an ultimate goal, an end-in-itself. ... In an artificial general intelligence with a utility or reward function, the terminal value is the maximization of that function.

Values are subjective, but the question asks for some objective perspective. The question is of interest because “Humans' terminal values are often mutually contradictory, inconsistent, and changeable”.

The ubiquity of natural selection (NS) can impose some constraints, albeit weak ones, since all known systems with sentient agents abide by NS. But weak constraints are still better than no constraints at all.

Natural selection splits terminal goals into those that fail to reproduce or maintain themselves and those that survive (together with their bearers, of course). Sometimes we can even predict whether a terminal goal will go extinct, or at least rank its probability of survival (we have already set aside instrumental goals, which “die” when they lose their purpose).

So that's it. That's the only way I'm aware of to judge terminal values objectively. The judgment part comes from a feeling that I don't want to be invested in terminal goals that will most likely go extinct. To be appealing, such goals should at least be “mutated” in a way that balances minimizing how much they change against maximizing their probability of survival.

End quote.
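
To make the quoted trade-off a bit more concrete, here is a minimal toy sketch in Python. It is my own illustration, not anything from the article: each terminal goal is modeled as a replicator with an assumed per-generation survival probability, and a “mutated” variant is scored by weighing how little it changed against how likely it is to survive. The goal names, the probabilities, and the linear weighting are all illustrative assumptions.

```python
# Toy sketch (illustrative assumptions only): terminal goals as replicators.
from dataclasses import dataclass
import random

@dataclass
class Goal:
    name: str
    survival_prob: float  # assumed chance the goal (and its bearers) persists each generation

def survives(goal: Goal, generations: int, rng: random.Random) -> bool:
    """Simulate whether a goal persists through a fixed number of generations."""
    return all(rng.random() < goal.survival_prob for _ in range(generations))

def appeal(similarity: float, survival_prob: float, weight: float = 0.5) -> float:
    """Score a 'mutated' goal: balance minimal change (similarity to the original)
    against its probability of survival."""
    return weight * similarity + (1 - weight) * survival_prob

rng = random.Random(0)
goals = [Goal("maximize paperclips", 0.90), Goal("preserve knowledge", 0.99)]

for g in goals:
    runs = [survives(g, generations=50, rng=rng) for _ in range(10_000)]
    print(f"{g.name}: long-run survival frequency ~ {sum(runs) / len(runs):.3f}")

# Two hypothetical mutations of the same original goal, scored for "appeal":
print(appeal(similarity=0.9, survival_prob=0.50))  # small change, mediocre survival odds
print(appeal(similarity=0.3, survival_prob=0.95))  # big change, high survival odds
```

The linear `appeal` score is not meant to be the right formula; the point is only that any concrete way of “balancing minimization of change against maximization of survival probability” has to make such a trade-off explicit.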

Hence, what fails the “extinction criterion”:

Goals and values that cannot be reformulated as the survival of some quasi-immortal entity are meaningless and will be eliminated by natural selection over time.

But there are still infinitely many goals and values that pass the “extinction criterion” yet contradict each other. So the best bet is moral naturalism for the “extinction criterion” plus moral non-cognitivism for whatever is left.
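
As a minimal sketch of this two-level picture (my framing, not the article's): the extinction criterion acts as a naturalistic filter, and whatever the filter cannot rank is left to non-cognitivist preference. The `quasi_immortal_carrier` field and the example goals below are hypothetical.

```python
# Toy sketch (hypothetical examples): the extinction criterion as a filter.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Goal:
    name: str
    quasi_immortal_carrier: Optional[str]  # entity whose survival the goal reduces to, if any

def passes_extinction_criterion(goal: Goal) -> bool:
    """The moral-naturalism part: goals with no quasi-immortal carrier are filtered out."""
    return goal.quasi_immortal_carrier is not None

goals = [
    Goal("maximize this week's pleasure", None),           # fails: nothing quasi-immortal survives
    Goal("spread my genes", "gene lineage"),               # passes
    Goal("preserve human culture", "cultural tradition"),  # passes, may conflict with the one above
]

survivors = [g.name for g in goals if passes_extinction_criterion(g)]
print(survivors)
# The filter says nothing about which surviving goal to prefer when they conflict;
# that residual choice is where moral non-cognitivism takes over.
```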
