
Duckruck

-6 karma · Joined

Comments (8)

I'm not talking about whether the net contribution of humans is positive or negative, but about the expectation that the sign of the net contribution produced by a sentient ASI should be similar to that of humans. Coupled with the premise that an ASI alone is more likely to carry out full-scale cosmic colonization faster and better than humans, this means that either a sentient ASI should destroy humans to avoid astronomical waste, or humans should be destroyed before sentient ASI is created or cosmic colonization begins, to prevent further destruction of the Earth and the rest of the universe by humans. On this view, humans being (properly) destroyed is not a bad thing; it is more likely to be better than humans continuing to exist.

Alternatively, an ASI could be created with the purpose of maximizing perpetually happy sentient low-level AI/artificial life rather than manufacturing paperclips, in which case humans would either have to accept being part of that system or be destroyed, since our existence is not conducive to maximizing average or total hedonic value. This is probably the best way to maximize the hedonic welfare of sentient life in the universe: utility-monster maximizers rather than paperclip maximizers.

I am not misunderstanding what you are saying; I am pointing out that these marvelous trains of thought may lead to even more counterintuitive conclusions.

Might it be good for humans to go extinct before ASI is created, because otherwise humans would cause astronomical amounts of suffering? Or might it be good for an ASI to exterminate humans because an ASI is better at avoiding astronomical waste?

Why is it reasonable to assume that humans would treat less sentient AIs, or less sentient organic lifeforms, more kindly than a sentient ASI that has exterminated humans would? Yes, such an ASI exterminates humans by definition, but humans have clearly driven a very large number of other beings extinct, including some human subspecies. From this perspective, whether humans go extinct, and whether ASIs exterminate humans, may be irrelevant: both kinds of (good or bad) astronomical impacts seem about equally likely either way, or the continued existence of humans may actually make things worse.

You are correct. Because the safety net is funded by altruists, egoists get to pay lower taxes while receiving greater benefits. As a result, egoists grow stronger and altruists grow weaker. That is, those who give charitably for their own benefit, or who don't give at all, become richer and stronger, while those who give to benefit society become poorer and weaker.

Even if you think the effects of your charity will compound exponentially, this not only still holds but holds even more strongly, because those effects ultimately manifest as a transfer from the altruistic to the selfish. If that transfer grows exponentially, the selfish profit all the more and the altruistic are weakened all the more.

This, in short, is part of the case for the broadly socialist argument that improving society requires collective action, not individual charity. Even social democrats advocate improving the social safety net through taxes and state benefits, not through tax-deductible, subsidized charitable giving.
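To make the compounding claim concrete, here is a minimal toy model. The growth rate, donation rate, and even split of safety-net benefits are illustrative assumptions of mine, not figures from the discussion above.

```python
# Toy model of the transfer argument, under stylized assumptions: two equally
# sized groups start with the same wealth and earn the same return, but only
# altruists donate a fixed share of their wealth each period to a safety net
# whose benefits are split evenly between both groups.
def simulate(periods=50, growth=0.05, donation_rate=0.10):
    altruist, egoist = 1.0, 1.0
    for _ in range(periods):
        altruist *= 1 + growth
        egoist *= 1 + growth
        gift = altruist * donation_rate   # altruists alone fund the safety net
        altruist -= gift
        altruist += gift / 2              # both groups draw on it equally
        egoist += gift / 2
    return altruist, egoist

if __name__ == "__main__":
    a, e = simulate()
    print(f"altruist wealth: {a:.2f}, egoist wealth: {e:.2f}, "
          f"altruist share of total: {a / (a + e):.1%}")
```

Even with the benefits shared equally rather than captured by egoists, the altruists' share of total wealth falls steadily; raising the growth rate does not change that, it only compounds the transfer faster.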

Maybe we can have a "theory of failure".

That said, since an ASI is basically bound to override humans, the only thing left to work out is how to adapt to that.

Its markets are too illiquid and too expensive to trade in to produce any good predictions. Its mechanics seem more like those of a social media site built for amusement than of any serious market.

I think these prediction markets need more short-term questions to stay relevant, rather than the current situation of many markets with too little liquidity and high transaction costs.

It is very difficult to create an efficient market in any sense; many futures exchanges carry contracts with zero volume. The way this site actually operates is even worse than the neoliberal approach: at least neoliberals know they should use the government to enforce and protect the market, rather than assuming the free market will work on its own. Maybe the operators need to study some finance papers to learn how to set up effective exchanges.
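One relevant idea from that literature is Hanson's logarithmic market scoring rule (LMSR), an automated market maker that always quotes a price, so thin markets remain tradable without waiting for a counterparty. Below is a minimal sketch, not any particular site's implementation; the liquidity parameter `b` and the function names are mine, purely for illustration.

```python
import math

# Minimal sketch of Hanson's logarithmic market scoring rule (LMSR) for a
# binary market. q_yes and q_no are the net shares the market maker has sold;
# b sets liquidity (a larger b means a deeper market but a larger worst-case
# subsidy of b * ln(2) paid by the market's sponsor).
def cost(q_yes, q_no, b=100.0):
    return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

def price_yes(q_yes, q_no, b=100.0):
    """Instantaneous price of a YES share, which doubles as the implied probability."""
    e_yes, e_no = math.exp(q_yes / b), math.exp(q_no / b)
    return e_yes / (e_yes + e_no)

def buy_yes(q_yes, q_no, shares, b=100.0):
    """Amount a trader pays the market maker for `shares` YES shares."""
    return cost(q_yes + shares, q_no, b) - cost(q_yes, q_no, b)

if __name__ == "__main__":
    q_yes = q_no = 0.0
    print("price before:", round(price_yes(q_yes, q_no), 3))   # 0.5
    paid = buy_yes(q_yes, q_no, 50)
    q_yes += 50
    print("paid:", round(paid, 2), "price after:", round(price_yes(q_yes, q_no), 3))
```

The catch is that liquidity is subsidized: for a binary market the sponsor's worst-case loss is b·ln 2, which is precisely the price of keeping an otherwise illiquid market tradable.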

According to the "settlement" version of "Dissolving the Fermi Paradox", we can be roughly certain that the expected number of other civilizations in the universe is less than one.

Thus the extermination of other alien civilizations seems to be an equally worthwhile price to pay.
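For intuition, here is a toy Monte Carlo in the spirit of the Sandberg, Drexler, and Ord method: replace the point estimates in the Drake equation with wide distributions and look at the resulting distribution of N. The parameter ranges below are placeholders of my own, not the paper's fitted distributions.

```python
import math
import random

# Toy Drake-equation Monte Carlo: sample each factor from a wide log-uniform
# range instead of using a point estimate, then inspect the distribution of N.
def log_uniform(lo, hi):
    return 10 ** random.uniform(math.log10(lo), math.log10(hi))

def sample_N():
    R_star = log_uniform(1, 100)      # star formation rate per year
    f_p    = log_uniform(0.1, 1)      # fraction of stars with planets
    n_e    = log_uniform(0.1, 1)      # habitable planets per planet-bearing star
    f_l    = log_uniform(1e-30, 1)    # fraction on which life arises (hugely uncertain)
    f_i    = log_uniform(1e-3, 1)     # fraction developing intelligence
    f_c    = log_uniform(1e-2, 1)     # fraction becoming detectable civilizations
    L      = log_uniform(1e2, 1e8)    # detectable lifetime in years
    return R_star * f_p * n_e * f_l * f_i * f_c * L

samples = [sample_N() for _ in range(100_000)]
print("P(N < 1) =", sum(n < 1 for n in samples) / len(samples))
print("median N =", sorted(samples)[len(samples) // 2])
```

With ranges this wide (especially for the probability of life arising), most of the probability mass for N ends up far below one, which is the kind of result the comment above is pointing to.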

Considering the way many people calculate animal welfare, I would have thought that many people here are not anthropocentric.

Lots of paperclips are one possibility, but perhaps ASIs could be designed to be far more creative and capable of richer sensory experience than humans; does that mean humans shouldn't exist?