Topic Contributions


Release of Existential Risk Research database

Thanks! BTW, I found that some of my x-risk-related articles are included while others are not. I don't think the excluded articles are more off-topic, so your search algorithm may be failing to find them.

Examples of relevant published articles of mine that were not included:

The Global Catastrophic Risks Connected with Possibility of Finding Alien AI During SETI

Islands as refuges for surviving global catastrophes

Surviving global risks through the preservation of humanity's data on the Moon

Aquatic refuges for surviving a global catastrophe

[Paper] Surviving global risks through the preservation of humanity's data on the Moon

If they are advanced enough to reconstruct us, then most of the bad enslavement scenarios are likely uninteresting to them. For example, we are now trying to reconstruct mammoths in order to improve the climate in Siberia, not for hunting or meat.

Thoughts on short timelines

Yes, that is clear. My question was: "Do we have any specific difference in mind between AI strategies for the 1-per-cent-in-10-years case and the 10-per-cent-in-10-years case?" If we are going to ignore the risk in both cases, it makes no difference whether it is 1 per cent or 10 per cent.

I don't know of any publicly available short-term strategy for the 10-year case, whatever the probability.

Thoughts on short timelines

What is the actionable difference between a "1-2 per cent" and a "10 per cent" prediction? If we knew that an asteroid was heading toward Earth and would hit it with one of these probabilities, how would our attempts to divert it depend on the probability of impact?

Should we ignore a 1 per cent probability, but go all-in on preventing a 10 per cent probability?

If there is no difference in actions, the difference in probability estimates is rather meaningless.

The Map of Impact Risks and Asteroid Defense

Thanks - I just saw this comment now. I didn't really miss the idea, but decided not to include it here.

Curing past sufferings and preventing s-risks via indexical uncertainty

If the connection between observer-moments is not regarded as important (as I understand it, this is your point of view: there is no identity that chains observer-moments into timelines), then the creation of happy observer-moments is also a way to dilute past sufferings.

In other words, the creation of infinitely many happy observer-moments in the future makes the probability that a person finds himself in a suffering observer-moment infinitely small.
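A minimal sketch of the dilution arithmetic, assuming a naive self-sampling model in which the probability of finding oneself in a suffering observer-moment is simply its share of all observer-moments (the function name and setup are illustrative, not from the original text):

```python
from fractions import Fraction

def p_suffering(suffering_moments: int, happy_moments: int) -> Fraction:
    """Probability of finding oneself in a suffering observer-moment,
    under a naive self-sampling assumption: suffering share of all moments."""
    return Fraction(suffering_moments, suffering_moments + happy_moments)

# One suffering observer-moment diluted by ever more happy copies:
for happy in (1, 10**3, 10**6, 10**9):
    print(happy, float(p_suffering(1, happy)))
```

As the number of happy observer-moments grows without bound, the ratio tends to zero, which is the sense in which the dilution argument works under this assumption.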

I addressed the problem of simulating evil AIs in the "Message to any future AI" text, which we already discussed. The main types of Evil AI may be predicted by simulating the worlds where they are likely to appear. We could gain an advantage over Evil AIs by declaring now, before their creation, that we will simulate and punish them in those timelines where we are able to create a Benevolent AI. But we don't need to discuss all the technical details of how we will do it, as a Benevolent AI will find better ways. (The idea comes from Rolf Nelson.)

Curing past sufferings and preventing s-risks via indexical uncertainty

See the patches in the comments below: there are ways to do the trick without increasing the total number of suffering observer-moments.

Curing past sufferings and preventing s-risks via indexical uncertainty

It will also increase the number of happy observer-moments globally, because of the happiness of being saved from agony, plus the lowering of the number of Evil AIs, as they will know that they will lose and be punished.

Curing past sufferings and preventing s-risks via indexical uncertainty

I have just found how the whole trick will increase total welfare in the multiverse (copied from the comment below):

No copies of suffering observer-moments will be created - only the next moment after the suffering will be simulated and diluted, and this will obviously be the happiest moment for someone in agony: to feel that the pain has disappeared and to know that he is saved from hell.

It will be like an angel who comes to a cancer patient and tells him: your disease has just been completely cured. Anyone who has ever received a negative result on a cancer test knows this feeling of relief.

Also, the fact that a Benevolent AI is capable of saving observers from an Evil AI (and also of modeling Evil AIs in simulations and punishing them if they dare to torture anyone) will, I hope, significantly reduce the number of Evil AIs.

Thus, the combination of the pleasure of being saved from an Evil AI and the lowering of the world-share of Evil AIs - since they cannot win and know it - will increase the total positive utility in the universe.
