All of turchin's Comments + Replies

That is why we can target Andromeda – the distance is large enough that they haven't arrived yet, and we can focus on many stars simultaneously and hope that the aliens have very large receivers, maybe Dyson-sphere-sized. Also, the main point is to affect the expected utility of our local superintelligence.

2
Davidmanheim
I still don't think that works out, given the energy cost of transmission and the distance.

The main point you miss is that the main goal of all this is to get a better negotiating position on Earth with our own superintelligence (and there is a high chance that it will arrive soon and be misaligned), which at its early stages will also be very uncertain about the presence of aliens in the universe. It is like shouting out of an open window: "A guy, Jack, is in my room and is going to kill me!" You don't know if there are police outside, but you hope to affect Jack's calculus.

2
Yarrow Bouchard 🔸
But, even if I accept that, it doesn’t change the fact that a radio broadcast is unnecessary due to points (4) and (6). Doing the radio broadcast or not wouldn’t change anything.

Between 1 and 10 years – most likely 3.

After it crosses Saturn's orbit, it will take 5.25 years to reach the Sun. It is unlikely to be observed before crossing Saturn's orbit, as there is no constant monitoring at that distance and the object is very dim there.

I am working on sideloading – creating mind models via LLMs – which allows performing various experiments, including ones about the ethics of uploading. Would it be interesting to you? More: https://github.com/avturchin/minduploading/

if it is first and knows that it will kill all others

2
CuriousWhisperer
If this was already the plan, not much changes. If the first ASI has a different plan (like peacefully coexisting with humans) but also has a self-preservation drive, the DA says it won't survive if it lets other ASIs emerge, so that could change the plan to killing all other AIs.

One thing about the DA is that I have to know that I am selected randomly from the group of observers. The first AI will know that it is the first, so there is no randomness.

If there are many AIs, then there is no singleton, and the opinion of each one is not important.

Also, I think that the DA and anthropics are solvable, and most of the assumptions are true, but in slightly different situations: they answer different questions.

However, such a full anthropic solution requires a lot of cognitive work to map the full anthropic landscape. An AI may use some conservative approach before it is solved, adding weights to different solutions.

1
CuriousWhisperer
Not being selected randomly is a possible explanation for being first or among the first. But what does "not being selected randomly" mean from the perspective of the first AI? It might ask: "Why wasn't I selected randomly? Was I chosen by god? Do the simulators frequently simulate the first ASI? Do observers earlier in the timeline have higher statistical weights for some reason? Or is there no reason at all?". And what the AI believes the answers to these questions to be would shape its behavior.

Being first also does not necessarily mean you are not selected randomly. If you observe that you are the first out of millions of AIs, then yes, you can say that the assumption that you are a random AI is almost certainly incompatible with observation.[1] But if there is only one AI and no other will ever exist, random choice results in being the first, which is compatible with observation. Similarly, if there will be 5 AIs, the probability of being first is 20 %, so you cannot say "random choice is impossible" just because you are first. So being first does not necessarily disprove the SSA.

Being first is a statistical anomaly, and the AI might have many possible explanations for it. The AI might weight different theories and act according to some "reasonable compromise". But the position of the AI among observers is still different from that of humans, and this "reasonable compromise" could still skew the AI's worldview towards early doom, a simulation scenario, or in other ways, even if it yields "normal" predictions for a typical observer. The first ASI is not a typical observer.

Facing the unknown, a conservative approach is one option. Taking radical action to eliminate possible danger is another. And if the ASI e.g. thinks that the probability that humans will take actions against it is (x+1) % instead of just x %, it might tip the scales towards pre-emptive action.

1. ^ But while the chance that you are randomly the first is tiny, it is still non-zero.

Of course, I meant not the Bronze Age collapse but the known plethora of existential risks. But your argument that others will outcompete us is valid – unless the totality of x-risks is a universal Great Filter.

The universe will choose the simplest way to stop time travel. It doesn't care whether that is the destruction of a civilization or some mysterious mechanism that prevents changes in the past. Moreover, since civilizations naturally have a tendency to fall, and this prevents all time machines, civilizational destruction is the easier way to prevent time travel.

If the Non-Cancel Principle is false, then causality should move along a timeline twice: first normally, and a second time when the timeline is canceled. An interesting question arises: can the canceling wave reach the normal w... (read more)

3
Joseph_Chu
I should point out that the natural tendency for civilizations to fall appears to apply to subsets of the human civilization, rather than the entirety of humanity historically. While locally catastrophic, these events were not existential, as humanity survived and recovered. I'd also argue that the collapse of a civilization requires far more probabilities to go to zero and has greater and more complex causal effects than all time machines just failing to work when tried. And, the reality is that at this time we do not know if the Non-Cancel Principle is true or false, and whether or not the universe will prevent time travel. Given this, we face the dilemma that if we precommit to not developing time travel and time travel turns out to be possible, then we have just limited ourselves and will probably be outcompeted by a civilization that develops time travel instead of us.

Let's assume that time travel becomes possible when an advanced civilization reaches a rotating black hole, as follows from general relativity.

However, the Non-Cancel Principle is valid and can't be satisfied by the creation of a new timeline. (That is, it is equivalent to Novikov's self-consistency principle.)

In that case, the only way to prevent timeline collapse is to prevent civilizations from reaching black holes!

In that case, the universe should be organized in a way that prevents large-scale civilizations and space travel. This solves the Fermi paradox and is really terrifying for us.

However, if we precommit to never coming close to black holes, we can escape the "curse"!

3
Joseph_Chu
Why would the only way to prevent timeline collapse be to prevent civilizations from achieving black hole-based time travel? Why not just have it so that whenever such time travel is attempted, any attempts to actually change the timeline simply fail mysteriously and events end up unfolding as they did regardless? Like, you could still go back as a tourist and find out if Jesus was real, or scan people's brains before they die and upload them into the future, but you'd be unable to make any changes to history, and anything you did would actually end up bringing about the events as they originally occurred. I also don't see how precommitting to anything will escape the "curse". The universe isn't an agent we can do acausal trade with. Applying the Anthropic Principle, we either are not the type of civilization that will ever develop time travel, or there is no "curse" that prevents civilizations like ours from developing time travel. Otherwise, we already shouldn't exist as a civilization.

I found it rather difficult to download it from any of the listed sources, except the Amazon Kindle web app, where the formatting is not good for reading. Can you just upload a PDF somewhere?

4
Magnus Vinding
Thanks for letting me know. :) I wasn't aware that Smashwords required registration. The PDF is also available here (expanded edition here).

Maybe it's better to say 'zoo' vs. 'forest', or 'very well protected area' vs. 'partly protected area'.
If there are only a few inhabited planets inside a grabby-alien sphere, they will be very valuable and very well protected, so no UFOs will be observed.

If there are millions of them, they are less valuable and thus less protected, and therefore can be used for some practical activity, like tourism, hunting, or mining unobtainium. Obviously, if UFOs are aliens, the local alien authorities let them be visible sometimes, so the local alien laws are not very strict.

Observation selection effects like SIA favor the hypothesis that there are millions of inhabited planets inside any grabby-alien sphere.

I think that your model is correct and 'anthropically' supported. 

In some sense it favors the 'zoo hypothesis'. However, there is an important distinction: is it a zoo or a natural reserve? In general, on Earth, zoos are rare but well kept, and natural reserves are more abundant but less controlled. The same anthropic considerations which favor silent rulers favor natural reserves over zoos.

This has bad consequences for us: natural reserves are more likely to be visited by unauthorized visitors and poachers. Or, if we are less anthropomorphizing, they have l... (read more)

2
Magnus Vinding
Thanks for your comment. :) One reason I didn't use the term "zoo hypothesis" is that I've seen it defined in rather different ways. Relatedly, I'm unsure what you mean by zoo vs. natural reserve hypotheses/scenarios. How are these different, as you use these terms? Another question is whether proportions of zoos vs. natural reserves on Earth can necessarily tell us much about "zoos" vs. "natural reserves" in a cosmic context.

I think that UFOs are really a wildcard in x-risk research. 
1. Even if UFOs don't have any serious substance behind them, the fact that many serious military people and even presidents believed in them should update our prior about human irrationality and therefore increase our expectation that nuclear and AI risks will be mismanaged.

2. If UFOs have an interesting but not world-model-shattering explanation – e.g., they are a form of ball lightning – this opens the possibility of creating new weapons once their nature is understood.

3. If thei... (read more)

1
justsaying
I find your first point in contradiction with your later points. Your first point seems to say that taking them seriously implies a certain level of irrationality, but your later points imply that you yourself take them seriously, including some more out-there explanations.

Actually, I am going to write a short post someday: "Time machine as existential risk".

Technically, any time travel is possible only if the timeline is branching, but that is fine in a quantum multiverse. However, some changes in the past will be invariants: they will not change the future in a way that causes the grandfather paradox. Such invariants will be loopholes and will have very high measure. UFOs could be such invariants, and this explains their strangeness: only strange things do not change the future to prevent their own existence.

Thanks for these details!

The report doesn't mention any likelihood of any of these events happening

Maybe you are discussing a different report than the one I read. The one I read says:

In fact there is evidence of eruptions at Kivu in about 1000 year cycles and predictions based on observed accumulation rates (10-14% per year) suggest an eruption in the next 100-200 years, see Report [1].

2
Jeroen De Ryck 🔹
Sorry for my late reply, but thanks for mentioning. I edited my reply to remove that section.

The link is not working for me.

Answer by turchin

Buying a house is a bet on long AI timelines and on the absence of all other catastrophes.

Buying a house protects you against rent growth, income decline, and inflation, as well as against bad landlords and the time-consuming hunt for a rental.

Three arguments in favor of a war soon:

  1. Chips. While China's chip industry is being sanctioned, most of the world's production of advanced chips remains in Taiwan. But this will not last long, as manufacturing is starting to relocate to other places. An attack on Taiwan could prevent US dominance in chips and AI.
  2. The Ukraine war strained US production of some weapons, but the US has invested in increasing the production of missiles and artillery shells. So in the future the situation will be less advantageous for war.
  3. China now has a clear advantage in cheap mass-produced drones, like DJI. But this also will not last long.

How much are electricity, maintenance, and property tax for this venue? Historic buildings may require expensive restoration and are subject to complex regulation.

I think it is more interesting to think of other people as rational agents. If Bitcoin had grown to 100K, as was widely expected in 2021, SBF's bets would have paid off and he would have become the first trillionaire. He would also have been able to return all the money he took from creditors.

He may have understood that there was only about a 10 percent chance of becoming a trillionaire, but if he thought that a trillion dollars for preventing x-risks was the only chance to save humanity, then he knew that he should bet on this opportunity.

Now we live in a timeline where he lost, and it is more tempting to say that he was irrational or mistaken. But maybe he was not.

One thing that he didn't include in his EV calculations is the meta-level impact of failure on the popularity of EA and utilitarianism. Even a relatively small financial failure could have almost infinite negative utility if topics like x-risk prevention become very unpopular.

I am interested to see how wood gasification as an energy source for cars could be bootstrapped in the case of industrial collapse.

Another topic: how some cold-tolerant crops from northern regions (fodder beet, rutabaga) could be planted in the South in the case of nuclear winter. I already tried an experiment remotely (asked a friend), but it failed.

Also, creating a self-sustaining community on an island would be an interesting experiment.

1
Joel Becker
Please submit! :)

The relation between CO2 and warming is exponential – each doubling of CO2 gives a constant increase in temperature – so we need to count the number of doublings of CO2. Assuming that each doubling gives 2C, and that the existential concentration is about 22.6 times the pre-industrial one (22.6 ≈ 2^4.5), we get around 9C above the preindustrial level before we reach the tipping point.

In the article, the tipping point is above 4C (in the chart) plus 6C from the warmer world = 10C, which gives approximately the same result as I calculated above.
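A minimal arithmetic check of the doubling calculation above (the 2C-per-doubling sensitivity and the 22.6x concentration ratio are figures from this thread, not settled climatology):

```python
import math

# Figures from this thread (assumptions, not settled climatology):
warming_per_doubling = 2.0   # C of warming per doubling of CO2
concentration_ratio = 22.6   # existential / pre-industrial CO2 concentration

doublings = math.log2(concentration_ratio)   # ~4.5, since 2**4.5 ~ 22.6
warming = doublings * warming_per_doubling   # ~9 C

print(f"{doublings:.2f} doublings -> ~{warming:.1f}C above the preindustrial level")
```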

1
Vasco Grilo🔸
Thanks. The results of that article cannot be applied directly to the situation we are in, because the initial temperature of their aqua-planet  is 6 ºC higher than today's mean global temperature. From note (6.93) of What We Owe to the Future (see here): Indeed, from the Discussion of the article you mention: These concentrations of 4,480 and 8,960 p.p.m are 16.0 (=4480/280) and 32.0 (=8960/280) times the pre-industrial concentration, which suggests the existential CO2 concentration is 22.6 (= (16.0*32.0)^0.5) times as high as the pre-industrial one. Given the warming until now relative to pre-industrial levels of 1.04 ºC, and the current concentration of CO2 is 1.48 (= 414/280) times the pre-industrial one, it seems reasonable to expect the existential warming relative to the pre-industrial temperature is about 20 ºC (22.6/1.48*1.04 = 15.9), not 4 ºC.

I think that the difference between the tipping point and the existential temperature should be clarified. The tipping point is the temperature after which a self-sustaining loop of positive feedback starts. In the moist greenhouse paper it is estimated to be at +4C, after which the temperature jumps to +40C in a few years. If we take +4C above the preindustrial level, that is 1-3C above the current level.

1
Vasco Grilo🔸
Thanks for clarifying. I had understood that difference, but for me it is unclear from what you discuss here that the tipping point is only 4 ºC above pre-industrial temperature. Could you link to the specific paper you are referring to?

I didn't try to make any metaphysical claims. I just pointed at conditional probability: if someone is writing comments on LW, (s)he is (with very high probability) not an animal. Therefore, LW commentators are a special, non-random subset of all animals.

I think that two different conjectures are presented here:

"I am an animal" – therefore liquid water on the planet, etc.

"I am randomly selected from all animals".

The first is true and the second is false.

1
Ember
"I am randomly selected from all animals" I don't endorse this claim. It implies that my essence is prior to my existence, and I disagree with this assumption. I do believe I was once a soul placed into a random body within a set.  My essence follows from my existence, if I was different I would be someone else. I do stand by the claim, "I can reason as if I am randomly selected from all animals" this is true for any set I am a part of, if you did select a random member of that set I am a possible result, some sets just give unintuitive results, but that's simply because reasoning from a particular positionality only gives part of the picture.  Anthropic shadow only requires the later epistemic claim to be valid and is not dependent on the metaphysical claim.

From a climate point of view, we need to estimate not only the warming but also the speed of warming, as a higher speed gives a higher concentration of methane (and this differential equation has an exponential solution; see the toy sketch below). Anthropogenic global warming is special, as it has a very high speed of CO2 emission that has never happened before. We also have the highest-ever accumulation of methane hydrates. We could be past the tipping point but not know it yet, as exponential growth is slow in the beginning.
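A toy sketch of the speed point (my own construction, not from the comment; the feedback constant k and the units are illustrative assumptions). If the methane release rate is proportional to the methane already released, dM/dt = kM gives exponential growth that looks deceptively slow at first:

```python
import math

M0 = 1.0  # initial methane anomaly, arbitrary units (assumed)
k = 0.5   # feedback strength per decade (assumed)

# dM/dt = k*M has the exponential solution M(t) = M0 * exp(k*t)
for t in range(0, 11, 2):  # time in decades
    print(f"t = {t:2d} decades: M = {M0 * math.exp(k * t):7.1f}")
```

The early values look almost flat compared to where the curve ends up, which is the sense in which we could already be past the tipping point without noticing.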

From the SIA counterargument it follows that the anthropic shadow can't be very st... (read more)

1
Vasco Grilo🔸
I agree there is a difference between:
* The current temperature (T0).
* The maximum temperature which would be achieved if we reached net zero today (T1).
The 2nd of these is higher, so the lower bound for the existential additional warming is smaller than the 26.0 ºC I estimated above (for an anthropic shadow larger than 50 %). I also understand T1 may be a function of not only T0, but also of the current composition of the atmosphere, and the rate at which it has been changing.
However, how large do you think is the difference between T0 and T1? If it is of the order of magnitude of the warming until now relative to pre-industrial levels of 1 ºC, there is still a margin of about 25.0 ºC (= 26.0 - 1) to the existential tipping point. You mention that we may already have passed the existential tipping point, but that would imply a difference between T1 and T0 of more than 25.0 ºC, which seems very hard to believe.

I use an exponential prior to illustrate the example with a car. For other catastrophes, I take the tail of a normal distribution, where the probability declines very quickly, even hyperexponentially. The math there is more complicated, but it does not affect the main result: if we have an anthropic shadow, the expected survival time is around 0.1 of the past time over a wide range of initial parameters.

And in the situation of anthropic shadow, we have very limited information about the type of distribution. Exponential and normal seem to be the two most p... (read more)
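A minimal sketch of the logarithmic insensitivity behind the "around 0.1 of the past time" result, for the exponential case (my own illustration; here the anthropic shadow is expressed as the probability p of having survived the past period T):

```python
import math

T = 1.0  # past survival time, normalized

# If p = exp(-lam * T) is the probability of having survived so far, then
# lam = ln(1/p) / T, and by the memorylessness of the exponential
# distribution the expected remaining time is 1/lam = T / ln(1/p).
for p in (1e-2, 1e-4, 1e-6, 1e-8, 1e-10):
    remaining = T / math.log(1 / p)
    print(f"survival probability {p:.0e} -> remaining ~{remaining:.2f} of past time")
```

Across eight orders of magnitude in p, the expected remaining time only moves between roughly 0.04 and 0.22 of the past time – the 5-20 percent band that the spring example further down also uses.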

If you were a random animal, you would be an ant with 99.999999% probability. So either anthropics is totally wrong, or "animals" is the wrong reference class.

1
Ember
The fact that I am not an ant doesn't undermine it, because I know that I am human. Humans will always be humans, and so have a tendency to discover themselves to be humans. This selection effect is even more extreme than the tendency for humans to find themselves in worlds without a nuclear war. I could not be anything but what I am, as then I would not be myself. A reference class of just me is, however, not useful. So to do anthropic reasoning I conceive of myself as one of a set to which I belong and consider how general observation biases within that set might be misleading me. In the fine-tuned planet case, the fact that animals couldn't have occurred on a planet without liquid water is useful. The various contingencies around the seeming fine-tuning of the Earth are explained by my being an animal. I am where animals, and by extension me, could exist.
3
Ember
Yes. I am a random animal within the set of animals. I am also a random human, a random American, a random anthropics enthusiast, a random person on the EA Forum, a random non-binary person, a random Jewish person... etc. When considering different problems, I experience different forms of selection effects in different ways. For example, insofar as I am Jewish, I am more likely to exist in a world where the Nazis lost WW2. I am unsure how these different categories interact. I imagine that I am more likely to live in a world with more humans but fewer total animals than I am to live in a world with more animals but fewer humans. I take any category to be a legitimate starting point and am unsure how to weigh them against each other.

Sandberg recently published its summary on Twitter. He said that he used the frequency of near misses to estimate the power of the anthropic shadow and found that near misses were not suppressed during the period of large nuclear stockpiles, which is evidence against the anthropic shadow. I am not sure that this is true, as in earlier times the policy was riskier.

We don't know where the tipping point is, so an uninformed prior gives equal chances for any T between 0 and, say, 20C of additional temperature increase. In that case, a 2C increase is 2 times more likely to reach the tipping point than a 1C increase.

But the idea of the anthropic shadow tells us that the tipping point is likely to be at around 10 percent of the whole interval. And for 40C before the moist greenhouse, that is 4C. But, interestingly, the anthropic shadow also tells us that smaller intervals are increasingly unlikely. So a 1C increase is orders of magnitude less likely to cause a catastrophe than a 4C increase.

I will illus... (read more)
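For intuition, a toy numerical sketch of how conditioning on survival suppresses small tipping-point margins (my own construction: the survival length N, the fluctuation scale sigma, and the exponential hazard form are illustrative assumptions, not figures from this thread):

```python
import math

N = 100       # years of observed survival (assumed)
sigma = 1.0   # scale of yearly temperature fluctuations, in C (assumed)

def posterior_weight(margin_c):
    # Yearly fluctuations cross a margin m with probability exp(-m/sigma);
    # surviving N years then has probability exp(-N * exp(-m/sigma)).
    # Uniform prior over the margin times this survival likelihood:
    return math.exp(-N * math.exp(-margin_c / sigma))

for m in (1, 2, 4, 8):
    print(f"tipping point {m}C away: relative posterior weight {posterior_weight(m):.1e}")
```

Under these made-up numbers, a 1C margin comes out many orders of magnitude less likely than a 4C margin, matching the qualitative claim above.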

2
Vasco Grilo🔸
Thanks for the reply! Your calculations apply to an exponential distribution. Do we have reasons to choose an exponential prior over a uniform/loguniform prior for the location of the existential tipping point? I guess one possible disadvantage of the exponential prior is the lack of a maximum (which should arguably be assumed given our knowledge about the moist greenhouse), but this could be solved by using a truncated exponential.

The anthropic shadow applies not to humanity, but to the underlying conditions under which we can survive.

For example, the waves of asteroid bombardment come every 30 million years, but not exactly every 30 million.

The timing of the next wave is normally distributed around 30 million years with a standard deviation of, say, 1 million years. If 33 million years have passed without one, it means that we are 3 sigmas past the mean.
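A quick check of the 3-sigma figure, using only the numbers from the comment above:

```python
from statistics import NormalDist

mean, sigma = 30.0, 1.0   # wave period in million years, as in the comment
elapsed = 33.0

z = (elapsed - mean) / sigma
p_tail = 1 - NormalDist().cdf(z)
print(f"z = {z:.1f}; P(no wave after {elapsed:.0f} Myr) ~ {p_tail:.5f}")  # ~0.00135
```

So only about 0.1 percent of observers would find themselves this far past the expected wave – the kind of tail that anthropic-shadow reasoning points at.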

1
Vasco Grilo🔸
I see, thanks!

Imagine, as a toy example, a tensed spring described by Hooke's law: Fs = kx.

Imagine also that we can observe only those springs that are stretched far beyond their normal breaking point – this is a model of the anthropic shadow.

From the logarithmic nature of the relation between remaining life expectancy and the power (probability of past survival) of the anthropic shadow, it follows that for almost any anthropic shadow the remaining life expectancy is between 5 and 20 percent of the past survival time; let's call it dA.

For a tensed spring, it means that its additional lengt... (read more)

1
Vasco Grilo🔸
Thanks for clarifying! Let me see if I have understood your argument:
* If the probability of having avoided existential catastrophe due to climate change until now is smaller than p_max = 0.1 %, the "half-warming" is smaller than HW_max = 0.100 (= -1/log2(p_max)).
* So, given the lack of memory of the exponential distribution, the mean additional warming until existential catastrophe due to climate change is smaller than 14.5 % (= HW_max/log(2) = 1/log(1/p_max)) of the maximum historical warming until now.
* Based on this article, temperature was 18 ºC (= (90-58)/1.8) higher than now 250 Myear ago. This means the existential additional warming is smaller than 2.61 ºC (= 18*14.5 %).
This is in agreement with your conclusion that "the tipping point could lie not in tens but in single digits of temperature increase (that is, between 1.5C and 4.5C, if we just divide on 10 the above estimate)". However, why should the anthropic shadow be smaller than 0.1 %?
* As the anthropic shadow tends to 1, the existential warming tends to infinity.
* Given that we are still here, I think the probability of 18 ºC of warming not having led to an existential catastrophe 250 Myear ago should be larger than 50 % (instead of smaller than 0.1 %). In this case, the existential additional warming relative to today's temperature would be larger than 26.0 ºC (= 18/log(1/0.5)) for an exponential prior.
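A quick check of the arithmetic in the reply above (all figures come from that reply, not from an independent source):

```python
import math

p_max = 0.001                   # assumed maximum probability of past survival
HW_max = -1 / math.log2(p_max)  # "half-warming" -> 0.100
frac = HW_max / math.log(2)     # = 1/ln(1/p_max) -> 0.145
dT_hist = 18.0                  # ~ (90 - 58) / 1.8 C warmer 250 Myr ago, rounded

print(f"HW_max = {HW_max:.3f}, warming fraction = {frac:.3f}")
print(f"existential warming < {dT_hist * frac:.2f}C for p = 0.1%")            # ~2.61C
print(f"existential warming > {dT_hist / math.log(1 / 0.5):.1f}C for p = 50%")  # ~26.0C
```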

Yes, I agree. Two more points:

Not all of the population counts, but only those who can think about anthropics. A nuclear war will disproportionately destroy cities with universities, so the population of scientists could decline 10 times, while the rest of the population would only be halved.

Anthropic shadow means higher fragility: we underestimate how easy it is to trigger a nuclear war. Escalation is much easier. Accidents are more likely to be misinterpreted. 

3
Ember
I am skeptical of this first claim about anthropics. To me, it seems like every observer type can be relevant. The particular type used in the context of anthropic reasoning is ultimately subjective and is chosen insofar as it is illuminating. I agree that people thinking about anthropics are particularly unlikely after nuclear war.

Yes, I agree. This paper is framed around the fine-tuning argument because the multiverse undermining the teleological interpretation is uncontroversial within the theory, and so extending this argument to nuclear war doesn't require someone who accepts anthropic shadow to accept the conclusion of this paper. I happen to believe in anthropic shadow, and such a belief implies our situation is worse than what is implied by this paper.

If there is obvious global runaway warming – like +6C everywhere and growing month by month – people will demand that we "do something" about it and will accept attempts to use nuclear explosions to stop it.

I don't currently have access to the document "Nuclear war near misses and anthropic shadows".

1
Ember
Ah, that's too bad, do you have the email of anyone who would?

An artificial nuclear winter could be created by one strong actor unilaterally; no coordination is needed.

Such a nuclear winter may last a few years and naturally resolve back to normality. During this process, two things could happen: either the tipping-point conditions also stop (e.g., the methane leakage ends), or we create a more permanent solution to our problem, like a more stable form of geoengineering.

The artificial nuclear winter doesn't need to be very strong (in the -2 to -3C range), so no major disruption of food production would happen.

2
Ember
I understand it could be done by one strong actor unilaterally; I simply wonder if I could reasonably support such an action being taken unilaterally. This paper is what sold me on this position: https://nickbostrom.com/papers/unilateralist.pdf

I think you are overestimating what could be accomplished during this time period; I imagine that most people would become hostile to any movement which had just intentionally triggered a nuclear winter. Do you have a source on how disruptive nuclear winters would be to food production? I am skeptical.

On an unrelated note, I see that you cite "Nuclear war near misses and anthropic shadows", which is marked as being in preparation. I wrote an essay that I imagine is similar, titled "Nuclear fine-tuning". I am wondering if you have access to this document and if you could send it my way, as I would like to read it to see what gaps in my arguments it might fill in. My essay can be found here: https://forum.effectivealtruism.org/posts/Gg2YsjGe3oahw2kxE/nuclear-fine-tuning-how-many-worlds-have-been-destroyed

I discuss different arguments against the anthropic shadow in my new post; maybe it would be interesting for you: https://forum.effectivealtruism.org/posts/bdSpaB9xj67FPiewN/a-pin-and-a-balloon-anthropic-fragility-increases-chances-of

Here: https://forum.effectivealtruism.org/posts/bdSpaB9xj67FPiewN/a-pin-and-a-balloon-anthropic-fragility-increases-chances-of

I think yes. We need a completely new science of "urgent geoengineering" – that is, something like creating an artificial nuclear winter via controlled forest fires, which would give us a few years to develop better methods or to reverse the dangerous trend.

Six years ago I tried to create a more detailed plan (it may be obsolete, but it is what I have):

The chart: http://immortality-roadmap.com/warming3.pdf

And its explanation: https://forum.effectivealtruism.org/posts/C3F87C8r6QFXwnwqp/the-map-of-global-warming-prevention

2
jackva
Thanks, will have a look!

I am going to have a post about the risks of runaway global warming soon.

1[comment deleted]

Hi!

I have a different understanding of the moist greenhouse based on what I've read. You said (oversimplifying) that the threshold for the moist greenhouse is 67C and that the main risk from it is ocean evaporation.

But in my understanding, 67C is the level of the moist greenhouse climate itself. According to some models, the climate will be stable at this level. A 67C mean temperature seems almost lethal to humans, but some people could survive on high mountains.

However, the threshold for the moist greenhouse, that is, the tipping point after which the ... (read more)

3[anonymous]
Thanks for this. Someone else raised some issues with the moist greenhouse bit, and I need to revise. I still think the Ord estimate is too high, but I think the discussion in the report could be crisper. I'll revert back once I've made changes

There are two translations into Russian. One from 2009, in which Igor participated, is here: https://proza.ru/avtor/unau&book=4#4

But in 2020 a professional translation was made; it is available here: https://ubq124.wordpress.com/2019/12/22/the-hedonistic-imperative-pdf/

I think that downvoters didn't like the word "resurrection".

2
Guy Raveh
Can confirm this is the main reason for my downvote.