All of turchin's Comments + Replies

Actually, I am going to write a short post someday, "Time machine as existential risk".

Technically, any time travel is possible only if the timeline is branching, but that is fine in a quantum multiverse. However, some changes in the past will be invariants: they will not change the future in a way that causes a grandfather paradox. Such invariants will be loopholes and will have very high measure. UFOs could be such invariants, and this explains their strangeness: only strange things do not change the future in a way that prevents their own existence.

Thanks for these details!

The report doesn't mention any likelihood of any of these events happening

Maybe you are discussing a different report than the one I read. The one I read says:

In fact there is evidence of eruptions at Kivu in about 1000 year cycles and predictions based on observed accumulation rates (10-14% per year) suggest an eruption in the next 100-200 years, see Report [1].

2
Jeroen De Ryck
1y
Sorry for my late reply, but thanks for mentioning. I edited my reply to remove that section.

The link is not working for me.

Answer by turchin · Dec 18, 2022 · 2
1
0

Buying a house is a bet on long AI timelines and the absence of all other catastrophes.

Buying a house protects you against rent growth, income decline and inflation, as well as bad landlords and time-consuming rent hunting.

Three arguments in favor of a war soon:

  1. Chips. While China's chip industry is being sanctioned, most of the world's production of advanced chips remains in Taiwan. But this will not last long, as manufacturing is starting to relocate to other places. An attack on Taiwan could prevent US dominance in chips and AI.
  2. The Ukraine war has strained US production of some weapons, but the US has invested in increasing the production of missiles and artillery shells. So in the future the situation will be less advantageous for war.
  3. China has a clear advantage now in cheap mass-produced drones, like DJI. But this will also not last long.

How much are electricity, maintenance and property tax for this venue? Historic buildings may require expensive restoration and are subject to complex regulation.

I think it is more interesting to think of other people as rational agents. If bitcoin had grown to 100K, as was widely expected in 2021, SBF's bets would have paid off and he would have become the first trillionaire. He would also have been able to return all the money he took from creditors.

He may have understood that there was only something like a 10 per cent chance of becoming a trillionaire, but if he thought that a trillion dollars for preventing x-risks was the only chance to save humanity, then he knew he should bet on this opportunity.

Now we live in a timeline where he lost and it is more tempting to say that he was irrational or mistaken. But maybe he was not.

One thing that he didn't include in his EV calculations is the meta-level impact of failure on the popularity of EA and utilitarianism. Even a relatively small financial failure could have almost infinite negative utility if topics like x-risk prevention become very unpopular.

I am interested to see how wood gasification as an energy source for cars could be bootstrapped in the case of industrial collapse.

Another topic: how some cold-tolerant crops from northern regions (fodder beet, rutabaga) could be planted in the South in the case of nuclear winter. I already tried an experiment remotely (asked a friend), but it failed.

Also, creating a self-sustaining community on an island would be an interesting experiment.

1
Joel Becker
1y
Please submit! :)

The relation between CO2 and warming is logarithmic: every doubling of CO2 gives a constant increase in temperature, so we need to count the number of doublings of CO2. Assuming that each doubling gives 2 ºC and that 2^4.5 ≈ 22.6, we get around 9 ºC above the pre-industrial level before we reach the tipping point.

In the article, the tipping point is above 4 ºC (in the chart), plus 6 ºC from the warmer world = 10 ºC, which gives approximately the same result as I calculated above.
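A minimal numeric sketch of this back-of-the-envelope calculation (assuming, as discussed in this thread, 2 ºC of warming per doubling of CO2 and an existential CO2 concentration about 22.6 times the pre-industrial one):

```python
import math

climate_sensitivity = 2.0   # assumed warming per doubling of CO2, in ºC
existential_ratio = 22.6    # assumed existential CO2 concentration / pre-industrial

doublings = math.log2(existential_ratio)      # ~4.5 doublings
warming = doublings * climate_sensitivity     # ~9 ºC above pre-industrial
print(f"{doublings:.1f} doublings -> ~{warming:.0f} ºC above pre-industrial")
```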

1
Vasco Grilo
2y
Thanks. The results of that article cannot be applied directly to the situation we are in, because the initial temperature of their aqua-planet  is 6 ºC higher than today's mean global temperature. From note (6.93) of What We Owe to the Future (see here): Indeed, from the Discussion of the article you mention: These concentrations of 4,480 and 8,960 p.p.m are 16.0 (=4480/280) and 32.0 (=8960/280) times the pre-industrial concentration, which suggests the existential CO2 concentration is 22.6 (= (16.0*32.0)^0.5) times as high as the pre-industrial one. Given the warming until now relative to pre-industrial levels of 1.04 ºC, and the current concentration of CO2 is 1.48 (= 414/280) times the pre-industrial one, it seems reasonable to expect the existential warming relative to the pre-industrial temperature is about 20 ºC (22.6/1.48*1.04 = 15.9), not 4 ºC.

I think that the difference between the tipping point and the existential temperature should be clarified. The tipping point is the temperature after which a self-sustaining loop of positive feedback starts. In the moist greenhouse paper it is estimated to be at +4 ºC, after which the temperature jumps to +40 ºC in a few years. If we take +4 ºC above the pre-industrial level, it will be 1-3 ºC above the current level.

1
Vasco Grilo
2y
Thanks for clarifying. I had understood that difference, but for me it is unclear from what you discuss here that the tipping point is only 4 ºC above pre-industrial temperature. Could you link to the specific paper you are referring to?

I didn't try to make any metaphysical claims. I just pointed to a conditional probability: if someone is writing comments on LW, (s)he is (with very high probability) not an animal. Therefore LW commentators are a special, non-random subset of all animals.

I think two different conjectures are being presented here:

"I am animal" - therefore liquid water on the planets etc.

"I am randomly selected from all animals".

The first is true and the second is false.

1
Ember
2y
"I am randomly selected from all animals" I don't endorse this claim. It implies that my essence is prior to my existence, and I disagree with this assumption. I do believe I was once a soul placed into a random body within a set.  My essence follows from my existence, if I was different I would be someone else. I do stand by the claim, "I can reason as if I am randomly selected from all animals" this is true for any set I am a part of, if you did select a random member of that set I am a possible result, some sets just give unintuitive results, but that's simply because reasoning from a particular positionality only gives part of the picture.  Anthropic shadow only requires the later epistemic claim to be valid and is not dependent on the metaphysical claim.

From the climate point of view, we need to estimate not only the warming but also the speed of warming, as a higher speed gives a higher concentration of methane (and this differential equation has an exponential solution). Anthropogenic global warming is special, as it has a very high speed of CO2 emission that has never happened before. We also have the highest-ever accumulation of methane hydrates. We could be past the tipping point but not know it yet, as exponential growth is slow in the beginning.

From the SIA counterargument it follows that the anthropic shadow can't be very st... (read more)

1
Vasco Grilo
2y
I agree there is a difference between:
* The current temperature (T0).
* The maximum temperature which would be achieved if we reached net zero today (T1).
The 2nd of these is higher, so the lower bound for the existential additional warming is smaller than the 26.0 ºC I estimated above (for an anthropic shadow larger than 50 %). I also understand T1 may be a function of not only T0, but also of the current composition of the atmosphere, and the rate at which it has been changing. However, how large do you think is the difference between T0 and T1? If it is of the order of magnitude of the warming until now relative to pre-industrial levels of 1 ºC, there is still a margin of about 25.0 ºC (= 26.0 - 1) to the existential tipping point. You mention that we may already have passed the existential tipping point, but that would imply a difference between T1 and T0 of more than 25.0 ºC, which seems very hard to believe.

I use an exponential prior to illustrate the example with a car. For other catastrophes, I take the tail of a normal distribution, where the probability declines very quickly, even hyperexponentially. The math there is more complicated, but it does not affect the main result: if we have an anthropic shadow, the expected survival time is around 0.1 of the past time across a wide range of initial parameters.

And in a situation of anthropic shadow we have very limited information about the type of distribution. Exponential and normal seem to be the two most p... (read more)
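A minimal sketch of the exponential (memoryless) case, assuming that the "power" of the anthropic shadow is the probability p of having survived the past period (as in the spring example later in this thread): the implied hazard rate is ln(1/p) divided by the past time, so the expected remaining time is the past time divided by ln(1/p). Because the fraction changes only logarithmically with p, it stays in a narrow band for a wide range of shadow strengths.

```python
import math

def remaining_fraction(p_survival: float) -> float:
    """Expected remaining time as a fraction of past survival time, exponential model."""
    # If P(survived the past period) = p, the hazard rate is ln(1/p) / T_past,
    # so E[remaining] = T_past / ln(1/p).
    return 1.0 / math.log(1.0 / p_survival)

for p in (1e-1, 1e-2, 1e-3, 1e-5, 1e-9):
    print(f"shadow p = {p:.0e}: remaining ~ {remaining_fraction(p):.2f} of past time")
# Output: 0.43, 0.22, 0.14, 0.09, 0.05 -- roughly 5-20 % across most of this range.
```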

If you were a random animal, you would be an ant with 99.999999% probability. So either anthropics is totally wrong, or "animals" is the wrong reference class.

1
Ember
2y
The fact that I am not an ant doesn't undermine it because I know that I am human. Humans will always be humans, and so have a tendency to discover themselves to be humans. This selection effect is even more extreme than the tendency for humans to find themselves in worlds without a nuclear war.  I could not be anything but what I am, as then I would not be myself. A reference class of just me is however not useful. So to do anthropic reasoning I conceive of myself as one of a set to which I belong and consider how general observation biases within that set might be misleading me.  In the fine-tuned planet case the fact that Animals couldn't have occurred on a planet without liquid water is useful. The various contingencies around the seeming fine-tuning of the earth are explained by my being an animal. I am where animals, and by extension me, could exist. 
3
Ember
2y
Yes. I am a random animal within the set of animals. I am also a random human,  a random American, a random anthropics enthusiast, a random person on the EA forum, a random non-binary person, a random Jewish person... etc  When considering different problems I experience different forms of selection effects in different ways. For example, Insofar as I am Jewish I am more likely to exist in a world where the nazis lost ww2.  I am unsure how these different categories interact. I imagine that I am more likely to live in a world with more humans, but fewer total animals than I am to live in a world with more animals but fewer humans. I take any category to be a legitimate starting point and am unsure how to weigh them against each other. 

Sandberg recently published a summary of it on Twitter. He said that he uses the frequency of near misses to estimate the power of the anthropic shadow and found that near misses were not suppressed during the period of large nuclear stockpiles, which is evidence against the anthropic shadow. I am not sure that this is true, as in earlier times the policy was riskier.

We don't know where the tipping point is, so an uninformed prior gives equal chances for any T between 0 and, say, 20 ºC of additional temperature increase. In that case, a 2 ºC increase is only 2 times more likely to cross the tipping point than a 1 ºC increase.

But the idea of the anthropic shadow tells us that the tipping point is likely to be at around 10 per cent of the whole interval. And for 40 ºC before the moist greenhouse, that is 4 ºC. But, interestingly, the anthropic shadow also tells us that smaller intervals are increasingly unlikely. So a 1 ºC increase is orders of magnitude less likely to cause a catastrophe than a 4 ºC increase.

I will illus... (read more)

2
Vasco Grilo
2y
Thanks for the reply! Your calculations apply to an exponential distribution. Do we have reasons to choose an exponential prior over a uniform/loguniform prior for the location of the existential tipping point? I guess one possible disadvantage of the exponential prior is the lack of a maximum (which should arguably be assumed given our knowledge about moisture greenhouse), but this could be solved by using a truncated exponential.

The anthropic shadow applies not to humanity, but to the underlying conditions under which we can survive.

For example, the waves of asteroid bombardment come every 30 million years, but not exactly every 30 million.

The timing of the next wave is normally distributed around 30 million years with a standard deviation of, say, 1 million years. If 33 million years have passed without it, it means that we are 3 sigmas past the mean.
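A quick check of this toy model (a sketch, assuming the stated mean of 30 Myr and a standard deviation of 1 Myr):

```python
from scipy.stats import norm

mean, sd, elapsed = 30.0, 1.0, 33.0   # Myr since the last wave (toy numbers)

# Probability that the next wave has still not arrived after 33 Myr (3 sigmas past the mean).
p_no_wave_yet = norm.sf((elapsed - mean) / sd)
print(f"P(no wave after {elapsed:.0f} Myr) = {p_no_wave_yet:.2%}")   # ~0.13%
```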

1
Vasco Grilo
2y
I see, thanks!

Imagine, as a toy example, a tensed spring described by Hooke's law: Fs = kx.

Imagine also that we can observe only those springs that are stretched far beyond their normal breaking point - this is a model of the anthropic shadow.

From the logarithmic nature of the relation between the remaining life expectancy and the power (probability of past survival) of the anthropic shadow, it follows that for almost any anthropic shadow the remaining life expectancy is between 5 and 20 per cent of the past survival time; let's call it dA.

For a tensed spring it means that its additional lengt... (read more)

1
Vasco Grilo
2y
Thanks for clarifying! Let me see if I have understood your argument:
* If the probability of having avoided existential catastrophe due to climate change until now is smaller than p_max = 0.1 %, the "half-warming" is smaller than HW_max = 0.100 (= -1/log2(p_max)).
* So, given the lack of memory of the exponential distribution, the mean additional warming until existential catastrophe due to climate change is smaller than 14.5 % (= HW_max/log(2) = 1/log(1/p_max)) of the maximum historical warming until now.
* Based on this article, temperature was 18 ºC (= (90-58)/1.8) higher than now 250 Myear ago. This means the existential additional warming is smaller than 2.61 ºC (= 18*14.5 %).
This is in agreement with your conclusion that "the tipping point could lie not in tens but in single digits of temperature increase (that is, between 1.5C and 4.5C, if we just divide on 10 the above estimate)". However, why should the anthropic shadow be smaller than 0.1 %?
* As the anthropic shadow tends to 1, the existential warming tends to infinity.
* Given that we are still here, I think the probability of 18 ºC of warming not having led to an existential catastrophe 250 Myear ago should be larger than 50 % (instead of smaller than 0.1 %). In this case, the existential additional warming relative to today's temperature would be larger than 26.0 ºC (= 18/log(1/0.5)) for an exponential prior.
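A quick numeric check of the arithmetic in the reply above (a sketch; "log" is taken as the natural logarithm, except where log2 is written):

```python
import math

p_max = 0.001                       # assumed upper bound on the anthropic shadow
hw_max = -1 / math.log2(p_max)      # "half-warming" bound, ~0.100
frac = hw_max / math.log(2)         # = 1 / math.log(1 / p_max), ~0.145

hist_warming = (90 - 58) / 1.8      # ~18 ºC warmer than now, 250 Myr ago
print(hist_warming * frac)                  # ~2.6 ºC upper bound on additional warming
print(hist_warming / math.log(1 / 0.5))     # ~26 ºC if the shadow is 50 % instead of 0.1 %
```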

Yes, agree. Two more points:

Not all of the population counts, but only those who can think about anthropics. A nuclear war will disproportionately destroy cities with universities, so the population of scientists could decline 10 times, while the rest of the population would only be halved.

Anthropic shadow means higher fragility: we underestimate how easy it is to trigger a nuclear war. Escalation is much easier. Accidents are more likely to be misinterpreted. 

3
Ember
2y
I am skeptical of this first claim about anthropics. To me, it seems like every observer type can be relevant. The particular type used in the context of anthropic reasoning is ultimately subjective and is chosen insofar as it is illuminating. I agree that people thinking about anthropics are particularly unlikely after nuclear war. Yes, I agree. This paper is framed around the fine-tuning argument because the multiverse undermining the teleological interpretation is uncontroversial within the theory, and so extending this argument to nuclear war doesn't require someone who accepts anthropic shadow to accept the conclusion of this paper. I happen to believe in anthropic shadow, and such a belief implies our situation is worse than what is implied by this paper. 

If there is obvious runaway global warming, like +6 ºC everywhere and growing month by month, people will demand that we "do something" about it and will accept attempts to use nuclear explosions to stop it.

I don't currently have access to the document "Nuclear war near misses and anthropic shadows".

1
Ember
2y
Ah, that's too bad, do you have the email of anyone who would?

An artificial nuclear winter could be created by one strong actor unilaterally. No coordination is needed.

Such a nuclear winter may last a few years and naturally resolve back to normality. During this process, two things could happen: either the tipping-point conditions also stop (for example, the methane leakage ends), or we create a more permanent solution to our problem, like a more stable form of geoengineering.

The artificial nuclear winter doesn't need to be very strong (in the -2 to -3 ºC range), so no major disruption of food production would happen.

2
Ember
2y
I understand it could be done by one strong actor unilaterally, I simply wonder if I could reasonably support such an action being taken unilaterally. This paper is what sold me on this position: https://nickbostrom.com/papers/unilateralist.pdf I think you are overestimating what could be accomplished during this time period; I imagine that most people would become hostile to any movement which just intentionally triggered a nuclear winter. Do you have a source on how disruptive nuclear winters would be to food production? I am skeptical. On an unrelated note, I see that you cite "Nuclear war near misses and anthropic shadows", which is marked as being in preparation. I wrote an essay that I imagine is similar, titled "Nuclear fine-tuning". I am wondering if you have access to this document and if you could send it my way, as I would like to read it to see what gaps in my arguments it might fill in. My essay can be found here: https://forum.effectivealtruism.org/posts/Gg2YsjGe3oahw2kxE/nuclear-fine-tuning-how-many-worlds-have-been-destroyed

I discuss different arguments against the anthropic shadow in my new post; maybe it would be interesting for you: https://forum.effectivealtruism.org/posts/bdSpaB9xj67FPiewN/a-pin-and-a-balloon-anthropic-fragility-increases-chances-of

here https://forum.effectivealtruism.org/posts/bdSpaB9xj67FPiewN/a-pin-and-a-balloon-anthropic-fragility-increases-chances-of

I think, yes. We need a completely new science of "urgent geoengineering" - that is, something like creating an artificial nuclear winter via controlled forest fires, which would give us a few years of time to develop better methods or to reverse the dangerous trend.

Six years ago I tried to create a more detailed plan (it may be obsolete, but it is what I have):

The chart: http://immortality-roadmap.com/warming3.pdf

And its explanation: https://forum.effectivealtruism.org/posts/C3F87C8r6QFXwnwqp/the-map-of-global-warming-prevention

2
jackva
2y
Thanks, will have a look!

I am going to have a post about the risks of runaway global warming soon.

1
[comment deleted]
2y

Hi!

I have a different understanding of the moist greenhouse based on what I've read. You said (oversimplifying) that the threshold for the moist greenhouse is 67 ºC and that the main risk from it is ocean evaporation.

But in my understanding, 67 ºC is the level of the moist greenhouse climate. According to some models, the climate will be stable at this level. A mean temperature of 67 ºC seems almost lethal to humans, but some people could survive on high mountains.

However, the threshold of the moist greenhouse, that is, the tipping point after which the ... (read more)

3
[anonymous]
2y
Thanks for this. Someone else raised some issues with the moist greenhouse bit, and I need to revise. I still think the Ord estimate is too high, but I think the discussion in the report could be crisper. I'll revert back once I've made changes

There are two translations into Russian. One from 2009, in which Igor participated, is here: https://proza.ru/avtor/unau&book=4#4

But in 2020 a professional translation was made, and it is available here: https://ubq124.wordpress.com/2019/12/22/the-hedonistic-imperative-pdf/

I think that downvoters didn't like the word "resurrection".

2
Guy Raveh
2y
Can confirm this is the main reason for my downvote.

If we know that an extinction event is inevitable soon, like an asteroid impact, I think it will be reasonable to try to create remnants, perhaps on the Moon, that could provide possible aliens with information about humanity or even help to resurrect human beings.

5
Gavin
2y
I suspect downvoters are misunderstanding "know" and "will be"; I think Turchin meant "If we knew" and "it would [then] be reasonable" (subjunctive).

I also have an article about survival on islands and have been thinking about surviving in caves. The topic of survival on ships is a really interesting one, and I hope to turn to it someday, but for now I am working on other problems.

1
Tom Gardiner
2y
That was really interesting to read. Let me know if you intend to continue down this line of research!

Hi!

I got the following message after pressing Apply:
"The form Future Fund: Application for Funding is no longer accepting responses.
Try contacting the owner of the form if you think this is a mistake." at https://docs.google.com/forms/d/e/1FAIpQLScp_pbbqS2OeecQlo_perE6Vz8mcKBivtRAfBSfKyDicZkEiQ/closedform

 

Is your program closed, or is it an error on my side? I wanted to apply on the topic "Infrastructure to recover after catastrophes".

3
ketanrama
2y
Hi turchin - we're not currently accepting applications, and we don't know if or when we will do so in the future. If we do decide to accept applications again, i.e. to run another open call, we'll announce it on ftxfuturefund.org and here on the EA Forum. Thanks!
5
Pablo
2y
Future Fund:

To collect all that information we need superintelligent AI, and actually we don't need all vibrations, but only the most relevant pieces of data - the data capable of predicting human behaviour. Such data could be collected from texts, photos, DNA and historical simulations - but it is better to invest in personal life-logging to increase one's chances of being resurrected.

1
Jeffrey Kursonis
2y
Can you point me to more writing on this and tell me the history of it.

I am serious about the resurrection of the dead; there are several ways, including running a simulation of the whole history of mankind and filling the knowledge gaps with random noise, which, thanks to Everett, will be correct in one of the branches. I explained this idea in a longer article: You Only Live Twice: A Computer Simulation of the Past Could be Used for Technological Resurrection

I need to clarify my views: I want to save humans first and, after that, save all animals, from the closest to humans to the more remote. By "saving" I mean resurrection of the dead, of course. I am for the resurrection of mammoths and for cryonics for pets. Such a framework will eventually save everyone, so in the limit it converges with other approaches to saving animals.

But "saving humans first" gives us a leverage, because we will have more powerful civilisation which will have higher capacity to do more good. If humans will extinct now, animals will ... (read more)

7
Harrison Durland
2y
I’m afraid you’ve totally lost me at this point. Saving mammoths?? Why?? And are you seriously suggesting that we can resurrect dead people whose brains have completely decayed? What? And what is this about saving humans first? No, we don’t have to save every human first, we theoretically only need to save enough so that the process of (whatever you’re trying to accomplish?) can continue. If we are strictly welfare-maximizing without arbitrary speciesism, it may mean prioritizing saving some of the existing animals over every human currently (although this may be unlikely). To be clear, I certainly understand that you aren’t saying you only care about saving your own life, but the post gives off those kinds of vibes nonetheless.

The transition from "good" to "wellbeing" seems rather innocent, but it opens the way to a rather popular line of reasoning: that we should care only about the number of happy observer-moments, without caring whose moments they are. Extrapolating, we stop caring about real humans but start caring about possible animals. In other words, it opens the way to a pure utilitarian-open-individualist bonanza, where the value of human life and individuality is lost and the badness of death is ignored. The last point is the most important for me, as I view irreversible mor... (read more)

8
Harrison Durland
2y
To be totally honest, this really gives off vibes of "I personally don't want to die and I therefore don't like moral reasoning that even entertains the idea that humans (me) may not be the only thing we should care about." Gee, what a terrible world it might be if we "start caring about possible animals"!  Of course, that's probably not what you're actually/consciously arguing, but the vibes are still there. It particularly feels like motivated reasoning  when you gesture to abstract, weakly-defined concepts like the "value of human life and individuality" and imply they should supersede concepts like wellbeing, which, when properly defined and when approaching questions from a utilitarian framework, should arguably subsume everything morally relevant.  You seem to dispute the (fundamental concept? application?) of utilitarianism for a variety of reasons—some of which (e.g., your very first example regarding the fog of distance) I see as reflecting a remarkably shallow/motivated (mis)understanding of utilitarianism, to be honest. (For example, the fog case seems to not understand that utilitarian decision-making/analysis is compatible with decision-making under uncertainty.) If you'd like to make a more compelling criticism that stems from rebuffing utilitarianism, I would strongly learning more about the framework from people who at least decently understand and promote/use the concept, such as here: https://www.utilitarianism.net/objections-to-utilitarianism#general-ways-of-responding-to-objections-to-utiliarianism  

The problem with (1) is that it assumes that the fuzzy set of well-being has a subset of "real goodness" inside it, but we just don't know how to define it correctly. It could be that the real goodness lies outside well-being. In my view, reaching radical life extension and death reversal is more important than well-being, if well-being is understood as a comfortable, healthy life.

The fact that an organisation is doing good assumes that some concept of good exists in it. And we can't do good effectively without measuring it, which requires even str... (read more)

1
Harrison Durland
2y
In short, I don't find your arguments persuasive, and I think they're derived from some errors such as equivocation, weird definitions, etc. First of all, I don't understand the conflict here—why would you want life extension/death reversal if not to improve wellbeing? Wellbeing is almost definitionally what makes life worth living; I think you simply may not be translating or understanding "wellbeing" correctly. Furthermore, you don't seem to offer any justification for that view: what could plausibly make life extension and death-reversal more valuable than wellbeing (given that wellbeing is still what determines the quality of life of the extended lives). You can assert things as much as you'd like, but that doesn't justify the claims. Someone does not need to objectively, 100% confidently "know" what is "good" nor how to measure it if various rough principles, intuition, and partial analysis suffices. Maybe saving people from being tortured or killed isn't good—I can't mathematically or psychologically prove to you why it is good—but that doesn't mean I should be indifferent about pressing a button which prevents 100 people from being tortured until I can figure out how to rigorously prove what is "good." This almost feels like a non-sequitur that fails to explicitly make a point, but my impression is that it's saying "it's inconsistent/contradictory to think that we can decide what organizations should do but not be able to align AI." 1) This and the following paragraph still don't address my second point from my previous comment, and so you can't say "well, I know that (2) is a problem,  but I'm talking about the inconsistency"—a sufficient justification for the inconsistency is (2) all by itself; 2) The reason we can do this with organizations more comfortably is that mistakes are far more corrigible, whereas with sufficiently powerful AI systems, screwing up the alignment/goals may be the last meaningful mistake we ever make.  I very slightly agree with