All of avturchin's Comments + Replies

Hi! Just saw your comment. I am the author and I will write to you.

Actually, I expected Gott's equation to be mentioned here, as his Doomsday argument is a contemporary version of Laplace's equation.

Also, qualified observers are not distributed uniformly inside this period of time, from the idea of AI to the creation of AI. If we assume that qualified observers are those who are interested in AI timing, then it looks like such people are much more numerous closer to the end of the period. As a result, a random qualified observer should expect to find themselves closer to the end of the period. If the number of qualified observers is growing exponentially, the median observer is just one doubling before the end. This shifts the prior on AI timing closer to current events.
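The claim about the median observer can be checked with a toy discrete model (the number of periods and the exact doubling schedule below are illustrative assumptions, not from the comment):

```python
# Toy model: the number of qualified observers doubles every period.
# If observers appear at times t = 0..T with population 2**t, where does
# the median observer (taken over all observers who ever exist) fall?

T = 20  # number of doubling periods (illustrative assumption)
counts = [2 ** t for t in range(T + 1)]
total = sum(counts)  # equals 2**(T+1) - 1

# Walk forward in time until we pass half of all observers ever.
cumulative = 0
for t, n in enumerate(counts):
    cumulative += n
    if cumulative >= total / 2:
        median_time = t
        break

# The final period alone contains just over half of all observers,
# so a random observer most likely lives within one doubling of the end.
print(T - median_time)  # → 0
```

So in this discrete sketch the median observer lands inside the final doubling; with smoother exponential growth the same logic puts the median within one doubling of the end of the period.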

Thanks for a great piece! One thing which may increase the extinction risk is that after the collapse, the remaining economy will be based not on agriculture and manufacturing, but on scavenging the remains of the previous civilisation. The problem with such an economy is that it is constantly shrinking, and it also does not help people learn useful skills; instead, it helps local warlords arise and fight over leftovers. (Example: the economy of the post-Soviet countries declined partly because it was more profitable to sell a factory for scrap metal than to use it for manufactur... (read more)

Toby Ord estimated in the Precipice a one in a thousand probability of
existential risk this century due to climate change, largely due to
locking in a moist greenhouse effect. We would estimate the
feasibility of maintaining industrial civilization (with eventual
colonization of space) in this scenario. The physical space on
Antarctica is adequate for industrial civilization, but alternative
foods produced on other continents would likely be required, such as
foods grown in air-conditioned greenhouses, single-cell protein
powered by renewable hydrogen,  elec

... (read more)

I think that estimating fl should take into account the possibility of interstellar panspermia. Life, appearing once, could be disseminated through the whole galaxy in a few billion years via interstellar comets.

This creates a strong observation selection effect: the galaxies where panspermia is possible will create a billion times more observers than non-panspermia galaxies, and we are almost certainly in such a galaxy. So fl is likely to be 1.

Interestingly, if no God exists, then all possible things should exist, and thus there is no end to our universe. To limit the number of actually existing things, we need some supernatural force which allows only some worlds to exist, but which is not itself part of any of these worlds.

2
sighless
3y
Just happened on this post while doing some research for a script I'm writing. I find it all so interesting. My education involves a lot of theology and this comment makes me think of the famous pendulum swings throughout theological history between pantheism and transcendentalism. Either God is in all things, or they are completely "above" all things and unknowable. The former is more akin to what you're suggesting -- that the things we experience are the infinite, as opposed to the latter which says we experience the limited compared to the unlimited.  I do have a question for you, though. You seem pretty confident that a multiverse exists. What leads you to be so confident in that belief? 

Easily available BCI may fuel a possible epidemic of wireheading, which may result in civilisational decline.

I read on Twitter (so it is not a very good source) that one of the problems of the Three Gorges Dam is cavitation inside the discharge tubes. Cavitation happens when the speed of the water flow is above 10 meters per second: the water creates "vacuum bubbles" which later collapse and create shockwaves able to destroy even the strongest materials. The discharge channels are inside the body of the dam, as we can see in photos, and if there is a problem, it will affect the dam from the inside, without overtopping. Obviously, such channels could be closed, but th... (read more)

If they evolve, say, from cats, they will share the same type-values as all mammals: power, sex, love of children. But their token-values will be different, as they will love not human children but kittens, etc. An advanced non-human civilization may be more similar to ours than we now are to the Ancient Egyptians, as it would have more rational world models.

The article may reflect my immortalist viewpoint that in almost all circumstances it is better to be alive than not.

Future torture is useless and thus unlikely. Look at humanity: as we mature, we tend to care more about other species living on Earth and about minority cultures. Torture for fun or for experiment is only for those who don't know how to get information or pleasure in other ways. It is unlikely that an advanced civilization will deliberately torture humans. Even if resurrected humans do not have full agency, they may have much ... (read more)

2
RobertHarling
4y
Thanks for your response! I definitely see your point on the value of information to the future civilisation. The technology required to reach the moon and find the cache is likely quite different to the level required to resurrect humanity from the cache so the information could still be very valuable. An interesting consideration may be how we value a planet being under human control vs control of this new civilisation. We may think we cannot assume that the new civilisation would be doing valuable things but that a human planet would be quite valuable. This consideration would depend a lot on your moral beliefs. If we don't extrapolate the value of humanity to the value of this new civilisation, we could then ask whether we can extrapolate from how humanity would respond to finding the cache on the moon to how the new civilisation would respond.

We could survive by preserving data about humanity (on the Moon or in other places), which would be found by the next civilisation on Earth, and they would recreate humans (based on our DNA) and our culture.

2
RobertHarling
4y
Thanks for your comment, I found that paper really interesting and it was definitely an idea I'd not considered before. My main two questions would be: 1) What is the main value of humanity being resurrected? - We could inherently value the preservation of humanity and its culture. However, my intuition would be that humanity would be resurrected in small numbers and these humans might not even have very pleasant lives if they're being analysed or experimented on. Furthermore the resurrected humans are likely to have very little agency, being controlled by technologically superior beings. Therefore it would seem unlikely that the resurrected humans could create much value, much less achieve a grand future. 2) How valuable would information on humanity be to a civilisation that had technologically surpassed it? - The civilisation that resurrected humanity would probably be much more technologically advanced than humanity, and might even have its own AI as mentioned in the paper. It would then seem that it must have overcome many of the technological x-risks to reach that point, so information on humanity succumbing to one may not be that useful. It may not be prepared for certain natural x-risks that could have caused human extinction, but these seem much less likely than manmade x-risks. Thanks again for such an interesting paper!

Maybe they are also less detectable, so early warning systems will not catch them at early stages?

3
jia
4y
yup! I tried to make this point in the section on trajectory: "Hypersonic missiles fly lower than ballistic missiles, which delays detection time by ground-based radar.". I'm trying to include the following photo to illustrate the point, but I can't seem to figure out how ): https://imgur.com/a/Ai7Ny7q

There is an idea of a multipandemic, that is, several pandemics running simultaneously. This would significantly increase the probability of extinction.

4
axioman
4y
While I am unsure about how good of an idea it is to map out more plausible scenarios for existential risk from pathogens, I agree with the sentiment that the top-level post seems to focus too narrowly on a specific scenario.

Yes, natural catastrophe probabilities could be presented as frequentist probabilities, but some estimates are based on logical uncertainty about claims like "AGI is possible".

Also, are these probabilities conditioned on "all possible prevention measures are taken"? If yes, they are final probabilities which can't be made lower.

4
MichaelA
4y
In the main sheet, the estimates are all unconditional (unless I made mistakes). They're just people's estimates of the probabilities that things will actually occur. There's a separate sheet for conditional estimates. So presumably people's estimates of the chances these catastrophes occur would be lower conditional on people putting in unexpectedly much effort to solve the problems. Also, here's a relevant quote from The Precipice, which helps contextualise Ord's estimates. He writes that his estimates already:

Great database!

Your estimates are presented as numerical values similar to probabilities. Is it actually probabilities and if yes, are they frequentist probabilities or Bayesian? And more generally: How we can define the "probability of end of the world"?

1
MichaelA
4y
I believe that all the numbers I've shown were probabilities. I'm pretty sure they were always presented in the original source as percentages, decimals, or "1 in [number]". What was the alternative you had in mind? I've seen some related estimates presented like "This can be expected to happen every x years", or "There's an x year return rate". Is that what you were thinking of? Several such estimates are given in Beard et al.'s appendix. But I don't think any are in the database. That wasn't primarily because they were not quite probabilities (or not quite the right type), but rather because they were typically of things like the chance of an asteroid impact of a certain size, rather than direct estimates of the chance of existential catastrophe. (It's possible the asteroid impact wouldn't cause such a catastrophe.) As for whether the probabilities are frequentist or Bayesian, I think many sources weren't explicit about that. But I generally assumed they were meant as Bayesian, though they might have been based on frequentist probabilities. E.g., Toby Ord's estimates of natural risks seem to be based on the frequency in the past, but then they're explicitly about what'll happen in the next 100 years, and they're modified based on whether our other knowledge suggests this period is more or less likely than average to have e.g. an asteroid impact. But to be certain how a given number was meant to be interpreted, one might have to check the original source (which I provide links or references to in the database).

For me, the most important intervention is sleeping on a hard surface. I put 4 layers of yoga mat on my sofa, and it helps a lot.

Some internal air cleaners exist, including ones with UV purification. My friend Denis Odinokov suggested making a system to clean external air, which would consist of a tube with a HEPA filter, a fan, and a UV light source, and which would create positive air pressure inside the apartment. I think it is too difficult to hand-make at home. But it is another business opportunity of our time.

I heard about an infection in Hong Kong spreading via vent tubes.

If I were in a space with many people, I would want the windows to be open. At home, not.

1
Ramiro
4y
I agree that preventing exposure to virions is a priority, but I am concerned with indoor air quality overall, especially if people are staying indoors for long periods: https://en.m.wikipedia.org/wiki/Indoor_air_quality?wprov=sfla1

What are the chances that the virus will flow from the apartment beneath mine into mine during ventilation?

1
Ramiro
4y
Epistemic status: not my expertise, I'm guessing. It's hard for it to flow upwards, and it'll probably disperse a lot (since it doesn't reproduce outside a host, I guess this minimizes the chance of being infected)... but yeah, if your apartment is close to an infected person, there's a chance that the wind will carry virions to your apartment; that's why hospitals are supposed to place infected people according to the airflow. There's probably a trade-off between probability of external contamination vs. time virions stay viable on surfaces in an environment. It seems like, at least for other respiratory infections, for most collective environments, we should be more concerned about the latter. What's your opinion here? Of course, there's a point where the external environment becomes so contaminated (in a hospital, or if everyone in your building is infected) that you better insulate your personal environment as best as you can.

I think that it would be difficult for viruses to become completely radiation-resistant, as it would require a complete overhaul of their makeup: thicker walls, stronger self-repair.

There is a new animal in the room: private pay-to-play clinical trials in third countries. In one case, people have to pay 1 million USD to enrol in an anti-aging clinical trial. Some of them could be scams. But it is an option for customers to take the risk and get the vaccine earlier, and for the company to get volunteers.

EDITED: Andre Watson will now be live about private vaccine creation: https://www.facebook.com/events/516073069307382/

It has now been renamed Porfirich, which is a joke name with some relation to a novel by Dostoevsky. It was created by just one programmer, Michael Grankin.

It is just part of life here. Even when I was at university 20 years ago, there was a student who hated one professor, and he phoned in bomb threats against the whole university every Thursday. They found him eventually. The current bomb threats are either related to the war with Ukraine or to money blackmail.

I once created a causal map of all global risks, starting from the beginning of evolution and the accumulation of biases, up to the end. But it included too many highly connected nodes, which made the map difficult to read. Smaller causal maps with fewer than 10 elements are better adapted for human understanding.

Another good idea from the biosecurity literature is "distancing": any bio threat increases the tendency of people to distance themselves from each other via quarantine, masks, and less travel, and thus R0 will decline, hopefully below 1.

Some Chinese may think that it was a bioweapon used against them and may want retaliation. This is how a nuclear-biological war could start.

Maybe because of the anchoring effect: everyone on Metaculus sees the median prediction before making their own bet and doesn't want to be much different from the group.

It could have a longer tail, but given the high R0, a large part of the human population could be simultaneously ill (or self-isolated) in March-April 2020.

What is your opinion, Dave: could this put food production at risk?

It looks like it almost does not affect children; a person of older age should assign themselves a higher probability of being affected.

Thanks. "a bible of new vacuum" is nice, but should be "bubble".


Thanks. I always try to create a full list of possible solutions, even if some seem very improbable.

I wrote it in English. 90 per cent of my Russian friends can read English, and they probably know most of this news from various Russian media anyway.

One such uncertainty is related to the conditional probability of x-risks and their relative order. Imagine that there is a 90 per cent chance of biological x-risk before 2030, but if it doesn't happen, there is a 90 per cent chance of an AI-related x-risk event between 2030 and 2050.

In that case, the total probability of extinction is 99 per cent, of which 90 points are biological and only 9 are from AI. In other words, more remote risks are "reduced" in expected size by earlier risks which "overshadow" them.
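The arithmetic in this example can be checked directly (numbers taken from the comment):

```python
# Two sequential x-risks: biological before 2030, AI in 2030-2050.
p_bio = 0.9                  # chance of biological x-risk before 2030
p_ai_given_no_bio = 0.9      # chance of AI x-risk, given bio didn't happen

# The AI risk only "gets its chance" if the biological risk doesn't fire.
p_ai = (1 - p_bio) * p_ai_given_no_bio   # unconditional AI x-risk
p_extinction = p_bio + p_ai              # the two events are exclusive

print(round(p_ai, 2), round(p_extinction, 2))  # → 0.09 0.99
```

This is why the later risk contributes only 9 points to the total despite its 90 per cent conditional probability: it is "overshadowed" by the earlier one.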

Another point is that x-risks are by definition one-time events, so the frequentist probability is not applicable to them.

4
SiebeRozendal
4y
Yeah so the first point is what I'm referring to by timelines. And we should all also discount the risk of a particular hazard by the probability of achieving invulnerability.

What is EY doing now? Is he coding, writing fiction or a new book, working on math foundations, or providing general leadership?

Not sure why only the initials are provided. For the sake of clarity to other readers, EY = Eliezer Yudkowsky.

8
Buck
4y
Over the last year, Eliezer has been working on technical alignment research and also trying to rejigger some of his fiction-writing patterns toward short stories.

I think that there are other cost-effective interventions in life extension, including research into geroprotector combinations and brain plastination.

The TAME study got the needed funding from a private donor:

"After closing the final $40m of its required $75m budget with a donation from a private source, the first drug trial directly targeting aging is set to begin at the end of this year, lead researcher Dr Nir Barzilai has revealed."

https://www.fightaging.org/archives/2019/09/tame-trial-for-the-effects-of-metformin-in-humans-to-proceed-this-year/

2
Emanuele_Ascani
5y
Thanks, great info. This post is officially outdated :)

If such a message is a description of a computer and a program for it, it is net bad. Think about a malevolent AI which anyone is able to download from the stars.

Such a viral message is aimed at self-replication and thus will eventually convert Earth into its next node, which will use all our resources to send copies of the message farther.

Simple Darwinian logic implies that such viral messages should numerically dominate among all alien messages, if any exist. I wrote an article, linked below, discussing the idea in detail.

If we know that there are aliens and they are sending some information, everybody will try to download their message. It is an infohazard.

1
Max_Daniel
5y
I agree that information we received from aliens would likely spread widely. So in this sense I agree it would clearly be a potential info hazard. It seems unclear to me whether the effect of such information spreading would be net good or net bad. If you see reasons why it would probably be net bad, I'd be curious to learn about them.

I also have an article which compares different ETI-related risks, now under review at JBIS.

Global Catastrophic Risks Connected with Extra-Terrestrial Intelligence

The latest version was published as proper article in 2018:

The Global Catastrophic Risks Connected with Possibility of Finding Alien AI During SETI

Alexey Turchin. Journal of the British Interplanetary Society 71 (2):71-79 (2018)

Great post. Also, I expected that Meditations on Moloch would be mentioned.

3
Aaron Gertler
5y
For those who haven't read the Meditation, it's a discussion of ways in which competitive pressures push civilizations into situations where almost all of our energy and happiness are eaten up by the scramble for scarce resources. (This is a very brief summary that leaves out a lot of important ideas, and I recommend reading the entire thing, despite its formidable length.)

There is a small probability that we are very wrong about climate sensitivity, and only in that case is climate change an existential risk. The reason for this is not in climate science, but in the anthropic principle: if our climate is very fragile to runaway global warming, we can't observe it, as we find ourselves only on planets where it didn't happen.

To fight runaway global warming we need a different type of geoengineering than for ordinary climate management, as it should be able to provide quicker results for larger climate change... (read more)

Surely, there are larger effect sizes there, but they need much more testing to prove their safety, and such testing is the most expensive part of any trial. There are a few already-safe interventions which could help extend life: besides metformin, green tea and vitamin D.

Even as a trillion-dollar project, fighting aging could still be cost-effective, once we divide the benefit among 10 billion people.

If we are speaking of de novo therapies, the current price of developing just one drug is close to 10 billion dollars, and a comprehensive aging therapy like SENS s... (read more)
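The per-person arithmetic behind the trillion-dollar claim can be sketched like this (the 1-3 life-years gained per person is an illustrative assumption; the cost and beneficiary count are from the comment):

```python
# Rough cost-effectiveness sketch for a trillion-dollar anti-aging project
# whose benefit is shared by 10 billion people.
total_cost = 1e12        # USD (from the comment)
beneficiaries = 1e10     # 10 billion people (from the comment)

cost_per_person = total_cost / beneficiaries
print(cost_per_person)  # → 100.0 USD per person

# If each beneficiary gained 1-3 extra life-years (assumed range),
# the implied cost per life-year would be:
for years in (1, 3):
    print(cost_per_person / years)  # 100.0, then ~33.3 USD per life-year
```

At roughly $33-100 per life-year under these assumptions, even the trillion-dollar version would compare favourably with many accepted health interventions.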

The main question as I see it: is the current spending of 1 billion a year on aging enough to delay aging by 10 years? Aging is a problem of (hyper)exponentially increasing complexity with time. There are probably a few interventions which could give 1-3 years of expected life extension (and aging delay): metformin, vitamin D and green tea; and proper testing of them could cost as little as tens of millions of dollars, as in the proposed TAME study of metformin. This (+ a chance to survive until other life-extending technologies) means much higher cost-effectiveness of such sma... (read more)

2
SarahC
5y
I believe there are larger effect sizes out there than metformin; metformin has a relatively small effect size on mice compared to other lifespan-modifying interventions, and the TAME trial chose metformin (as Barzilai admits) because it's extremely safe and well-studied, not because it's expected to be the best. I agree with you; I don't think aging research would be cost-effective at a trillion dollars of total funding. I expect that's hugely more money than necessary.

In fact, I also tried to explore this idea, which I find crucial, in my Russian book "Structure of the Global Catastrophe", but my attempts to translate it into English didn't work well, so I am now slowly converting its content into articles.

I would add an important link to A Singular Chain of Events by Tonn and MacGregor, as well as the work of Seth Baum on double catastrophes. The idea of "Peak Everything", about the simultaneous depletion of all natural resources, also belongs here, but should be combined with the idea of the Singularity as the idea of a... (read more)

The theoretical basis for a Doomsday weapon was laid by Herman Kahn in "On Thermonuclear War". I scanned the related chapter here: https://www.scribd.com/document/16563514/Herman-Khan-On-Doomsday-machine

The main idea is that it is the ideal defence weapon, as nobody will ever attack a country owning such a device.

The idea of attacking Yellowstone is discussed very often in the Russian blogosphere (like here: https://izborskiy-club.livejournal.com/310579.html), and interest in geophysical weapons was strong in the Soviet Union (details here: http://nvo.n... (read more)

"Normal" nuclear war could be only only a first stage of multistage collapse. However, there are some ideas, how to use exiting nuclear stockpiles to cause more damage and trigger a larger global catastrophe - one is most discussed is nuking a supervolcano, but there are others. In Russian sources is a common place that retaliation attack on US may include attack on the Yellowstone, but I don't know if it is a part of the official doctrine.

A future nuclear war could use even more destructive weapons (which may exist secretly now). Teller worked on a 10-gigaton bomb. Russia is now making the Poseidon large torpedo system, which will probably be equipped with 100 Mt cobalt bombs.

4
kbog
5y
Absurd. Why would anyone do that? I'm sure it isn't. Also, scientifically speaking it doesn't even seem possible to ignite a supervolcano with nukes: https://www.iflscience.com/environment/what-would-happen-if-a-nuclear-bomb-was-dropped-on-yellowstone-supervolcano/ Even the most destructive historical weapons (e.g. Tsar Bomba) have not been deployed. Warheads have gotten smaller over recent decades. No reason for this trend to reverse.

"Normal" global warming is not x-risk, but possible heavy tail connected with something unknown could be. For example, the observed stability of our climate may be just an "anthropic shadow", and, in fact, climate transition to the next hotter meta-stable condition is long overdue, and could be triggered by small human actions.

The next meta-stable state may have a median temperature of 57C, according to this article: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4748134/ ("The climate instability is caused by a positive cloud feedback a... (read more)
