Critics: "‘Long Reflection’ Is Crazy Bad Idea" https://www.overcomingbias.com/2021/10/long-reflection-is-crazy-bad-idea.html
Actually, I am going to write a short post someday called "time machine as existential risk".
Technically, any time travel is possible only if the timeline is branching, but that is fine in the quantum multiverse. However, some changes in the past will be invariants: they will not change the future in a way that causes the grandfather paradox. Such invariants will be loopholes and will have very high measure. UFOs could be such invariants, and this explains their strangeness: only strange things do not change the future in ways that would prevent their own existence.
Thanks for these details!
The report doesn't mention any likelihood of any of these events happening
Maybe you are discussing a different report than the one I read. The one I read says:
In fact there is evidence of eruptions at Kivu in about 1000 year cycles and predictions based on observed accumulation rates (10-14% per year) suggest an eruption in the next 100-200 years, see Report [1].
Buying a house protects you against rent growth, income decline and inflation, as well as against bad landlords and time-consuming rental hunts.
Three arguments in favor of war soon:
How much are electricity, maintenance and property tax for this venue? Historic buildings may require expensive restoration and are subject to complex regulation.
I think it is more interesting to think about other people as rational agents. If bitcoin had grown to 100K, as was widely expected in 2021, SBF's bets would have paid off and he would have become the first trillionaire. He would also have been able to return all the money he took from creditors.
He may have understood that there was only about a 10 per cent chance of becoming a trillionaire, but if he thought that a trillion dollars for preventing x-risks was the only chance to save humanity, then he knew he should bet on this opportunity.
Now we live in a timeline where he lost, and it is tempting to say that he was irrational or mistaken. But maybe he was not.
One thing that he didn't use in his EV calculations is the meta-level impact of failure on the popularity of EA and utilitarianism. Even a relatively small financial failure could have almost infinite negative utility if topics like x-risk prevention become very unpopular.
I am interested to see how wood gasification as an energy source for cars could be bootstrapped in the case of industry collapse.
Another topic: how some cold-tolerant crops from northern regions (fodder beet, rutabaga) could be planted in the South in the case of nuclear winter. I already tried such an experiment remotely (I asked a friend), but it failed.
Also, creating a self-sustaining community on an island would be an interesting experiment.
The relation between CO2 and warming is logarithmic: CO2 must double for each constant increment of temperature, so we need to count the number of doublings of CO2. Assuming that each doubling gives 2C, and that a 22-fold increase in CO2 is about 4.5 doublings (22 ≈ 2^4.5), we get around 9C above the preindustrial level before we reach the tipping point.
In the article the tipping point is above 4C (in the chart) plus 6C from a warmer world = 10C, which gives approximately the same result as I calculated above.
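As a sanity check on the arithmetic above, here is a minimal sketch (assuming, as in my comment, 2C of warming per CO2 doubling):

```python
import math

# Warming is logarithmic in CO2, so count doublings of concentration.
co2_multiple = 22      # ~22-fold CO2 increase considered above
sensitivity = 2.0      # assumed warming per doubling, in C

doublings = math.log2(co2_multiple)   # ≈ 4.46 doublings
warming = sensitivity * doublings     # ≈ 8.9 C above preindustrial
```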
"Transition to a Moist Greenhouse with CO2 and solar forcing" https://www.nature.com/articles/ncomms10627
I think that the difference between the tipping point and the existential temperature should be clarified. The tipping point is the temperature after which a self-sustaining loop of positive feedback starts. In the moisture greenhouse paper it is estimated at +4C, after which the temperature jumps to +40C in a few years. If we take +4C above the preindustrial level, it is 1-3C above the current level.
I didn't try to make any metaphysical claims. I just pointed at a conditional probability: if someone is writing comments on LW, (s)he is (with very high probability) not a non-human animal. Therefore LW commentators are a special, non-random subset of all animals.
I think that two different conjectures are presented here:
"I am animal" - therefore liquid water on the planets etc.
"I am randomly selected from all animals".
The first is true and the second is false.
From a climate point of view, we need to estimate not only the warming but also the speed of warming, as a higher speed gives a higher concentration of methane (and this differential equation has an exponential solution). Anthropogenic global warming is special, as it has a very high speed of CO2 emission that never happened before. We also have the highest-ever accumulation of methane hydrates. We could be past the tipping point but not know it yet, as exponential growth is slow in the beginning.
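A toy sketch of why exponential feedback "is slow in the beginning" (all numbers are hypothetical, chosen only to show the shape of dM/dt = kM):

```python
import math

# Self-reinforcing methane release: dM/dt = k*M has the solution
# M(t) = M0 * exp(k*t), nearly flat at first and explosive later.
M0, k = 1.0, 0.5   # hypothetical initial level and feedback rate

def M(t):
    return M0 * math.exp(k * t)

growth_early = M(1) - M(0)    # ≈ 0.65 per unit time at the start
growth_late = M(10) - M(9)    # ≈ 58.4 per unit time later on
```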
From the SIA counterargument it follows that the anthropic shadow can't be very st...
I used an exponential prior to illustrate the example with a car. For other catastrophes, I take the tail of a normal distribution, where the probability declines very quickly, even hyperexponentially. The math there is more complicated. But it does not affect the main result: if we have an anthropic shadow, the expected survival time is around 0.1 of the past time over a wide range of initial parameters.
And in the situation of anthropic shadow we have very limited information about the type of distribution. Exponential and normal seem to be the two most p...
If you were a random animal, you would be an ant with 99.999999% probability. So either anthropics is totally wrong, or animals is the wrong reference class.
Sandberg recently published a summary of it on Twitter. He said that he used the frequency of near misses to estimate the power of the anthropic shadow and found that near misses were not suppressed during the period of large nuclear stockpiles, which is evidence against the anthropic shadow. I am not sure that this is true, as in earlier times the policy was more risky.
We don't know where the tipping point is, so an uninformed prior gives equal chances for any T between 0 and, say, 20C of additional temperature increase. In that case 2C is 2 times more likely.
But the idea of the anthropic shadow tells us that the tipping point is likely to be at 10 per cent of the whole interval. And for 40C before the moisture greenhouse that is 4C. But, interestingly, the anthropic shadow also tells us that smaller intervals are increasingly unlikely. So a 1C increase is orders of magnitude less likely to cause a catastrophe than a 4C increase.
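One way to read the uniform-prior comparison above is as a sketch (the [0, 20] C range and the factor of 2 come from my comment; the code is an illustration, not a climate model):

```python
# Uninformed prior: the tipping point T is uniform on [0, 20] C.
UPPER = 20.0

def p_tipping_below(x):
    """Prior probability that the tipping point lies below x degrees C."""
    return min(x, UPPER) / UPPER

# Under this prior, a tipping point below 4 C is exactly twice as
# likely as one below 2 C.
ratio = p_tipping_below(4.0) / p_tipping_below(2.0)   # = 2.0
```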
I will illus...
The anthropic shadow applies not to humanity, but to the underlying conditions under which we can survive.
For example, the waves of asteroid bombardment come every 30 million years, but not exactly every 30 mln.
The next wave is normally distributed around 30 with a standard deviation of, say, 1 mln years. If 33 mln years have gone by without it, it means that we are 3 sigmas past the mean.
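The 3-sigma point can be made quantitative with the normal survival function (the 30 Myr mean and 1 Myr deviation are the toy numbers from my comment):

```python
from math import erf, sqrt

def survival_prob(t, mean=30.0, sd=1.0):
    """P(next bombardment wave has not yet arrived by time t),
    assuming arrival time ~ Normal(mean, sd) in Myr (toy model)."""
    cdf = 0.5 * (1.0 + erf((t - mean) / (sd * sqrt(2.0))))
    return 1.0 - cdf

# 33 Myr without a wave puts us 3 sigmas past the mean:
p = survival_prob(33.0)   # ≈ 0.00135, i.e. about 1 in 740
```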
Imagine as a toy example a tensed spring which is described by Hooke's law: F_s = kx.
Imagine also that we can observe only those springs that are tensed far beyond their normal breaking point; this is a model of the anthropic shadow.
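The selection effect in the spring model can be simulated directly (all parameters are made up, and exponentially distributed extensions are chosen only for simplicity):

```python
import random

random.seed(0)
k, x_break = 2.0, 1.0   # spring constant and breaking point (arbitrary units)
samples = [random.expovariate(1.0) for _ in range(100_000)]

# Anthropic-shadow-style selection: we only ever observe springs
# stretched beyond their normal breaking point.
observed = [x for x in samples if x > x_break]

mean_all = sum(samples) / len(samples)          # ≈ 1.0 in the full population
mean_observed = sum(observed) / len(observed)   # ≈ 2.0: biased toward near-breaking forces
```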
From the logarithmic nature of the relation between the remaining life expectancy and the power (probability of past survival) of the anthropic shadow, it follows that for almost any anthropic shadow the remaining life expectancy is between 5 and 20 per cent of the past survival time; let's call it dA.
For a tensed spring it means that its additional lengt...
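The 5-20 per cent claim can be sketched under a simple constant-hazard assumption (my simplifying assumption for illustration, not necessarily the post's exact model):

```python
import math

def remaining_fraction(p_survival):
    """Expected remaining lifetime as a fraction of past survival time T.

    With constant hazard rate lam, P(survive T) = exp(-lam*T), so
    lam = -ln(p)/T and the expected remaining time 1/lam = -T/ln(p)."""
    return -1.0 / math.log(p_survival)

# Across seven orders of magnitude of shadow strength the fraction
# stays in roughly the 5-20 per cent band:
f_weak = remaining_fraction(1e-2)    # ≈ 0.217
f_strong = remaining_fraction(1e-9)  # ≈ 0.048
```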
Yes, agree. Two more points:
Not all of the population counts, but only those who can think about anthropics. A nuclear war will disproportionately destroy cities with universities, so the population of scientists could decline 10 times, while the rest of the population will only be halved.
Anthropic shadow means higher fragility: we underestimate how easy it is to trigger a nuclear war. Escalation is much easier, and accidents are more likely to be misinterpreted.
If there is obvious runaway global warming, like +6C everywhere and growing month by month, people will demand that we "do something" about it and will accept attempts to use nuclear explosions to stop it.
I don't have access now to the document "Nuclear war near misses and anthropic shadows"
If we create an artificial nuclear winter, it could be created by one strong actor unilaterally. No coordination is needed.
Such a nuclear winter may last a few years and naturally resolve back to normality. During this process, two things could happen: either the tipping-point conditions also stop, e.g. the methane leakage ends, or we create a more permanent solution to our problem, like a more stable form of geoengineering.
The artificial nuclear winter doesn't need to be very strong (in the -2 to -3C range), so no major disruption of food production will happen.
I discuss different arguments against the anthropic shadow in my new post; maybe it will be interesting to you: https://forum.effectivealtruism.org/posts/bdSpaB9xj67FPiewN/a-pin-and-a-balloon-anthropic-fragility-increases-chances-of
here https://forum.effectivealtruism.org/posts/bdSpaB9xj67FPiewN/a-pin-and-a-balloon-anthropic-fragility-increases-chances-of
I think, yes. We need a completely new science of "urgent geoengineering": something like creating an artificial nuclear winter via controlled forest fires, which would give us a few years of time to develop better methods or to reverse the dangerous trend.
I tried 6 years ago to create a more detailed plan (it may be obsolete, but that is what I have) here:
http://immortality-roadmap.com/warming3.pdf - it is a chart
and its explanation is here: https://forum.effectivealtruism.org/posts/C3F87C8r6QFXwnwqp/the-map-of-global-warming-prevention
Hi!
I have a different understanding of the moisture greenhouse based on what I've read. You said (oversimplifying) that the threshold for the moisture greenhouse is 67C and the main risk from it is ocean evaporation.
But in my understanding, 67C is the level of the moisture greenhouse climate. According to some models, the climate will be stable at this level. A 67C mean temperature seems almost lethal to humans, but some people could survive on high mountains.
However, the threshold to moisture greenhouse, that is the tipping point after which the ...
There are two translations into Russian. One from 2009, in which Igor participated, is here: https://proza.ru/avtor/unau&book=4#4
But in 2020 a professional translation was made, and it is available here: https://ubq124.wordpress.com/2019/12/22/the-hedonistic-imperative-pdf/
If we know that an extinction event, like an asteroid impact, is soon inevitable, I think it would be reasonable to try to create remnants, perhaps on the Moon, that could provide possible aliens with information about humanity or even help to resurrect human beings.
I also have an article about survival on islands and have been thinking about surviving in caves. The topic of survival on ships is a really interesting one, and I hope to turn to it some day, but now I am working on other problems.
Hi!
I got the following message after pressing Apply:
"The form Future Fund: Application for Funding is no longer accepting responses.
Try contacting the owner of the form if you think this is a mistake." at https://docs.google.com/forms/d/e/1FAIpQLScp_pbbqS2OeecQlo_perE6Vz8mcKBivtRAfBSfKyDicZkEiQ/closedform
Is your program closed, or is it an error on my side? I wanted to apply on the topic "Infrastructure to recover after catastrophes".
Check my site about it: http://digital-immortality-now.com/
Or my paper: Digital Immortality: Theory and Protocol for Indirect Mind Uploading
And there is a group in FB about life-logging as life extension where a few EA participate: https://www.facebook.com/groups/1271481189729828
To collect all that information we would need superintelligent AI, and actually we don't need all vibrations, but only the most relevant pieces of data: the data which is capable of predicting human behaviour. Such data could be collected from texts, photos, DNA and historical simulations, but it is better to invest in personal life-logging to increase one's chances of being resurrected.
I wrote two articles about resurrection: You Only Live Twice: A Computer Simulation of the Past Could be Used for Technological Resurrection
and
I am serious about the resurrection of the dead; there are several ways, including running a simulation of the whole history of mankind and filling the knowledge gaps with random noise which, thanks to Everett, will be correct in one of the branches. I explained this idea in a longer article: You Only Live Twice: A Computer Simulation of the Past Could be Used for Technological Resurrection
I need to clarify my views: I want to save humans first, and after that save all animals, from the closest to humans to the more remote. By "saving" I mean resurrection of the dead, of course. I am for the resurrection of the mammoth and for cryonics for pets. Such a framework will eventually save everyone, so in the limit it converges with other approaches to saving animals.
But "saving humans first" gives us leverage, because we will have a more powerful civilisation with a higher capacity to do good. If humans go extinct now, animals will ...
The transition from "good" to "wellbeing" seems rather innocent, but it opens the way to a rather popular line of reasoning: that we should care only about the number of happy observer-moments, without caring whose moments these are. Extrapolating, we stop caring about real humans and start caring about possible animals. In other words, it opens the way to a pure utilitarian-open-individualist bonanza, where the value of human life and individuality is lost and the badness of death is ignored. The last point is the most important for me, as I view irreversible mor...
The problem with (1) is that it assumes that the fuzzy set of well-being has a subset of "real goodness" inside it which we just don't know how to define correctly. But it could be that the real goodness is outside well-being. In my view, reaching radical life extension and death reversal is more important than well-being, if well-being is understood as a comfortable healthy life.
The fact that an organisation is doing good assumes that some concept of good exists in it. And we can't do good effectively without measuring it, which requires an even str...
It also happens in personal life.