
This map is part of the “Map of Natural Risks,” which is in turn part of the map “Typology of Global Risks.”

 

The main ideas of the map

 

1. The danger posed by asteroids is diminishing as technology advances: primarily because we will be able to show that no dangerous asteroids threaten us within the next 100 years, and secondarily because of our growing ability to deflect them. Observation is much more important than deflection.

2. An asteroid defense system could yield negative utility, as powerful deflection technologies, or the asteroids themselves, could be used as weapons. For example, a gigaton-scale nuclear bomb in orbit might be used for asteroid deflection once in a million years, but for war perhaps once in a thousand years or more often.

3. But maybe we live in a period of intense bombardment, as some scientists (Napier) propose, with 100-200 times the background intensity of impacts. If this is true, the implication is roughly a 1 per cent chance of a 1 km body impact in the next 100 years.
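As a rough sanity check of that figure, the arithmetic can be sketched as follows. The background recurrence interval (1.5 million years between 1 km impacts) and the mid-range enhancement factor of 150 are illustrative assumptions, not numbers from the map itself:

```python
# Order-of-magnitude check of the "~1% per century" claim.
# All inputs below are illustrative assumptions.
background_interval_years = 1.5e6   # assumed mean time between 1 km impacts
enhancement = 150                   # mid-range of the claimed 100-200x factor
horizon_years = 100

rate_per_year = enhancement / background_interval_years
# Linear approximation is fine while the probability stays small.
p_century = rate_per_year * horizon_years

print(f"Implied chance of a 1 km impact in {horizon_years} years: {p_century:.1%}")
# -> Implied chance of a 1 km impact in 100 years: 1.0%
```

With these assumed inputs the claim reproduces exactly; pushing the enhancement to 200 or shortening the background interval would raise it proportionally.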

So, we should try to prove that we don’t live in such a period. (My thought: it may not be a coincidence that we live in such a period, as intense climate change may have helped to speed up human evolution.) Some scientists have suggested at least three possible 1 km sized impacts in the last 10k years, including the Clovis comet (13 ky ago), whose debris may have created the 500,000 oval structures known as Carolina bays; the Mahuika crater (1400 AD); and a crater in the Indian Ocean dated to around 5000 BC.

It is reasonable to be skeptical about any catastrophism-style claims, as their prior probability is small, but factually disproving such claims could yield the biggest reduction in expected asteroid risk. A lot of new counterevidence has been found, but the scientific discussion about recent bombardment episodes still continues.

It is also interesting to note that historical annals describe many events which look like impacts, some involving deaths, in one case as many as 10,000 in China (around the same time as the Mahuika crater).

4. The biggest risk comes from dark comets, which are the most difficult objects to observe. We should find a way to estimate their real density and to locate them before they enter the inner Solar system. Napier suggested that 100k years ago a Centaur (a comet-like body on an unstable orbit near Saturn) about 100 km in size entered the inner Solar system and later broke into several smaller bodies. Most of them are now dark comets, nearly invisible objects, but known remnants may include comet Encke, the Tunguska body and the Taurid meteor shower. Some of them could have produced impacts, as claimed by the Holocene Impacts working group. The largest dark comets should still be visible, and recent infrared observations have limited their possible numbers.

5. Centaurs appear to be the most dangerous class of asteroid-like objects. They orbit between Jupiter and Neptune and are therefore often perturbed by the giant planets’ gravitational fields, which can send them into the inner solar system. Centaurs typically end their journey by colliding with planets or the Sun; their orbits are stable for only around 10 million years. Their remoteness from the Sun and Earth also makes them harder to observe than main-belt asteroids.

But if a Centaur is disturbed by a close approach to Saturn, it can take only about 5 years to reach the inner Solar system and Earth on an almost free-fall trajectory. Many Centaurs are around 100 km in size and are in fact icy bodies, which will disintegrate in the inner Solar system into smaller comets. According to Napier, a Centaur enters the inner Solar system roughly every 100k years. The Centaur Chiron will likely become a short-period comet within the next 3 million years, with a 0.12 probability of becoming an Earth-orbit-crossing comet. Its diameter is about 200 km, and on breakup it would create a great deal of debris that could pose a risk to Earth.

6. Space exploration based on self-replicating robots could eliminate the risk of accidental impacts within the next 100 years or less (though such robots come with other risks of their own).

7. Currently we have technologies to deflect only small objects, and only with very long warning times; such objects do not pose existential risks.

8. In general, human extinction risk from asteroid impacts is overestimated compared to other risks. (This holds if we are not living in a period of bombardment; but even if we are, that in itself demonstrates the resilience of the biosphere and humanity to recent impacts, since the suggested impacts of kilometer-sized bodies did not change the course of human evolution.) An extinction-level asteroid has a probability of approximately 1 in 300,000 in the next 100 years, based on the past frequency of impacts by 10 km bodies, the last of which happened 35 million years ago. Paleohumans survived the Eltanin impact of a 1-4 km body 2 million years ago, and quickly repopulated after other similar impacts.
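The 1-in-300,000 figure follows from simple rate arithmetic, if one assumes a mean recurrence interval of roughly 30 million years for 10 km impacts (an illustrative value consistent with the last such impact being about 35 million years ago):

```python
# Century-scale probability of an extinction-level (10 km) impact,
# assuming a mean recurrence interval of 30 million years (illustrative).
recurrence_years = 30e6
horizon_years = 100

p = horizon_years / recurrence_years  # linear approximation, p << 1
print(f"P(10 km impact in next {horizon_years} years) = 1 in {round(1 / p):,}")
# -> P(10 km impact in next 100 years) = 1 in 300,000
```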

9. Budget allocation across different x-risks is non-proportional. The overestimation of asteroid risk becomes especially clear if we compare the accepted probability of collision and the budgets proposed for it (on the order of billions of dollars) with budgets for preventing other x-risks (around 10 million USD total for preventing unfriendly AI). But these proposed anti-asteroid budgets are not actually available: the B612 Foundation was only able to raise around 1-2 million dollars a year, about the same as MIRI’s budget.

10. Smaller impacts could result in civilizational collapse, which may be a self-sustaining, multi-level process. See Hanson’s article about social collapse and human extinction.

11. Anthropic shadow could result in underestimation of impact risk, especially the risk of very large impacts. I believe such underestimation could be no more than a factor of 10.

12. If a 100 km body fell onto the Sun, it would produce a flash roughly 1000 times the Sun’s luminosity for about one second, which would cause fires and skin burns for humans on the day side of the Earth. Some scientists think the 775 AD event can be explained by an even smaller impact.
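A back-of-the-envelope kinetic-energy estimate supports the order of magnitude. The inputs below are my own assumptions for illustration: an icy body of density 1000 kg/m³ arriving at roughly the Sun's surface escape speed (~618 km/s), with the energy released over about one second:

```python
import math

# Order-of-magnitude check for a 100 km icy body falling onto the Sun.
# All inputs below are illustrative assumptions.
radius_m = 50e3      # 100 km diameter body
density = 1000.0     # kg/m^3, comet-like ice
v_impact = 6.18e5    # m/s, roughly the solar surface escape velocity
L_sun = 3.8e26       # W, solar luminosity

mass = (4 / 3) * math.pi * radius_m**3 * density
kinetic_energy = 0.5 * mass * v_impact**2      # joules
flash_ratio = kinetic_energy / L_sun           # if released over ~1 second

print(f"Energy ~ {kinetic_energy:.1e} J, flash ~ {flash_ratio:.0f}x solar luminosity")
```

This crude estimate gives a flash a few hundred times the solar luminosity, the same order of magnitude as the quoted factor of 1000; the exact factor depends on the assumed density and on how quickly the energy is radiated.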

Structure of the map

The horizontal axis of the map shows increasingly dangerous objects (with some obvious caveats: for example, comet sizes vary, and the risk from tail debris depends on whether or not a Centaur comet has entered the inner solar system in the last 100k years and broken up; see Napier).

 

Asteroid sizes are roughly divided into three categories: local effects, civilizational-collapse level, and extinction level. But the actual size boundaries between these categories are debatable, and depend on speed, composition, impact angle, impact location and many unknowns about the consequences.

 

The vertical axis of the map shows our possible efforts in impact prevention. I don’t delve into the details of the many projects, which are excellently described on Wikipedia.

 

I would appreciate any comments that will help to improve the map.

 

I recommend viewing the map as a PDF, since the JPG below is compressed to fit the page.

The pdf of the map is here: http://immortality-roadmap.com/asteroid.pdf

 

 

 

 

 

 

Comments



Very nice! I would add another stage of defense: alternate foods. If we were actually prepared with these, then I don't think we would get civilizational collapse for a 1 km diameter impact, and maybe not even for 10 km.

I am now working on an article about submarines as possible refuges in case of a global catastrophe. One idea I had for food provision is filtering seawater to collect plankton, the same way whales do. What do you think about the feasibility of such an approach?

That might work, though people would probably prefer fishing. But after a 10 km diameter impact, there probably would not be much fish or plankton.

But maybe some bacteria could still be suspended in the water, as well as some organics?

Very interesting map. Lots of good information.

Upvoted. I also thought this might be relevant.

https://www.gwern.net/Colder%20Wars

Thanks for writing this. One point that you missed is that it is possible that, once we develop the technology to easily move the orbit of asteroids, the asteroids themselves may be used as weapons. Put another way, if we can move an asteroid out of an Earth-intersecting orbit, we can move it into one, and perhaps even in a way that targets a specific country or city. Arguably, this would be more likely to occur than a natural asteroid impact.

I read a good paper on this but unfortunately I don't have access to my drive currently and can't recall the name.

Thanks - just saw this comment now. I didn't really miss the idea, but decided not to include it here.
