
In this post I will quantify the risk of natural asteroid/comet impacts and summarise the argument made by Carl Sagan and Steven Ostro that developing asteroid deflection technology could be a net harm, as it would enable us to accidentally or intentionally deflect harmless asteroids into Earth. They argue that this increased risk likely outweighs the natural risk of asteroid impacts.

Cross-posted from my blog here.

A video version of this is available here.

Introduction

Approximately 66 million years ago, a body roughly 10 km across struck Earth and was likely one of the main contributors to the extinction of many species at the time. Bodies 5 km or larger impact Earth on average every 20 million years (one might say we are overdue for one, but then one wouldn’t understand statistics). Asteroids 1 km or larger impact Earth every 500,000 years on average. Impacts from smaller bodies, which can still do considerable local damage, occur much more frequently (10 m wide bodies impact Earth on average every 10 years). It seems reasonable to say that only the first category (>~5 km) poses an existential threat; however, many smaller bodies pose major catastrophic threats*.
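
As a side note on what these recurrence intervals imply, here is a minimal Python sketch of the chance of seeing at least one impact of each class within a century. It treats impacts as a Poisson process, which is a simplifying assumption; the intervals are the averages quoted above.

```python
import math

# Average recurrence intervals quoted above (years per impact).
recurrence_years = {
    ">= 10 m": 10,
    ">= 1 km": 500_000,
    ">= 5 km": 20_000_000,
}

horizon_years = 100  # look-ahead window

for size, interval in recurrence_years.items():
    rate = 1 / interval  # expected impacts per year
    # Under the Poisson assumption, P(at least one impact) = 1 - exp(-rate * horizon).
    p = 1 - math.exp(-rate * horizon_years)
    print(f"{size}: ~{p:.4%} chance of at least one impact in {horizon_years} years")
```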

Given the likelihood of an asteroid impact (from here I use the word asteroid instead of asteroid and/or comet for sake of brevity), some argue that further improving detection and deflection technology is critical. Matheny (2007) estimates that, because an extinction event would also destroy all future human generations, asteroid detection/deflection research and development could save a human life-year for $2.50 (US), even though such events are improbable. Asteroid impacts are not thought to be the most pressing existential threat (artificial intelligence and global pandemics, for example, are usually ranked as more pressing), and yet mitigation already seems to have a better return on investment than the best now-centric human charities (though not non-human charities – I am largely ignoring non-humans here for simplicity and sake of argument).
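
To show the shape of that kind of estimate, here is a minimal sketch of the underlying arithmetic: programme cost divided by the expected future life-years saved. All of the inputs below are round numbers I have invented for illustration, not Matheny's actual figures.

```python
# Illustrative round numbers only -- not Matheny's actual inputs.
programme_cost_usd = 20e9            # hypothetical detection/deflection programme cost
extinction_risk_reduced = 1e-6       # hypothetical reduction in extinction probability
future_life_years_at_stake = 1.6e16  # hypothetical life-years lost if extinction occurs

expected_life_years_saved = extinction_risk_reduced * future_life_years_at_stake
cost_per_life_year = programme_cost_usd / expected_life_years_saved
print(f"~${cost_per_life_year:.2f} per life-year saved")  # ~$1.25 with these inputs
```

The striking feature is that even a tiny probability reduction multiplies against an enormous number of future life-years, which is what drives the low cost-per-life-year figure.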

The purpose of this article is to explore a depressing cautionary note in the field of asteroid impact mitigation. As we improve our ability to detect and (especially) deflect asteroids with an Earth-intersecting orbit away from Earth, we also improve our ability to deflect asteroids without an Earth-intersecting orbit into Earth. This idea was first explored by Steven Ostro and Carl Sagan, and I will summarise their argument below.

Asteroid deflection as a DURC

Dual use research of concern (DURC) refers to research in the life sciences that, while intended for public benefit, could also be repurposed to cause public harm. One prominent example is disease and contagion research (it can improve disease control, but can also be used to spread disease more effectively, either accidentally or maliciously). I will argue here that the DURC concept can and should be applied to any technology with this kind of dual-use potential.

Ostro and Sagan (1998) proposed that asteroid impacts could act as a double-edged explanation for the Fermi paradox (why don’t we see any evidence of extraterrestrial civilisations?). The argument goes as follows: those species that don’t develop asteroid deflection technology eventually go extinct due to some large impact, while those that do eventually go extinct because they accidentally or maliciously deflect a large asteroid into their planet. This has since been termed the ‘deflection dilemma’.

The question arises: does the likelihood of a large impact increase, rather than decrease, as asteroid deflection technology is developed? The most pressing existential and catastrophic threats today seem to be those created by technology (artificial intelligence, nuclear weapons, global pandemics, anthropogenic global warming) rather than natural events (asteroid impacts, supervolcanoes, gamma ray bursts). Humanity has survived for millions of years (depending on how you define humanity), yet the last 70 years have seen the advent of nuclear weapons and other technologies that could plausibly cause a catastrophe at any time. It therefore seems plausible that the bigger asteroid risk will be the one caused by technology, not the natural one.

Ostro and Sagan (1994) argued that the development of asteroid deflection technology was, at the time of writing, premature given the track record of global politics – an assessment that presumably still holds today.

Who would maliciously deflect an asteroid?

Ignoring accidental deflection, which might occur when an asteroid is moved to an Earth or Lunar orbit for research or mining purposes (see this now-scrapped proposal to bring a small asteroid into lunar orbit), there are two categories of actors that might maliciously deflect such a body: state actors and terrorist groups.

A state actor might be incentivised to authorise an asteroid strike on an enemy or potential enemy in situations where it wouldn’t necessarily authorise a nuclear strike or conventional invasion. For example, consider an asteroid around 20 m in diameter. Near-Earth asteroids of around this size are often detected only several hours or days before passing between Earth and the Moon. If a state actor could secretly identify such an asteroid before the global community does, it could feasibly send a mission to alter the asteroid’s orbit to intersect Earth, in such a way that the change would not be detected until much too late. Assuming the state actor did its job well enough, no one could even guess that the impact was caused by malicious intent, let alone lay blame on the perpetrator.

An asteroid of this size would be expected to release around 30 times the energy of the nuclear bomb dropped on Hiroshima in WWII.
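
As a rough sanity check on that figure, here is a back-of-the-envelope kinetic-energy calculation. The density and impact velocity below are typical values I have assumed, not measured properties of any particular body.

```python
import math

# Assumed typical values for a small stony near-Earth asteroid.
diameter_m = 20
density_kg_m3 = 3000   # assumed stony composition
velocity_m_s = 17_000  # assumed typical impact velocity

volume_m3 = (4 / 3) * math.pi * (diameter_m / 2) ** 3
mass_kg = density_kg_m3 * volume_m3
kinetic_energy_j = 0.5 * mass_kg * velocity_m_s ** 2

kt_tnt = kinetic_energy_j / 4.184e12  # joules per kiloton of TNT
hiroshima_kt = 15                     # approximate Hiroshima yield
print(f"~{kt_tnt:.0f} kt TNT, roughly {kt_tnt / hiroshima_kt:.0f}x Hiroshima")
```

With these assumptions the result lands around 400–450 kt, close to the ~30x Hiroshima figure above (and in the same ballpark as the 2013 Chelyabinsk airburst from a body of similar size).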

We can temper the likelihood of this scenario by noting that it is unlikely a state actor could covertly discover and track a new asteroid without any other party discovering it, given that transparent organisations are already working on tracking near-Earth objects. However, is it possible that a government organisation (e.g. NASA) could be ordered not to share information about a new asteroid?

What to do about this problem

Even if we don’t directly develop asteroid deflection technology, as other technologies progress (e.g. launching payloads becomes cheaper, propulsion systems become more efficient), it will become easier over time anyway. Other space weapons, such as anti-satellite weapons (direct ascent kinetic kill projectiles or directed energy weapons), space stored nuclear weapons, and kinetic bombardment (rods from god) will all become easier with general improvements in relevant technology.

The question arises – even if a small group of people were to decide that developing asteroid deflection technology causes more harm than good, what can they meaningfully do about it? The idea that developing asteroid deflection technology is good is so entrenched in popular opinion that it seems like arguing for less or no spending in the area might be a bad idea. This seems like a similar situation to where AI safety researchers find themselves. Advocating for less funding and development of AI seems relatively intractable, so they instead work on solutions to make AI safer. Another similar example is that of pandemics research – it has obvious benefits in building resilience to natural pandemics, but may also enable a malicious or accidental outbreak of an engineered pathogen.

Final thoughts

I have not considered the possibility of altering the orbit of an extinction-class body (~10 km diameter or greater) into an Earth-intersecting orbit. While the damage from this would obviously be much greater, even ignoring considerations about the future generations that would be lost, it would be significantly harder to alter the orbit of such a body. Also, we believe we have discovered all of the bodies of this size in near-Earth orbits (Huebner et al 2009), so it would be much harder to do this covertly and without risking retaliation (e.g. mutually assured destruction via nuclear weapons). The possibility of altering the orbit of such bodies should still be considered, as they pose an existential risk, which smaller bodies do not.

I have also chosen largely not to focus on other types of space weapons (see this book for an overview of space weapons generally) for similar reasons – their potential for dual use is less clear, in theory making it harder to justify setting up such technologies in space under a benign pretext. It would also be more difficult to make the use of such weapons look like an accident.

Future work

A cost-benefit analysis that examines the pros and cons of developing asteroid deflection technology in a rigorous and numerical way should be a high priority. Such an analysis would compare the expected damage from natural asteroid impacts with the increased risk from developing the technology (and possibly examine the opportunity cost of what could otherwise be done with the R&D funding). An example of such an analysis exists in the space of pandemics research, which would be a good starting point. I believe it is currently unclear whether the benefits outweigh the risks (though I lean towards the risks outweighing the benefits – an unfortunate conclusion for a PhD candidate researching asteroid exploration and deflection to come to).
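
To illustrate the shape such an analysis might take, here is a minimal expected-value sketch. Every input below is a placeholder I have invented; a real analysis would need probability distributions, discounting, non-fatal harms, and far better-grounded inputs.

```python
# All inputs are invented placeholders, not estimates from the literature.
p_natural_impact_per_year = 1 / 500_000   # annual chance of a >= 1 km natural impact
deaths_if_natural_impact = 1e8            # hypothetical death toll from such an impact
fraction_averted_by_tech = 0.9            # hypothetical deflection success rate

p_malicious_deflection_per_year = 1 / 1_000_000  # added risk from the tech existing
deaths_if_malicious = 1e8                        # hypothetical death toll if misused

benefit = p_natural_impact_per_year * deaths_if_natural_impact * fraction_averted_by_tech
cost = p_malicious_deflection_per_year * deaths_if_malicious

print(f"expected deaths averted per year: {benefit:.1f}")
print(f"expected deaths caused per year:  {cost:.1f}")
print(f"net expected benefit per year:    {benefit - cost:.1f}")
```

Note how sensitive the sign of the result is to the misuse probability, which is exactly the quantity we understand least.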

Research regarding the technical feasibility of deflecting an asteroid into a specific target (e.g. a city) should also be examined, though such analysis comes with drawbacks (see the section on information hazards below).

We should also consider policy and international cooperation solutions that can be set in place today to reduce the likelihood of accidental and malicious asteroid deflection occurring.

Information hazard disclaimer

An information hazard is “a risk that arises from the dissemination or the potential dissemination of (true) information that may cause harm or enable some agent to cause harm.” Much of the research into the risk side of DURCs could be considered an information hazard. For example, a paper that demonstrates how easy it might be to engineer and release an advanced pathogen with the intent of raising concern could make it easier for someone to do just that. It even seems plausible that publishing such a paper could cause more harm than good. Similar research into asteroids as a DURC would have the same issue (indeed, this post itself could be an information hazard).

* An ‘existential threat’ typically refers to an event that could kill either all human life, or all life in general. A ‘catastrophic threat’ refers to an event that would cause substantial damage and suffering, but wouldn’t be expected to kill all humans; humanity would be expected to eventually rebuild.

Comments
Ignoring accidental deflection, which might occur when an asteroid is moved to an Earth or Lunar orbit for research or mining purposes

I haven't seen this mentioned in other discussions of asteroid risk (e.g. I don't think Ord mentions it in The Precipice), but I don't think it should be ignored so quickly. If states/corporations develop technology to transfer asteroids to Earth orbit, then this seems like it would represent an equivalent dual-use concern. Indeed, it may be even riskier than just developing tools for deflection, as activities like mining could provide 'cover' for maliciously aiming an asteroid at Earth. On the positive side, similar tools can probably be used for both orbital transfer and deflection, so the risky technology may also be its own counter-technology.

We were pretty close to carrying out an asteroid redirect mission too (ARM); it was only cancelled in the last few years. It was for a small asteroid (~a few metres across), but something like this could certainly happen sooner than most people suspect.

Vision of Earth fellows Kyle Laskowski and Ben Harack had a poster session on this topic at EA Global San Francisco 2019: https://www.visionofearth.org/wp-content/uploads/2019/07/Vision-of-Earth-Asteroid-Manipulation-Poster.pdf

They were also working on a paper on the topic.

Neat, I'll have to get in touch, thanks.

(Just a tangential clarification) You write:

* An ‘existential threat’ typically refers to an event that could kill either all human life, or all life in general.

That describes an extinction risk, which is one type of existential risk, but not the only type. Here are two of the most prominent definitions of existential risk:

An existential risk is one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development (Bostrom, emphasis added)

And:

An existential risk is a risk that threatens the destruction of humanity’s longterm potential. (Ord, The Precipice, emphasis added)

(See here for more details. And here for definitions of "global catastrophic risks".)

The idea that developing asteroid deflection technology is good is so entrenched in popular opinion that it seems like arguing for less or no spending in the area might be a bad idea. This seems like a similar situation to where AI safety researchers find themselves. Advocating for less funding and development of AI seems relatively intractable, so they instead work on solutions to make AI safer. Another similar example is that of pandemics research – it has obvious benefits in building resilience to natural pandemics, but may also enable a malicious or accidental outbreak of an engineered pathogen.

I'm not sure about this. I don't think I've ever heard about the idea that asteroid deflection technology would be good (or even about such technology at all) outside of EA. In contrast, potential benefits from AI are discussed widely, as are potential benefits from advanced medicine (and then to a lesser extent biotech advancements, and then maybe slightly pandemics research).

So I'm not sure if there is even widespread awareness of asteroid deflection technology, let alone entrenched views that it'd be good. This might mean pushing for differential progress in relation to this tech would be more tractable than that paragraph implies.

When I say that the idea is entrenched in popular opinion, I'm mostly referring to people in the space science/engineering fields – whether as workers, researchers or enthusiasts. This is anecdotal, based on my experience as a PhD candidate in space science. In the broader public, I think you'd be right that people think about it much less; however, the researchers and the policy makers are the ones you'd need to convince for something like this, in my view.

Ah, that makes sense, then. And I'd also guess that researchers and policy makers are the main people that would need to be convinced.

But that might also be partly because the general public probably doesn't think about this much or have a very strong/solidified opinion; that might make it easier for researchers and policy makers to act in either direction without worrying about popular opinion, and mean this can be a case of pulling the rope sideways. So influencing the development of asteroid deflection technology might still be more tractable in that particular regard than influencing AI development, since there's a smaller set of minds needing changing. (Though I'd still prioritise AI anyway due to the seemingly much greater probability of extreme outcomes there.)

I should also caveat that I don't know much at all about the asteroid deflection space.

Interesting post. The key points raised make sense to me.

I'll share a few quick thoughts in separate comments.

Firstly, this issue was briefly discussed in The Precipice by Toby Ord. Though I'm not sure if that discussion contained any important insights that this post missed.

Secondly, something that seemed (in my non-expert opinion) slightly odd about the discussion in The Precipice, and that also seems applicable to this post, is the apparent focus just on how the benefits from being able to deflect asteroids away from the Earth compare to the risks from being able to deflect asteroids towards the Earth, without also discussing the risks from a proliferation of additional nuclear explosives and related technologies. I.e., perhaps the explosives developed for use in deflection could just be used "directly" on targets on the Earth?

It's possible that there's a reason to not talk much about that side of things. Though recently I discovered that GCRI have a paper on that matter, which looks interesting, though I've only read the blog post summary.

Asteroid impacts are not thought to be the most pressing existential threat (artificial intelligence and global pandemics, for example, are usually ranked as more pressing)

I agree with that. I've also created a database of existential risk estimates, which can give a sense of various people's views on which risks are greatest and how large the differences are. It also includes a few estimates relevant to asteroid risk.
