This is a link-post for an explainer of NASA's Double Asteroid Redirection Test (DART). It may be one of the most prominent existential risk reduction activities in the public sphere (the explainer even describes the likelihood of asteroid collisions large enough to threaten civilisation), yet I haven't seen much discussion of it here.

DART will reach a two-asteroid system on the evening of September 26. It has been travelling for around 10 months and is now around 11 million kilometres from Earth. The asteroids pose no threat to Earth in any way. DART will autonomously target the smaller asteroid (Dimorphos, around 160 m in diameter) and collide with it at around 6 km/s (roughly 22,000 km/h).

The results should inform the feasibility of future asteroid-redirection efforts. As noted in 'The Precipice', though, while such a capability could reduce the risk from natural asteroid impacts, it may pose a larger risk itself if malicious actors use it to redirect asteroids towards Earth.



DC

As noted in 'The Precipice', though, while such a capability could reduce the risk from natural asteroid impacts, it may pose a larger risk itself if malicious actors use it to redirect asteroids towards Earth.

I am very confident that the dual-use risk of improved asteroid deflection technology is, in general, much larger than the risk of a random asteroid hitting us, and that this experiment has therefore likely made the world worse off (with a bit less confidence, because maybe it's still easier to deflect asteroids defensively than offensively, and this experiment improved that defensive capability?). This is possibly my favorite example of a crucial consideration, and also, more speculatively, evidence that the sum of all x-risk reduction efforts taken together could be net-harmful (I'd give that a 5-25% chance?).

This is much more of a problem (and an overwhelming one) for risks/opportunities that are microscopic compared to others. Baseline asteroid/comet risk is more like 1 in a billion; there is much less opportunity for dual-use concerns to dominate with 1% or 10% risks.

To use asteroid deflection offensively, you’d have to:

  • Have motivation to destroy Earth in an indiscriminate fashion
  • Have asteroid deflection be within your technological and organizational capabilities
  • Have asteroid deflection be your easiest method of mass destruction
  • Avoid having your plans to hit Earth with an asteroid detected and disrupted in advance of the launch of your asteroid deflection weapon
  • Redirect the asteroid, probably through a large change in trajectory, onto a very precise collision course with Earth
  • Avoid having that trajectory subsequently observed and disrupted by the asteroid observation and deflection infrastructure now operating

By contrast, to have asteroid deflection offer a benefit given current information, the requirements are:

  • There has to be an asteroid on course to hit Earth that we haven't already detected
  • The asteroid has to be of a size class we can build and launch a DART at in time to nudge it slightly off course

A second form of benefit might be:

  • Successfully operating a form of X-risk infrastructure gives a concrete example of something we already do to prevent X-risk and creates a path for governments to sponsor more such projects.

As has been noted previously, the implicit flattish hierarchy of different points in pro-con lists can sometimes cause people to make bad decisions.

[Image. Source: 80,000 Hours]

Some entirely made-up numbers (for the next 50 years):

  • Have motivation to destroy Earth in an indiscriminate fashion (~1)
  • Have asteroid deflection be within your technological and organizational capabilities (~1/10)
  • Have asteroid deflection be your easiest method of mass destruction (~1/7)
  • [Added] Have naturally occurring asteroids on close enough trajectories that deflecting them towards Earth is a realistic proposition (~1/20?)
    • I think I have the least resilience here.
  • Avoid having your plans to hit Earth with an asteroid detected and disrupted in advance of the launch of your asteroid deflection weapon (~1/50)
  • Redirect the asteroid, probably through a large change in trajectory, onto a very precise collision course with Earth (~1/8?)
  • Avoid having that trajectory subsequently observed and disrupted by the asteroid observation and deflection infrastructure now operating (~1/10)
    • I think this is not independent of the previous three points; otherwise it'd be a lower probability.

~=1/5,600,000, or 1 in 5.6 × 10^6. However, I think these numbers are a bit of an understatement of the total risk. This is because when I was making up numbers earlier, I was imagining the single actor most likely to be able to pull this off in the next 50 years. But anthropogenic risks are disjunctive: multiple actors can attempt the same idea.
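As a sanity check, here's a minimal sketch of the multiplication above (assuming the factors are independent; the short labels are just my shorthand for the bullet points, and every value is one of the made-up numbers from the list):

```python
# Offensive-scenario Fermi product, assuming the factors are independent.
# All values are the made-up numbers from the list above.
offensive_factors = {
    "motivated omnicidal actor":           1,
    "deflection within capabilities":      1 / 10,
    "easiest method of mass destruction":  1 / 7,
    "suitable natural asteroid available": 1 / 20,
    "plans not detected and disrupted":    1 / 50,
    "precise redirection succeeds":        1 / 8,
    "trajectory not disrupted afterwards": 1 / 10,
}

p_offensive = 1.0
for p in offensive_factors.values():
    p_offensive *= p

print(f"~1 in {round(1 / p_offensive):,}")  # ~1 in 5,600,000
```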

By contrast, to have asteroid deflection offer a benefit given current information, the requirements are:

  • There has to be an asteroid on course to hit Earth that we haven't already detected (~1/1,000,000,000)
    • (Note these are just numbers I pulled from Shulman's comment below)
  • The asteroid has to be of a size class we can build and launch a DART at in time to nudge it slightly off course (~1/2)
  • [Added] Just-in-time asteroid deflection without prior experiments is not sufficient (~1/2)

~=1/4,000,000,000, or 1 in 4 × 10^9.
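The same sketch for the benefit side, plus the gap between the two products (again, all inputs are the made-up numbers above; the gap comes out to roughly 3 orders of magnitude):

```python
import math

# Defensive-benefit Fermi product, using the made-up numbers above.
p_undetected_impactor = 1 / 1_000_000_000  # undetected asteroid on a collision course
p_deflectable_in_time = 1 / 2              # size class a DART can nudge in time
p_experiments_needed  = 1 / 2              # just-in-time deflection alone wouldn't suffice

p_defensive = p_undetected_impactor * p_deflectable_in_time * p_experiments_needed
print(f"~1 in {round(1 / p_defensive):,}")  # ~1 in 4,000,000,000

# Gap between the offensive and defensive estimates, in orders of magnitude.
p_offensive = 1 / 5_600_000
print(f"~{math.log10(p_offensive / p_defensive):.1f} OOMs")  # ~2.9
```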

So overall I'm skeptical that the first-order effects of deflecting natural asteroid risks are larger than the first-order effects of anthropogenic asteroid risks.

A second form of benefit might be:

  • Successfully operating a form of X-risk infrastructure gives a concrete example of something we already do to prevent X-risk and creates a path for governments to sponsor more such projects.

I agree with this. If the first-order effects are small, it's easy for second-order effects to dominate (assuming the second-order effects come from an entirely different channel than the first-order effects).

I appreciate the effort to put some numbers into this Fermi format! I'm not sure whether you intend the numbers, or the result, to represent your beliefs about the relative risks and benefits of this program. If they are representative, then I have a couple points to make.

I'm surprised you think there's a 10% chance that an actor who wants to destroy the Earth this century will have asteroid deflection within their technological capabilities. I'd assign this closer to a 1/1000 probability. The DART mission cost $324.5 million and was carried out by the world's economic and technological superpower, and its team page lists hundreds of names, all of whom I am sure are highly qualified experts in one thing or another.

Maybe North Korea could get there, and might want to use this as a second-strike alternative if they can't successfully develop a nuclear program? But we're spying on them like mad, and I fully expect that the testing required to make such a weapon work would draw the same harsh sanctions as their other military efforts.

Due to the difficulty of precision targeting, I'd downweight the likelihood that asteroid deflection is their easiest method of mass destruction from 1/7 to 1/1000. An asteroid of the size targeted by DART would take out hundreds of square miles (New York is 302 square miles; Earth's surface area is 197 million square miles). Targeting a high-population area puts even steeper demands on precision and gives defenders greater opportunity to mitigate damage by deflection to a lower-impact zone. It seems to me there are much easier ways for a terrorist to take out New York City than asteroid deflection.
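To put a rough number on the precision demand, here's a back-of-the-envelope fraction using only the square-mile figures above (a sketch, not a targeting model):

```python
# Fraction of Earth's surface occupied by a New York-sized target,
# using the square-mile figures quoted above.
nyc_sq_mi   = 302
earth_sq_mi = 197_000_000

print(f"{nyc_sq_mi / earth_sq_mi:.1e}")  # ~1.5e-06 of the surface
```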

Since your estimates for the two scenarios differ by only 3 OOMs, I think these two factors form the crux of our disagreement. I'd also note that this Fermi estimate no doubt has several conceptual shortcomings, and it would probably be useful to come up with an improved way to structure it.

Thanks for the engagement! Re:

I appreciate the effort to put some numbers into this Fermi format! I'm not sure whether you intend the numbers, or the result, to represent your beliefs about the relative risks and benefits of this program.

Those are meant to be my actual (possibly unstable) beliefs, with the very important caveats that (a) this is not a field I've thought about much at all, and (b) the numbers are pulled entirely from intuition, not even from very simple models or basic online research.

Also, NASA apparently puts the odds of a collision with Bennu, which is a few times the diameter of Dimorphos, at about 1/1750 over the next three centuries. That's not quite the same timeframe, and this is just a quick Google search result; a more authoritative number would be helpful. Given AI risk and the pace of tech change, I think it makes sense to prioritize asteroid impacts this century much more highly than impacts in three centuries.
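For what it's worth, one crude way to put that three-century figure on the 50-year horizon used earlier (a sketch that simply assumes the hazard is spread uniformly over the period, which the real close-approach dynamics may not support):

```python
# Crude rescaling of NASA's ~1/1750 Bennu impact odds (over roughly
# three centuries) to a 50-year horizon, assuming a uniform hazard rate.
p_300yr = 1 / 1750
p_50yr  = p_300yr * (50 / 300)

print(f"~1 in {round(1 / p_50yr):,} over 50 years")  # ~1 in 10,500
```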

What I take from this mission is not so much 

"Great, now we are a bit safer from asteroids hitting the earth." 

but more like

"Great, NASA and the American public think existential risks like asteroids are worth taking seriously. The success of this mission might make it a bit easier to convince people that, one, there are other existential risks worth taking seriously and, two, that we can similarly reduce those risks through policy and technology innovation. Maybe now other existential risk reduction efforts will become more politically palatable, now that we can point to the success of this mission".

[Edit: here's a relevant article that supports my point: "Nasa’s mission gives hope we can defend our planet but human nature and technology present risks of their own"  https://on.ft.com/3LNySAM]

For more on this risk, see this interesting recent book: Daniel Deudney, Dark Skies: Space Expansionism, Planetary Geopolitics, and the Ends of Humanity (June 2020).

https://academic-oup-com.ezp.lib.cam.ac.uk/book/33656?login=true 

https://www.amazon.co.uk/Dark-Skies-Expansionism-Planetary-Geopolitics/dp/0190903341 

I really don't think dual use is in any way worrisome if humanity has several institutions capable of asteroid deflection, and only a tiny bit worrisome if there is just one. Quoting a comment I gave to finm in his post on asteroid risks:

I don't think the dual-use should worry us much. I cannot estimate how much harder it is in general to divert an asteroid toward Earth than away from it, but I can confidently say that the factor is several orders of magnitude more than 10x [the figure finm gives as an example in his text]; the precision needed would be staggering. In addition, to divert an asteroid toward Earth, one needs an asteroid, and the closer the better. The fact that the risk of a big-enough asteroid hitting the Earth is so low indicates that there are not too many candidates. This factor has to be taken into account as well.

But even if diverting an asteroid towards the Earth were only 10 times harder than diverting it away from the Earth, dual use need not be a big concern. To actually divert an asteroid towards the Earth, one does not only need to divert it; one also needs to prevent the rest of humanity from diverting it away in time, which is much easier. So, as long as a small group of independent institutions is able and ready to divert asteroids, dual use does not seem a concern to me.

I've been keeping tabs on this since mid-August, when the relevant Metaculus question was created.

The community and I (97%, given NASA's track record of success) agree that DART is unlikely to fail to make an impact. Here are some useful Wikipedia links that aided me with the prediction: (Asteroid impact avoidance, Asteroid impact prediction, Near-Earth object (NEO), Potentially hazardous object).

There are roughly 3 hours remaining until impact (https://dart.jhuapl.edu/); it seems unlikely that something goes awry, and I am firmly hoping for success.  

While I'm unfamiliar with the state of research on asteroid redirection or trophy systems for NEOs, DART seems like a major step in the right direction, one where humanity faces a lower level of risk from the collision of asteroids, comets, and other celestial objects with Earth.

Here's a livestream - impact should be at 7:16 pm ET https://www.youtube.com/watch?v=-6Z1E0mW2ag

Impact successful - so exciting!
