ABSTRACT

Scientific and technological progress might change people's capabilities or incentives in ways that would destabilize civilization. For example, advances in DIY biohacking tools might make it easy for anybody with basic training in biology to kill millions; novel military technologies could trigger arms races in which whoever strikes first has a decisive advantage; or some economically advantageous process may be invented that produces disastrous negative global externalities that are hard to regulate. This paper introduces the concept of a vulnerable world: roughly, one in which there is some level of technological development at which civilization almost certainly gets devastated by default, i.e. unless it has exited the "semi-anarchic default condition". Several counterfactual historical and speculative future vulnerabilities are analyzed and arranged into a typology. A general ability to stabilize a vulnerable world would require greatly amplified capacities for preventive policing and global governance. The vulnerable world hypothesis thus offers a new perspective from which to evaluate the risk-benefit balance of developments towards ubiquitous surveillance or a unipolar world order.

Comments (6)

[anonymous] · 4y

I only recently got around to reading this, but I'm very glad I did! I definitely recommend reading the full paper, but for those interested, there is also this TED talk where you can get the gist of it.

In any case, the paper made me wonder about the possibility of having a sort of 'worst case scenario bunker system' capable of rebuilding society. I imagine such a discussion was not included in the paper because a bunker system wouldn't help protect against a "devastation of civilization" (death of at least 15% of the world population) and would only be relevant in near-extinction-level catastrophes. Even so, I am curious to hear what people think of this idea and would love to read your thoughts.

To elaborate for clarity: when I say 'worst case scenario bunker system', I obviously don't know specifically how this would work, but I am imagining some kind of massive bunker system, perhaps in space if not underground, with enough resources to keep people alive as well as extensive archives of information on rebuilding society. There would then have to be some system for choosing which people retreat to the bunkers during a disaster, so that they would be qualified and well equipped to rebuild society. It may also prove wise to maintain a rotation of people (and other species) living in this safe haven so that it is guaranteed to be ready at any given time. Of course, this all seems extremely difficult to pull off, but the stakes also seem extremely high. And perhaps if society were rebuilt from such a bunker after a global catastrophe, it could be rebuilt in a way that avoids the semi-anarchic default condition from the start.

Switzerland seems to have a bunker and archive system - link.

I'd recommend the full paper to anyone who's unsure about reading it. Bostrom's writing is as engaging as always, and he uses a couple of catchy phrases that could become common in the field of X-risk ("semi-anarchic default condition", "the apocalyptic residual").

I think one particular section should be reconsidered before this goes from working paper to publication (if that's the final goal). Bostrom's "high-tech panopticon" segment, which suggests a way we could protect ourselves from individuals destroying civilization if "everybody is fitted with a 'freedom tag'" and monitored by "freedom officers", is clearly meant as an extreme hypothetical rather than a policy recommendation. But it seems destined to be picked up and reported as though it were, in fact, a policy recommendation, especially because he goes on to estimate the program's costs and ponder its potential benefits.

I appreciate the idea, and the section it's part of (which also includes considerations about "preemptive assassination" of people who seem likely to attempt a "city-destroying act"), as the sort of thing that really is worth considering if the future goes in certain directions. But it seems worthwhile to find a way to write about the same ideas without putting them in a format that will be quite so easy for the media to attack. (Making the section a bit drier and more boring might do the trick; the panopticon example is highly vivid and visual.)

Actually, the paper has already been published in Global Policy (and in a very similar form to the one linked above): https://onlinelibrary.wiley.com/doi/full/10.1111/1758-5899.12718.


I have similar worries about making the high-tech panopticon too sticky a meme. I've updated slightly against this being a problem, since there has been very little reporting on the paper so far. The only coverage I've seen is this article from the Financial Times: https://www.ft.com/content/dda3537e-01de-11e9-99df-6183d3002ee1. It reports on the paper in a very nuanced way.

Your link is broken, but it looks like the paper came out in September 2019, well after my comment (though my reservations still apply if those sections of the paper were unchanged).

Thanks for the update on media reporting! Vox also did a long piece on the working-paper version in Future Perfect, but with the nuance and understanding of EA that one would expect from Kelsey Piper.

I think the central "drawing balls from an urn" metaphor implies a more deterministic situation than the one we are actually in – that is, it implies that if technological progress continues, if we keep drawing balls from the urn, then at some point we will draw a black ball, and civilizational devastation is basically inevitable. (Note that Nick Bostrom isn't actually saying this, but it's an easy conclusion to draw from the simplified metaphor.) I'm worried that taking the metaphor at face value will push people towards restricting scientific development more broadly than is warranted.

I offer a modification of the metaphor that relates to differential technological development. (In the middle of the paper, Bostrom already proposes a few modifications of the metaphor based on differential technological development, but not the following one.) Whenever we draw a ball out of the urn, it affects the color of the other balls remaining in the urn. Importantly, some of the white balls we draw (e.g., defensive technologies) lighten the color of any grey or black balls left in the urn. A concrete example would be the cumulative advances in medicine over the past century, which have lowered the risk of a human-caused global pandemic. Therefore, continuing to draw balls out of the urn doesn't inevitably lead to civilizational disaster – as long as we are sufficiently discriminating in favor of those white balls that have a risk-lowering effect.
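To make that concrete, here is a toy Monte Carlo sketch of the modified urn. This is my own illustration rather than anything from the paper, and all of the parameters (the per-draw black-ball probability, the chance of drawing a defensive white ball, and the "lightening" factor) are made-up assumptions.

```python
# Toy Monte Carlo sketch of the "modified urn" (my own illustration, not from
# the paper). Each draw is a new technology: with probability p_black we draw
# a black ball (civilizational devastation); with probability p_defensive we
# draw a risk-lowering white ball that multiplies p_black by a factor < 1.
# All parameter values are made up.
import random

def prob_of_devastation(n_draws=1000, p_black=0.001, p_defensive=0.05,
                        lightening=0.9, n_trials=5000, seed=0):
    """Estimate P(at least one black ball is drawn within n_draws draws)."""
    rng = random.Random(seed)
    devastated = 0
    for _ in range(n_trials):
        p = p_black
        for _ in range(n_draws):
            r = rng.random()
            if r < p:                      # drew a black ball
                devastated += 1
                break
            elif r < p + p_defensive:      # drew a risk-lowering white ball
                p *= lightening            # remaining black balls get lighter
    return devastated / n_trials

if __name__ == "__main__":
    # Static urn: the black-ball probability never changes, so the cumulative
    # risk compounds toward 1 as the number of draws grows.
    print("static urn:  ", prob_of_devastation(lightening=1.0))
    # Modified urn: defensive white balls lighten the remaining black balls,
    # so the cumulative risk levels off well below 1.
    print("modified urn:", prob_of_devastation(lightening=0.9))
```

The exact numbers don't matter; the point is only that a risk-lowering feedback from defensive draws changes the long-run limit, which is the intuition behind differentially favoring defensive technologies.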
