  • (This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, and I haven’t checked everything. I was encouraged to post something).
  • (Written in my personal capacity, reflecting only my own, underdeveloped views).
  • (Commenting and feedback guidelines: I'm going with the default -- please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. Feedback on the structure of the argument is also appreciated.)

My epistemic status: doubt. Shallow ethical speculation, including attempts to consider different ethical perspectives on these questions, both closer to and further from my own.


If I had my way: great qualities for existential risk reduction options

We know what we would like the perfect response to an existential risk to look like. If we could wave a wand, it would be great to have some ideal strategy that manages to simultaneously be: 

  • functionally ideal:
    • effective (significantly reduces the risks if successful, ideally permanently), 
    • reliable (high chance of success), 
    • technically feasible, 
    • politically viable, 
    • low-cost, 
    • safe (little to no downside risk -- i.e. graceful failure), 
    • robust (effective, reliable, feasible, viable and safe across many possible future scenarios), 
    • [...]
  • ethically ideal:
    • pluralistically ethical (no serious moral costs or rights violations entailed by intervention, under a wide variety of moral views), 
    • impartial (everyone is saved by its success; no one bears disproportionate costs of implementing the strategy) / 'paretotopian' (everyone is left better off, or at least no one is made badly worse off), 
    • widely accepted (everyone (?) agrees to the strategy's deployment, either in active practice (e.g. after open democratic deliberation or participation), passive practice (e.g. everyone has been notified or informed about the strategy), or at least in principle (we cannot come up with objections from any extant political or ethical positions, after extensive red-teaming)), 
    • choice-preserving (does not lead to value lock-in and/or entail leaving a strong ethical fingerprint on the future), 
    • [...]
  • etc, etc.

But it may be tragically likely that interventions that combine every single one of these traits are just not on the table. To be clear, I think many proposed strategies for reducing existential risk at least aim at hitting many or all of these criteria. But such strategies won't be the only actions pursued around extreme risks.

What if the only feasible strategies to respond to existential risks--or the strategies that will most likely be pursued by other actors in response to existential risk--are all, to some extent, imperfect, flawed or 'bad'? 

Three 'bad' options and their moral dilemmas

In particular, I worry about at least three (possible or likely) classes of strategies that could be considered in response to existential risks or global catastrophes: (1) non-universal escape hatches or partial shields; (2) unilateral high-risk solutions; (3) strongly politically or ethically partisan solutions. 

All three plausibly constitute '(somewhat) bad' options. I don't want to say that these strategies should not be pursued (e.g. they may still be 'least-bad', given their likely alternatives; or 'acceptably bad', given an evaluation of the likely benefits versus costs). I also don't want to claim that we should not analyze these strategies (especially if they are likely to be adopted by some people in the world). 

But I do believe that all three create moral dilemmas or tradeoffs that I am uncomfortable with--and risky 'failures' that could be entailed by taking one or another view on whether to use them. The existential risk community is not alone in facing these dilemmas--many other communities, movements, or actors that are interested in or linked to existential risks face them, whether they realize it or not. But these dilemmas do keep me up at night, and I think we should wrestle with them more.

Non-universal escape hatches & partial shields

Many existential risks (e.g. AI risk) are relatively all-or-nothing: failure would make any defense or holdout implausible. No one survives. 

Others, however (e.g. nuclear war, extreme biorisks), are or appear potentially shieldable or escapable. People have proposed various strategies or avenues--from island refuges to submarines, bunkers, or off-planet refuges--by which we could ensure that at least some people survive a catastrophe, if outright mitigation efforts fail. 

In this case, I worry about two converse ethical risks:

Existential desertion/escapism

  • Existential desertion / escapism: taking an escape route from an existential risk, while leaving many others behind to suffer it; or implementing a defensive policy that only shields a small part of today's world from the risk:
    • (weak) escape hatch examples: common public debates over 'preppers', 'elite survivalists', or 'abandon earth' space programs, etc.
    • (weak) partial shield examples: climate change/biorisk plans that settle for minimizing the impact on, or exposure of, wealthy nations; vaccine hoarding; etc.
    • This position seems most ethically challenging when
      • the escape hatch is out of reach for many who want it (e.g. it's costly and requires many resources); 
      • your escape hatch or partial shield reroutes significant resources which would have been critical for the collective mitigation effort; 
      • your visible defection erodes common trust and morale (signaling distrust in the viability of joint efforts), and derails any coalitional projects to pursue or implement collective mitigation responses; 
      • you are disproportionately responsible for generating the existential risk in the first place, or you could otherwise take significant steps to mitigate it (e.g. nuclear state leader's bunkers); 
      • you are a really bad person, and you have such tools of value lock-in available to you that the continued survival of your descendant line seems net-bad;  
      • your ethical framework focuses on the risk's impacts on today's world and/or people over future people; 
    • This position seems less ethically challenging when
      • the rest of the world still does not take seriously the imminent risk you're concerned about in spite of your repeated attempts to warn them; 
      • the resources used for the local escape hatch are minor or non-fungible; 
      • (under some moral views:) the people taking the escape hatch are less responsible for generating or sustaining the risk in the first place; 
      • you sponsor the escape hatch for other people, or set up random escape hatch lotteries;
      • you adopt a total utilitarian or longtermist perspective where there is just a staggering premium on survival by anyone;

Existential conscription

  • existential conscription (i.e. 'the existential crab bucket'): the reverse of existential desertion / escapism: refusing to allow some people to take escape or protection strategies available to them (strategies that would ensure human survival even if collective risk mitigation projects fail). For instance, because you want to ensure everyone directs their energies/resources at the collective project of ensuring everyone alive is saved (even if at much lower odds), and/or you disapprove of people going it alone and free-riding;
    • examples: objections to (existential-risk-motivated) space colonization on grounds of the expected 'moral hazard' of abandoning the earth; (fictional example: 'Escapism' and space travel as a crime against humanity in Cixin Liu's The Dark Forest)
    • (very) weak example: early COVID-19 policies to prevent the public from buying face masks, in order to reserve resources for public health workers; 
    • This position seems most ethically challenging when:  
      • the world hasn't actually gotten a solid collective mitigation program underway to address the risk;
      • mitigation of the existential risk in question isn't strongly dependent on mass public action (or on the actions of the people aiming to leave) 
      • you disapprove of individual responses to collective problems on the basis of political aesthetics; 
      • you selectively judge and/or block escapist attempts primarily on political grounds, etc.
    • This position seems less ethically challenging when
      • there is strong reason to believe that the collective mitigation of the existential risk is possible, but is a weakest-link problem, one that also faces significant free-rider concerns; 
      • successful resolution of the existential risk is strongly dependent on the marginal resources that would be leaked away (either directly, or if it caused a cascading collapse in coalitional trust); 
      • ...? 

 

Unilateral high-risk solutions

Some possible solutions to existential risks might have a large risk of (non-existential but significant) catastrophe if their implementation fails or things do not go entirely to plan. 

This again creates two linked morally complex positions:

Existential wagering/roulette

  • existential wagering (or: 'existential roulette'?): taking some strategy that might preserve all of humankind from an imminent existential risk, but which risks the lives and/or welfare of a significant number of people alive today if not everything goes to plan.
    • In some sense, this is the inverse of something like the 'nuclear peace' gambit (which [under charitable interpretations] stakes existential or globally catastrophic downside consequences in order to protect large parts of the world population from the likely threat of frequent great power wars).
    • Examples: Yudkowsky: "if you can get a powerful AGI that carries out some pivotal superhuman engineering task, with a less than fifty percent chan[c]e of killing more than one billion people, I'd take it" 
    • Uncertain examples: 
      • proposals for geoengineering which might pose major ecological risks if there is scientific error, or if deployed without the proper governance framework in place; 
      • proposals for extensive societal reorientation or moral revolution that would require historically rare levels of global buy-in and sacrifice (e.g. degrowth; half-earth socialism, ...); especially if these approaches misjudge crucial political conditions (e.g. they may not benefit from a gradual global transformation in ecological awareness, but rather face sustained global political opposition and conflict); (low confidence in this analysis);
      • What would not qualify: Andreas Malm's How To Blow Up A Pipeline, as downside risks of direct actions seem unlikely to be sufficiently catastrophic; 
    • This position seems most ethically challenging if: 
      • your moral framework emphasises the prevention of harm (or exposure to risk);
      • the decision to deploy the risky intervention is taken by people very shielded from the harms of its failure (or success);
      • the intervention's failure modes create unequal or differential impacts on different (demographic/political) groups;
      • the intervention's global impacts aren't a hypothetical or a risk, but a guaranteed cost; 
      • the existential catastrophe is still some time off, so there is no need to spin the wheel quite yet;
      • ...
    • This position seems (slightly) less ethically challenging if: 
      • after significant exploration and red-teaming, you've not yet identified any closely adjacent other plans that would avoid this risk;
      • you have (extremely) strong reason to expect the disaster is extremely close, and no one has other, better solutions in reserve; (we're at the game's final moveset, so it might as well be this);
      • exposure to the intervention's failure modes is significant but mostly globally random; 
      • if exposure to the costs is not random, you've at least followed a process to solicit buy-in from the at-risk parties (i.e. their agreement that they are willing to face the risks); and/or the decision to go ahead is taken by these people;
      • ... 

Existential gridlock/veto

  • existential gridlock / veto: the (partial) reverse of existential wagering: refusing to allow others to deploy any solution to an imminent existential risk because you perceive it could pose a nonzero risk to some part of the world;
    • examples: 
      • opposition to geoengineering (grounded in downside-risk concerns rather than moral hazard concerns); ...
      • ?
    • This position seems most ethically challenging if:
      • There are no other feasible plans for addressing the existential risk;
      • The costs are to your parochial political interests rather than to sacred values, or to sacred values rather than the lives of large populations (uncertain about this, as in some moral systems certain sacred values would take precedence);
      • your opposition is strongly grounded in status quo bias, and doesn't pass a simple reversal test (e.g. if you lived in a world where the intervention was already deployed, you would argue against its cessation);
      • you impose an impossibly high burden of proof for establishing safety on the intervention's proponents -- i.e. you haven't specified (to yourself or others) the risk threshold at which you'd accept deployment -- such that it is unlikely that any intervention, however thoroughly vetted, will ever be deployed.
    • This position seems (slightly) less ethically challenging if: 
      • we have strong reasons to expect that an intervention's proponents are likely to overestimate or overstate the success chances of their suggestions, and to understate their failure-mode impacts; so both they and we should be protected from running these risks;

 

Politically or ethically partisan solutions

Some possible solutions to existential risks might have the trait that, whether deliberately or inadvertently, and whether in their means or their ends, they end up favouring or strengthening some political actors, or some ethical frameworks, over others.

Such approaches can give rise to three morally complex responses or risks:

Existential co-option

  • existential co-option: adopting interventions that might help mitigate an existential risk, but which end up empowering certain actors. These actors are not otherwise bound to our values, and they may end up using either the threat of the risk, or the established mitigation policies, in ways that are unrelated to mitigating the risk itself, and instead aimed at e.g. social control or value lock-in; 
    • Examples: some proposals for surveillance systems;
    • [I got confused & tired here, and hope to work this out some other day]
    • [...]

Existential blackmail

  • existential blackmail (e.g. 'if we can't have the world, no one can--certainly not our political opponents'): a version of existential veto where you refuse to allow some other party to attempt to solve an existential risk for everyone, unless they do so in a way that fits your preferred means, and on terms that ensure the saved world reflects your own ideals or values. 
    • Examples: 
    • [I got confused & tired here, and hope to work this out some other day]
    • ...

Existential spite

  • existential spite (i.e. 'existential I-told-you-so'): the passive version of existential blackmail: a lack of interest in, or motivation for, exploring potential solutions to existential risks because, as things stand under the status quo, your political adversaries/outgroup are likely to get the credit for any solution, and/or likely to inherit the resulting world and future, either of which is anathema;
    • Examples: left as an exercise to the reader;

Conclusion

Writing this draft is a bit sobering. My point is not that the above solutions should all be avoided, nor that the 'approaches' above are entirely illegitimate responses or views. And the correct approach may not simply lie in the middle of any of them. Ultimately, the key point is that none of these positions are easy, obvious, unproblematic, or safe responses in the face of existential risk. That should trouble us and provoke thought, not just in the EA community, but also in the academic existential risk / Global Catastrophic Risk community, and in any other political movements that realise their stake in society's historical trajectory around extreme risks.  

I want better solutions. But while we wait, I want better ways of thinking about bad ones.


Comments

EDIT: Made edits to this one day later, for clarity and to add one paragraph.

People muddle through life, adopting imperfect solutions routinely, iterating through some approaches, abandoning others, learning about new possibilities, sometimes seeing solutions clearly only in hindsight. The point is, if they take a problem seriously, then they take steps to solve it. Others can evaluate the effort or offer assistance, trying to redirect their efforts or save them from the consequences of a poor solution. Or they might be sold on ineffective solutions by others, dooming them to a worse path if they go along. Rarely do problems just solve themselves. Whatever the solution, it starts with taking the problem seriously.

Despite doom-scrolling and predicting the end of the world, taking a problem seriously is not that common. Preppers do it, some politicians do it (not that many), and plenty of think tanks, rich people, and private orgs do it. From there you see solutions to existential risk, regardless of the quality of the solutions.

Between not taking a problem seriously and doing the selfish thing, if the problem is taken seriously at all, there's not a lot of wiggle room for altruistic, locally or globally effective solutions. For example, while people are still offering solutions to the climate crisis, it's been a crisis since the 1980s, and it's been under analysis since then. The solutions are not that different now, and that is actually worrying, because the situation has worsened, both in reality and in its implications, since the 1980s. Despite that, you can see developed countries don't take it seriously, there's corruption and shilling at all levels of policy and strategy around it, and the most widely cited sources mostly get ignored (i.e., the IPCC).

If the world's countries are still in denial in 20-30 years, when GAST (global average surface temperature) is at +2°C, when we've seen many novel extreme weather events, and when we know to expect far worse near-term consequences for the biosphere than now, then we know that few feasible solutions to our extinction crisis will remain. As a scenario exercise, you can examine those remaining feasible ones for any that seem worthwhile. You'll be disappointed.

I took my time deciding that saving humanity was not, per se, an ethical requirement of being human. I consider existing people to have moral status, but I don't see the ethicality in guaranteeing lots of future people are conceived. In seeking moral clarity, I have reduced the solutions to existential risk to those that seem desirable to me. I don't consider it spiteful, more just disinterested.

EDIT: I also came to realize that life is not a party. I don't mean that I was once some lazy party animal, or that life should be split between parties and hard work. I mean that life lived properly wouldn't offer many sustainable opportunities for frivolous fun if one is not already fairly happy and safe in a supportive community. If you take away vices, a poison to culture, society, and psychological health, frivolous fun becomes harder to create on demand. This shifts the burden for life satisfaction to maintenance of desirable circumstances on a daily basis, or increases the demand for effort to gain those circumstances. And that can be a lot of effort over a long time period. I suspect that living in such a state of difficulty, when tasked with finding or maintaining happiness without vices (for example, without recreational drugs of various types, modern distractions, and novelties that we consider harmless), could be part of the solution to humanity's general problems as a species. However, I suspect anyone from today's society would find that alternative society undesirable, and so would not form such a goal as their future. If that is so, then we ignore the only option that seems available to us: the boring option of cleaning up our behavior and ending our indulgence in our vices. And avoiding the actual solution is a typical response to a lot of problems. "Of course we could stop problem X, but then we'd have to stop doing Y, and we like doing Y, so let's pretend Y doesn't matter and solve problem X some other way!" Which typically doesn't work. And so our pathway remains ambiguous, risking dangers from problem X so that we can keep doing vice Y.

Also, the disaster that has been the human response to existential crises has shown me that being a human who guarantees a future for humanity is actually really difficult; that combination of altruism and selfishness is not sustainable in all circumstances as a societal current. With that conclusion, I can form a different model of how and whether society can and should survive long-term, one built around a society that consistently works toward its own survival and well-being.

In my belief, that society has to:

  • be small
  • know its ecological niche and keep it
  • show a strong altruistic streak, among its own people and toward other species
  • have plenty of humility about its own future
  • stay on Earth in a single location
  • have no interest in a diaspora
  • not see itself as deserving to spread or grow
  • maintain its own population size without difficulty or conflict
  • have overcome humanity's worst ills (for example, drug abuse, misogyny, slavery, child abuse, epidemics, and war)
  • carry on despite setbacks and burdens

For me, it's less that humanity must survive so it can develop, and more that humanity must develop so that it is worthwhile if it survives. Technology is an essential part of that, mainly in its ability to raise life satisfaction and overcome humanity's ills.

Obviously, if people overcome simple denial, they'll pursue solutions of some sort. A focus on available solutions brings the discussion back to whether the solutions are worthwhile.

I have only skimmed this, but it seems quite good and I want more things like it on the forum. Positive feedback! 

These tragic tradeoffs also worry me deeply. Existential wagering is to me one of the more worrying, but it also seems possible to avoid. However, the tradeoff between existential co-option and blackmail seems particularly hard to avoid for AI.
