Ethics of existential risk

Discuss the topic on this page. Here is the place to ask questions and propose changes.

I think this is probably worth citing here, but I've only read the abstract myself: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2807377

Weak disagree. FWIW, there are lots of good cites in the endnotes to chapter 2 of The Precipice (pp. 305–12), and in Moynihan's X-Risk.

By the way, there's a short section on this in my old Existential Risk Wikipedia draft. Maybe there's some useful stuff to incorporate into this.

I tried to incorporate parts of that section, and in the process reorganized and expanded the article. Feel free to edit anything that seems inadequate.

I wrote: 

However, it is important to distinguish between the question of whether a given moral perspective would see x-risk reduction as net positive and the question of whether that moral perspective would prioritise x-risk reduction, and this distinction is not always made (see Daniel, 2020). 

Maybe the wiki shouldn't say "it is important" for something that could be contested? But I think it'd be pretty hard for a reasonable person to contest this. And I did give a source for someone else saying it (though it's just a Forum comment).

I wrote:

In the effective altruism community, this [neartermist perspective] is perhaps the most commonly discussed non-longtermist moral argument for x-risk reduction. 

This is based on my own non-systematic observations, rather than systematic data collection or some other source. I could be wrong. Or maybe it's obvious enough that I'm right that "perhaps" should be removed.

I also wrote:

Meanwhile, the "cosmic significance" moral foundation has received some attention among cosmologists and physicists concerned about extinction risk. 

This is something I recall Ord saying, maybe in multiple places, but I can't recall where. I think maybe his talk at the recent SERI conference? Ideally, someone will find a source, add it, and make the sentence more specific.

Things it'd be good to add:

  • Discussion of other moral perspectives, beyond the 5 Ord mentions
    • E.g., self-regarding, pure time discounting, person-affecting
    • In some cases, these involve "carving things up" differently to how Ord does, rather than just adding categories (e.g., pure time discounting and person-affecting could both be specific things that lead to a basically "present"-focused perspective)
  • Discussion of distinct moral perspectives given longtermism
    • e.g. longtermism plus an asymmetric person-affecting view
    • e.g., longtermism plus other suffering-focused ethical views
  • Discussion of how these different perspectives have different implications for which risks to prioritise
    • Perhaps most notably extinction risk vs reducing risks of worse-than-extinction futures vs improving quality of life conditional on survival
  • Discussion of how this might affect prioritisation of GCRs
    • E.g., I think the present- and civilizational-virtue-focused arguments apply to GCR reduction, but the past- and cosmic-significance-focused arguments probably don't
    • If the current name and scope are kept, it should be made clear that this is a distinct category being discussed as a sort of aside (i.e., that GCRs and x-risks are not the same thing)

I'm not sure what the ideal scope and name for this tag would be. Pablo and I discussed that at some length here.

Here are some possible scopes:

  1. Basically all the main arguments for or against prioritising reducing existential risks
    • E.g., the 5 moral perspectives Ord discusses in The Precipice, focused on the past, the present, the future, civilizational virtues, and cosmic significance
    • Also things like pure time discounting and population ethics
    • But also empirical, epistemological, or decision-theoretic arguments
      • E.g., the idea that the future might be net negative and that this might push against extinction risk reduction (though not necessarily against reducing some other existential risks), or the epistemic challenge to longtermism
  2. Basically all the main moral perspectives that might support or oppose prioritising reducing x-risks
    • So not including empirical, epistemological, or decision-theoretic arguments, except maybe in passing
  3. Basically all the main non-longtermist moral perspectives that might support or oppose reducing x-risks
    • E.g., 4 of the 5 moral perspectives Ord discusses (all except the one focused on the future)
    • The reason we might want to have this scope is that it's already pretty easy to find discussion of the longtermist arguments for prioritising reducing x-risks, and there might be more value added by having a place dedicated to collecting the somewhat less common perspectives
  4. Just the neartermist argument for supporting reducing x-risks
    • The reason we might want to have this scope is that this is probably the most prominent non-longtermist argument for x-risk reduction, and the one that seems most important to me and, I think, to Pablo and various other EAs
      • Relatedly, this also seems like the most "EA-aligned" non-longtermist argument for x-risk reduction
  5. Any of the above, but focused more specifically on just arguments for prioritising x-risk reduction
    • Maybe also covering direct rebuttals, but not covering distinct arguments against
      • But I'm not sure how well that distinction can be made
  6. Any of the above, but for extinction risk specifically
  7. Any of the above, but for global catastrophic risks more broadly

Currently I think I lean towards 2 or 3, with 1 just behind. But I'm unsure.

With all of that in mind, here are some possible names, in roughly descending order of how much I like them:

  • Moral perspectives on existential risk reduction
  • Non-longtermist perspectives on existential risk reduction
  • Existential risk prioritization for non-longtermists
  • Non-longtermist arguments for existential risk reduction
  • Arguments for reducing existential risk
  • Arguments for existential risk prioritization
  • Near-termist existential risk prioritization
  • Near-termist arguments for existential risk reduction
  • Alternative perspectives on existential risk prioritization
    • I don't really like tag names that say "alternative" in a way that just assumes everyone will know what they're alternative to, but I'm throwing the idea out there anyway, and we do have some other tags with names like that
  • Any of the above names, but with "x-risks" instead (just to shorten it, while keeping the scope the same)
  • Any of the "Arguments for" names, but with "Arguments for and against" instead
  • Any of the above names that don't say "prioritisation" or similar, but tweaked to say "prioritising existential risks" or "prioritising existential risk reduction" or similar
  • Any of the above names, but with "extinction risk" or "GCRs" or "global catastrophic risks" (this would change the scope)
  • Any of the "near-termist" names, but with "short-termist" instead

How about "ethics of existential risk reduction"?

"Ethics of X" is a standard phrase.

Ok, I've now changed the title and changed the first sentence to: 

The ethics of existential risk concerns the questions of how bad an existential catastrophe would be, how good it is to reduce existential risk, precisely why those things are as bad or good as they are, and how this differs between different specific existential risks.

This makes me notice something that's a bit odd about this wiki (compared to Wikipedia), which is that sometimes we're kind of making up a name and scope for what was really, until a given wiki edit, just a bundle of papers, blog posts, etc. Like, authors hadn't necessarily said "My paper is part of the body of work on the ethics of existential risk", and no one had previously said specifically that the ethics of existential risk covers those 4 questions I mention there. So this edit of mine is quite "original research"-y in the Wikipedia sense. 

Perhaps a more honest phrasing would be "We could use the term 'the ethics of existential risk' to describe a disparate, scattered collection of work that covers some combination of the questions of..." But that sounds less encyclopaedic. 

I'm not sure this is a problem, but it seems slightly odd, and I wanted to flag it in case other people had thoughts.

Expanding on "I'm not sure this is a problem": I feel like I, Pablo, and probably some other people are happy with me making edits like this, which are kind-of original research yet phrased as if what we're describing already existed with that name. But I don't know if we should be ok with editors in general doing that. So maybe we should have some policy indicating when it is vs isn't ok, how to approach it, that people should flag on the Discussion page when they've done it so it can be reviewed, etc.?

Actually, Googling "ethics of existential risk" does yield a fair number of hits at FHI, 80,000 Hours, etc. So I think calling it that isn't at risk of being original research.

Regarding your last paragraph, I think that it's in general a good idea if people flag on the Discussion page when they want to make big and non-obvious edits or additions (the threshold can be discussed). But that's a more general issue (doesn't just pertain to edits that could be seen as original research). I don't have a clear sense of exactly how it should be done, though.

What do folks think of just 'Ethics of existential risk'? The form would match other Wiki entries, such as Psychology of effective altruism. Also, similar formulas have been used in the academic literature: e.g. the subtitle of John Leslie's book is The Science and Ethics of Human Extinction (as opposed to The Science and Ethics of Human Extinction Prevention). I don't have a particularly strong preference, though.

I prefer this option to all others mentioned here. 

Yes, I think I prefer that (see my subsequent comment).

Yeah, I think that's probably better than all the suggestions listed above (including the current name). My current plan: Wait a day or so to see if there's any further commentary, and then probably change the name to "ethics of existential risk reduction".

Thanks for the suggestion :)

Great! Or just "ethics of existential risk".

Also, my hunch is that "existential risk" is better than "x-risk" in Wiki articles, since I think the Wiki should have a somewhat formal tone.

Yeah, thanks again, I think those are both good suggestions. 

I usually prefer "existential risk" in general and especially for the wiki. I deliberately decided to deviate from that general policy here, but I can't remember for sure why and I'm not sure I endorse it - I think it was basically just that the term is used a lot here, so it's a bit annoying to write and read the full version every time. But that's probably outweighed by the perks of sounding professional/formal. I've now switched this entry's text to saying "existential risk".

Thanks! Yeah I get that it may look slightly clunky but also agree that that's outweighed by the advantages of sounding more formal.

FWIW, and setting aside stylistic considerations for the Wiki, I dislike 'x-risk' as a term and avoid using it myself even in informal discussions. 

  • it's ambiguous between 'extinction' and 'existential', which is already a common confusion
  • it seems unserious and somewhat flippant (vaguely comic book/sci-fi vibes)
  • the 'x' prefix can connote the edgy or the sexual (e.g. X Games; x-rated; Generation X?)
  • 'x' also often denotes an unknown value (e.g. in 'Cause X' — another abbreviation I dislike; or indeed Stefan's comment earlier in this thread)

Thanks for this comment. I was already aware of the first two downsides, and often lean away from the term for those reasons. But I hadn't considered the other two downsides, and they make sense to me, so this updates me towards more consistently avoiding the term.

Out of interest, do you use "x-risk" in e.g. Slack threads, google doc comments, and conversations at lunch? I.e., in contexts that are not just informal but also private and two-way (so it's easier to notice if something has been misunderstood or left a bad impression)? I think by default I'd continue to do that myself.

Thanks. I updated the Style Guide to reflect this:

The terms 'effective altruism', 'existential risk' and other expressions commonly abbreviated in informal discussion should be spelled in full and—unless they occur as part of a name—in lowercase.

I can't off the top of my head think of situations where the abbreviated form would be more appropriate, but if others have concrete cases in mind, please mention them here so that we can revise the Guide, if necessary.