
Introduction

I, and many others, have argued that longtermism is not needed to justify x-risk mitigation: that all such actions can be adequately justified within a neartermist framework, and that, given the poor reception, large inferential distance, and general inaccessibility of longtermist arguments, we might be better served arguing for x-risk mitigation within a strictly neartermist framework.

But this is not fully accurate.

Averting extinction makes sense within neartermist ethical frameworks (8 billion people dying is very bad), but extinction is not the only category of existential risk, and it is the only one whose mitigation can readily be justified within those frameworks.

Longtermism and Existential Risks

Excluding extinction, all the other existential risks — the very concept of an "existential risk" itself — implicitly rely on longtermism.

Toby Ord defined an existential catastrophe as an event that permanently curtails the longterm potential of humanity/human civilisation.

A few classes of existential catastrophe other than extinction:

  • Value lock-in
  • Irreversible technological regression
  • Any discrete event that prevents us from reaching technological maturity
  • Any discrete event that leads to Bostrom's "Astronomical Waste"

(I would also add "technological stagnation" to the list. It's not a discrete event [so Ord didn't consider it a catastrophe], but it has the same effect of curtailing the longterm potential of human civilisation.)

We cannot even conceive of an existential catastrophe without concepts like the "longterm potential of human civilisation", "technological maturity", "astronomical waste", and so on.

All of these are concepts that are defined only within a longtermist framework.

Thus, existential risk mitigation is an inherently longtermist project.

Caveats

While extinction risks aren't the only existential risks, they are the category that has attracted the supermajority of attention and funding.

Excluding extinction risk mitigation, other longtermist projects look like:

  • Grand strategy for humanity
  • Promoting more adequate/resilient institutions
  • Better mechanisms for coordination and cooperation
  • Governance of advanced/speculative technologies
  • Space settlement and colonisation
  • Etc.

Some of these actions may not have that large an effect on near term extinction risks.

Maybe there's an argument that we should advocate for actions mitigating near term extinction risks separately from other, more inherently longtermist actions.

Comments

Most of the things that are being pursued as longtermist interventions only require caring about our grandchildren, or maybe great-grandchildren, which is well within the scope of even many ethical frameworks that care about preferences but not future lives. The rest of the interventions potentially require caring about the next, say, 1,000 years, which still doesn't require anything like the actual longtermist assumptions. (Anything further out than that isn't really going to be amenable to the types of actions we're taking anyway.)

 

It is not clear to me that taking action on non-extinction x-risks would be in conflict with neartermist goals:

Value lockin -> like an AI singleton locking in a scenario that would not be optimal for longtermist goals? Isn't that akin to the alignment problem, and so directly intertwined with extinction risk?

Irreversible technological regression -> wouldn't this be incredibly bad for present humans and so coincide with neartermist goals?

Any discrete event that prevents us from reaching technological maturity -> wouldn't this essentially translate to reducing extinction risk as well as ensuring we have the freedom and wealth to pursue technological advancement, thus coinciding with neartermist goals?

Am I missing something?

[anonymous]

I think it's a question of priorities. Yes, irreversible technological regression would be incredibly bad for present humans, but so would lots of other things that deserve a lot of attention from a neartermist perspective. However, once you start assigning non-trivial importance to the long-term future, things like this start looking incredibly incredibly bad and so get bumped up the priority list.

Also value lock-in could theoretically be caused by a totalitarian human regime with extremely high long-term stability.

I'd add s-risks as another longtermist priority not covered by either neartermist priorities or a focus on mitigating extinction risks (although one could argue that most s-risks are intimately entwined with AI alignment).

You seem to be not considering global catastrophic risk. This would generally not cause extinction, but could cause a collapse of civilization from which we may not recover. And even if we do recover, we may end up losing significant fractions of long-term value. And even if there's not a collapse of civilization, it could make global totalitarianism more likely, or worse values could end up in AI. At least some of these could be considered existential risks in the sense that much of the long-term value is lost. And yet preventing or mitigating them can generally be justified based on saving lives in the present generation.

Averting extinction makes sense within neartermist ethical frameworks (8 billion people dying is very bad), but extinction is not the only category of existential risk, and it is the only one whose mitigation can readily be justified within those frameworks.

Doesn't this ignore the impacts of averting extinction on almost all moral patients in the near term, i.e. nonhuman animals, farmed and wild? Why think those are good or outweighed by the positive impacts on humans?

Sorry, I'm just very human-centric in my moral thinking. Considering non-human moral patients requires deliberate effort, and it's not something that readily comes to mind.

 

That said, while I do grant non-human animals some consideration in some moral decision making, I don't particularly care for them here:

I'd destroy the rest of the biosphere in a heartbeat so that humanity may flourish among the stars.
