Longtermism which doesn't care about Extinction - Implications of Benatar's asymmetry between pain and pleasure

by jushy · 1 min read · 19th Dec 2020 · 11 comments

Tags: Pain and suffering, Longtermism, Long-term future, Existential risk, Cause prioritization

EDIT: MichaelStJules pointed out that I'm mixing up extinction risks (narrower term) with existential risks (broader term). I've edited the post to fix this.

I think a major implication of longtermism is that "we should care far more about problems which will cause suffering to many generations, or problems that will deprive many generations of pleasure".

But if, like me, you accept Benatar's argument on the asymmetry of suffering and pleasure, i.e. that a lack of pleasure isn't a bad thing if no one is around to miss it, then the extinction component of an existential risk isn't a problem, since depriving many generations of pleasure by preventing them from existing in the first place isn't a bad thing.

However, many extinction risks are "progressive" in the sense that they will cause suffering for many generations before causing extinction, so they would still be a cause for concern. But the fact that they could cause extinction wouldn't really be relevant.

On the other hand, some extinction risks that EAs are concerned about could only affect a small number of generations (e.g. very large asteroids), and could almost entirely be ignored in comparison to issues which could plague many generations.

I think a reasonable number of people agree with Benatar, because I think most people don't see depriving an individual of pleasure by preventing them from existing as a 'con' of contraception.

So I think some people could benefit from thinking harder about whether they see extinction prevention as a priority in light of Benatar's argument.

Originally posted to r/effectivealtruism.

11 comments

'Longtermism' just says that improving the long-term future matters most, but it does not specify a moral view beyond that. So you can be longtermist and focus on averting extinction, or you can be longtermist and focus on preventing suffering (cf. suffering-focused ethics); or you can have some other notion of "improving". Most people who are both longtermist and suffering-focused work on preventing s-risks.  

That said, despite endorsing suffering-focused ethics myself, I think it's not helpful to frame this as "not caring" about existential risks; there are many good reasons for cooperation with other value systems.

Thank you for your input! I agree with the point about co-operation with other value systems.

EDIT: as MichaelStJules pointed out, I think I was also mixing up existential risks (a broader term) with extinction risks (a narrower term).

You might be interested in Teruji Thomas, "The Asymmetry, Uncertainty, and the Long Term", which covers different kinds of procreation asymmetries and concludes with the section "6 Extinction Risk Revisited". Some of the paper is pretty technical, although the conclusion isn't. You could read section 6, watch the talk (25 minutes), and then read section 6 again.

Just a clarification: s-risks (risks of astronomical suffering) are existential risks. I think you may be thinking of extinction risks, specifically. Some existential risks, taken broadly enough, are both extinction risks and s-risks, e.g. AI risks, although the focus of work may be different depending on the more specific kind of AI risk.

EDIT: I stand corrected. See Carl Shulman's reply.

Just a clarification: s-risks (risks of astronomical suffering) are existential risks. 

This is not true by the definitions given in the original works that defined these terms. Existential risk is defined to only refer to things that are drastic relative to the potential of Earth-originating intelligent life:

where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.

Any x-risks are going to be in the same ballpark of importance if they occur, and immensely important to the history of Earth-originating life: any x-risk is a big deal relative to that future potential.

S-risk is defined as just any case where there's vastly more total suffering than Earth history heretofore, not one where suffering is substantial relative to the downside potential of the future.

S-risks are events that would bring about suffering on an astronomical scale, vastly exceeding all suffering that has existed on Earth so far.

In an intergalactic civilization making heavy use of most stars, that definition would be met by situations where things are largely utopian but 1 in 100 billion people per year get a headache, or by a hell where everyone is tortured all the time. These are both defined as s-risks, but the bad elements in the former are microscopic compared to the latter, or to the expected value of suffering.

With even a tiny weight on views valuing good parts of future civilization, the former could be an extremely good world, while the latter would be a disaster by any reasonable mixture of views. Even with a fanatical restriction to consider only suffering and not any other moral concerns, the badness of the former should be almost completely ignored relative to the latter if there is non-negligible credence assigned to both.

So while x-risks are all critical for civilization's upside potential if they occur, almost all s-risks will be incredibly small relative to the potential for suffering, and something being an s-risk doesn't mean its occurrence would be an important part of the history of suffering, if both have non-vanishing credence.
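The scale gap in the headache-versus-torture comparison above can be made concrete with a back-of-the-envelope calculation. This is only an illustrative sketch: the total population figure is a hypothetical assumption, not a number given in the thread.

```python
# Back-of-the-envelope comparison of two scenarios that both meet the
# s-risk definition (suffering vastly exceeding Earth's history so far).
# The population figure below is an assumed, illustrative number.

population = 10**20  # people in a hypothetical intergalactic civilization

# Scenario A: largely utopian, but 1 in 100 billion people per year get a headache.
sufferers_a = population // 100_000_000_000  # 10^9 headaches per year

# Scenario B: everyone is tortured all the time.
sufferers_b = population  # 10^20 people suffering continuously

# Scenario A's suffering is a vanishing fraction of Scenario B's,
# even before weighting a headache as far milder than torture.
ratio = sufferers_a / sufferers_b
print(f"{sufferers_a:.0e} vs {sufferers_b:.0e}, ratio = {ratio:.0e}")
```

Even on a pure head-count of sufferers the first scenario is eleven orders of magnitude smaller, which is the sense in which calling both "s-risks" obscures the difference.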

From the s-risk paper:

We should differentiate between existential risks (i.e., risks of “mere” extinction or failed potential) and risks of astronomical suffering (“suffering risks” or “s-risks”). S-risks are events that would bring about suffering on an astronomical scale, vastly exceeding all suffering that has existed on Earth so far.

The above distinctions are all the more important because the term “existential risk” has often been used interchangeably with “risks of extinction”, omitting any reference to the future’s quality. Finally, some futures may contain both vast amounts of happiness and vast amounts of suffering, which constitutes an s-risk but not necessarily a (severe) x-risk. For instance, an event that would create 10^25 unhappy beings in a future that already contains 10^35 happy individuals constitutes an s-risk, but not an x-risk.

If one were to make an analog to the definition of s-risk for loss of civilization's potential, it would be something like risks of loss of potential welfare or goods much larger than seen on Earth so far. So it would be a risk of this type to delay interstellar colonization by a few minutes and colonize one less star system. But such 'nano-x-risks' would have almost none of the claim to importance and attention that comes with the original definition of x-risk. Going from 10^20 star systems to 10^20 star systems less one should not be put in the same bucket as premature extinction or going from 10^20 to 10^9. So long as one does not hold a completely fanatical view and gives some weight to different perspectives, longtermist views concerned with realizing civilization's potential should give way on such minor proportional differences to satisfy other moral concerns, even though the absolute scales are larger.

Bostrom's Astronomical Waste paper specifically discusses such things, but argues that since their impact would be so small relative to existential risk, they should not be a priority (at least in utilitarianish terms) relative to the latter.

This disanalogy between the x-risk and s-risk definitions is a source of ongoing frustration to me, as s-risk discourse thus often conflates hellish futures (which are existential risks, and especially bad ones), or possibilities of suffering on a scale significant relative to the potential for suffering (or what we might expect), with bad events many orders of magnitude smaller or futures that are utopian by common sense standards and compared to our world or the downside potential.

I wish people interested in s-risks that are actually near worst-case scenarios, or that are large relative to the background potential or expectation for downside, would use a different word or definition, one that would make it possible to say things like 'people broadly agree that a future constituting an s-risk is a bad one, and not a utopia' or at least 'the occurrence of an s-risk is of the highest importance for the history of suffering.'

This disanalogy between the x-risk and s-risk definitions is a source of ongoing frustration to me, as s-risk discourse thus often conflates hellish futures (which are existential risks, and especially bad ones), or possibilities of suffering on a scale significant relative to the potential for suffering (or what we might expect), with bad events many orders of magnitude smaller or futures that are utopian by common sense standards and compared to our world or the downside potential.

This is a fair enough critique. But I think that from the perspective of suffering-focused and many other non-total-symmetric-utilitarian value systems, the definition of x-risk is just as frustrating in its breadth. To such value systems, there is a massive moral difference between the badness of human extinction and a locked-in dystopian future, so they are not necessarily in "the same ballpark of importance." The former is only critical to the upside potential of the future if one has a non-obvious symmetric utilitarian conception of (moral) upside potential, or certain deontological premises that are also non-obvious.

Fair enough on the definitions. I had this talk in mind, but Max Daniel made a similar point about the definition in parentheses. I'm not sure people have cases like astronomical numbers of (not extremely severe) headaches in mind, but I suppose without any kind of lexicality, there might not be any good way to distinguish. I would think something more like your hellish example + billions of times more happy people would be more illustrative. Some EAs working on s-risks do hold lexical views.

EDIT: below was based on a misreading.

With even a tiny weight on views valuing good parts of future civilization, the former could be an extremely good world, while the latter would be a disaster by any reasonable mixture of views. Even with a fanatical restriction to consider only suffering and not any other moral concerns, the badness of the former should be almost completely ignored relative to the latter if there is non-negligible credence assigned to both.

This to me requires pretty specific assumptions about how to deal with moral uncertainty. It sounds like you're assuming a common scale between the theories (maximizing expected choice-worthiness), but that too could lead to fanaticism if you give any credence to lexicality. While I think there's an intuitive case for it when comparing certain theories (e.g. suffering should be valued roughly the same regardless of the theory), assuming a common scale also seems like the most restrictive approach to moral uncertainty among those discussed in the literature, and I'm not aware of any other approach that would lead to your conclusion. If you gave equal weight to negative utilitarianism and classical utilitarianism, for example, and used any other approach to moral uncertainty, it's plausible to me that s-risks would come out ahead of x-risks (although there's some overlap in causes, so you might work on both).

You could even go up a level and use a method for moral uncertainty for your uncertainty over which approach to moral uncertainty to use on normative theories, and as long as you don't put most of your credence in a common-scale approach, I don't think your conclusion would follow.

It sounds like you're assuming a common scale between the theories (maximizing expected choice-worthiness).

A common scale isn't necessary for my conclusion (I think you're substituting it for a stronger claim?), and I didn't invoke it. As I wrote in my comment, on negative utilitarianism, s-risks that are many orders of magnitude smaller than worse ones, without correspondingly huge differences in probability, get ignored in favor of the latter. On variance normalization, or bargaining solutions, or a variety of methods that don't amount to dictatorship of one theory, the weight for an NU view is not going to spend its decision-influence on the former rather than the latter when they're both non-vanishing possibilities.

I would think something more like your hellish example + billions of times more happy people would be more illustrative. Some EAs working on s-risks do hold lexical views.

Sure (which will make the s-risk definition even more inapt for those people), and those scenarios will be approximately ignored vs scenarios that are more like 1/100 or 1/1000 being tortured on a lexical view, so there will still be the same problem of s-risk not tracking what's action-guiding or a big deal in the history of suffering.

Ah, in the quote I took, I thought you were comparing s-risks to x-risks where the good is lost when giving non-negligible credence to non-negative views, but you're comparing s-risks to far worse s-risks (x-risk-scale s-risks). I misread; my mistake.

Yeah I'm not really sure why we use the term x-risk anymore. There seems to be so much disagreement and confusion about where extinction, suffering, loss of potential, global catastrophic risks, etc. fit into the picture. More granularity seems desirable.

https://forum.effectivealtruism.org/posts/AJbZ2hHR4bmeZKznG/venn-diagrams-of-existential-global-and-suffering is helpful.

Yes I am, thank you! I'll edit the post to clarify this. That would also explain the EA Survey considering X-risks and the Long-term future to be one category.