
I've always had significant concerns regarding longtermism and its fundamental assumptions, but my skepticism has only grown over time. This is despite, or perhaps because of, greater engagement with the views on the topic, the critiques, and the responses.

I have tried to examine carefully the claims and justifications of the longtermist view, together with its proponents' responses to the standard objections, but I am still left with the sense that there are serious issues that have not been adequately addressed. Before getting to the details of what I believe is problematic about longtermism, I admit that it is entirely possible that there have been powerful responses to these objections that I am so far unaware of. Part of the point of writing this and stating the issues as precisely as possible is the hope that someone can point to comprehensive responses to them if they exist.
 

My objections fall broadly into two categories: the first concerns the assumptions about the consequences of an extinction event, and the second our ability to control what happens in the far future. In line with the literature on the subject, the first relates to the concept of cluelessness and the second to the tractability of affecting the far future.

Let me start with what I believe is the sequence of reasoning leading to the longtermist perspective:

  1. There will likely be a very large number of humans and post-humans coming into existence over the long term future (millions and billions of years from now).
  2. The welfare of these humans should matter to us from an ethical standpoint. Specifically, if there are actions we can take now that would improve the welfare of these future individuals or, more dramatically, ensure their survival, they should enter into our present-day ethical calculus.
  3. The sheer scale of the future of humanity means that even a small positive change over that future has an enormous total value, dwarfing anything else that may concern us presently.
  4. The above is true even if we use a conservative estimate of future human lives and discount their welfare in comparison to our own.
  5. An extinction risk (x-risk) event is defined as one in which humanity is either entirely wiped out or has its potential drastically reduced.
  6. In the specific context of x-risk, any action taken to reduce the probability of such an event would, given the previous assumptions, have a huge impact in terms of overall expected value (a toy calculation illustrating this follows the list).
  7. As such, most individuals, societies and governments would be concerned about extinction risk without any knowledge of longtermism, because the consequences seem very obviously bad on the time horizons that people generally think in.
  8. What distinguishes longtermism, then, is the claim that (a) extinction risk is even more important than we may naively assume because of its outsized impact on the far future, which is generally not factored in, and (b) we should also be considering efforts to reduce x-risk events far out in the future.
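To make the arithmetic behind points 3-6 concrete, here is a toy calculation with purely illustrative numbers (the figure of $10^{16}$ future lives and the $10^{-10}$ reduction in extinction probability are assumptions for this sketch, not estimates from the longtermist literature):

$$\mathbb{E}[\Delta V] \approx \Delta p \times N = 10^{-10} \times 10^{16} = 10^{6} \text{ expected lives saved},$$

so even a minuscule reduction in extinction probability is claimed to dominate interventions that save thousands of lives today. Much of what follows turns on whether $\Delta p$ and the value attached to those $N$ lives can be meaningfully estimated at all.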
     

Now, I am largely in agreement with points 1-4 and 7. One of my objections (cluelessness) relates to assumption 6. In addition, I have concerns about 8(b), though I should mention that I am not entirely sure whether that assumption is even correct. Specifically, insofar as extinction risk is concerned, is the goal of longtermism to reduce the probability of such x-risks in the far future too, as opposed to something near- or mid-term (about 100-1000 years)? I tried to find out if the time-horizons for potential x-risk events have been explicitly discussed in longtermism literature but I didn’t come across anything. In what follows I assume that assumption 8(b) is part of the framework of longtermism.

Also, if I am not mistaken, an x-risk event need not wipe out all life on earth; the definition pertains to humanity's survival and continuation as a species. In fact, do we know of any sudden catastrophic event that would eliminate all life on the planet?

 

Cluelessness

 

Let me first deal with the cluelessness problem. What is absolutely clear is that an extinction event would be terrible in the short and mid-term if we considered only the welfare of humans or post-humans. On the other hand, if we included the welfare of all sentient species, with a reasonable choice of weights for each, then it is quite possible that the persistent state of the world is in fact net negative in value (given the enormous number of farmed land animals and fish, and the even greater number of animals in the wild). An extinction event might then move the net value from negative to zero.

However, let's ignore that for the moment and focus on the human race only. While on a time horizon of up to 1,000-5,000 years, or maybe even more, the expected value of an extinction event is quite certainly negative, and perhaps hugely so, this is far less clear when we look at much longer horizons. Why might that be?

Let's consider both the future after such an extinction event and the counterfactual future in which no such event occurs.

Post-extinction civilization

One of the intriguing possibilities is the evolution of post-extinction life forms (we've already recognized that it is almost impossible for an extinction event to wipe out every life form as it exists today) towards organisms with human-level intelligence. Is there any specific reason for discounting the possibility of arthropods or reptiles evolving over millions of years into something that equals or surpasses the intelligence of the last humans alive? And over timescales of billions of years, we could enter the possibility of evolution from basic eukaryotes too.

In fact, the life forms that post-extinction evolution brings into existence need not resemble modern-day humans or have comparable intelligence. The attribute relevant for assigning welfare value is, of course, the degree of valenced experience the species is capable of, and in that regard a wide variety of organisms could possess it.

Needless to say, the evolution towards something complex is not a certainty, but since so much of longtermism rests on tiny probabilities, shouldn't we be factoring in the probability associated with this too (unless one can argue that it is so infinitesimally small that it ends up being negligible even in longtermist calculations)?

If we accept the possibility of advanced sentient life post-extinction, then what is the basis for assuming that the lives of such beings would have less overall value than those of the humans and post-humans in the counterfactual universe where the extinction did not happen? More specifically, is there any good reason to assume that the odds are in favor of humans even by a little bit? If so, what exactly is the argument for that?

Again, it is very important to note that such considerations come into play only when we look far into the future, over millions and billions of years. That far future is of course central to the longtermist calculus, which is predicated on the potential for human and post-human existence over such vast timescales. In other words, the very attribute that is used to argue in favor of longtermism is also what introduces deep uncertainty into any assumptions one could make about the net value.

Averting an extinction event

Conversely, if an extinction event were to be averted at some time t_0 in the future owing to a longtermist-driven intervention, how do we know that the net value of all humans (or comparably advanced life forms) that will come into existence beyond t_0 will even be positive? Again, one has to bear in mind the sheer magnitude of the timescales and the multitude of trajectories that are possible. An argument typically advanced here rests on the assumption that future humans will value things similar to what we value, or close to it, and hence our altruistic and other-regarding tendencies will prevail. This is not very convincing to me, for the simple reason that it sounds extremely speculative amidst tremendous uncertainty.
 

In their paper on the case for strong longtermism, Greaves and MacAskill recognize cluelessness as a serious problem but nonetheless maintain that decisions taken with the long-term future in mind are ex ante rational. Perhaps I have missed something, but I am not sure they offer much in the way of justification beyond observing that it would be overly pessimistic to assume that we have no idea what the future will look like. This is not a very convincing argument, as it is neither far-fetched nor pessimistic to think that the future a hundred million years from now may look utterly different from anything we can extrapolate from modern-day human preferences and values.

Tractability

Our ability to alter future events or trajectories in specific ways by taking actions in the present diminishes with the temporal separation between us and the future we want to influence. It is entirely possible that we can come up with effective strategies to minimize asteroid collisions in the near term (~100 years), but what happens when we think of an extinction event occurring 5,000 years from now? If there is a relatively well-defined causal sequence stretching from now until the event occurs, we can of course come up with ways to disrupt that chain and nullify the threat. An example is the concern around a takeover of the world by highly capable AI agents. Here we can proactively take steps to minimize the probability of that occurring, but it should be mentioned that this involves (a) a continuous course of action for as long as AI systems are around, not merely a one-off decision, and (b) actions that align with near-term priorities as well. However, what about phenomena that don't have such a clear causal linkage to the present?
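One way to make this worry explicit (my own gloss, not notation drawn from the longtermist literature) is to attach a tractability factor to the expected-value calculation:

$$\mathbb{E}[\Delta V(t)] \approx q(t) \times \Delta p \times N,$$

where $q(t)$ is the probability that an action taken today actually produces its intended effect on an event a time $t$ away. For near-term, causally well-understood threats such as asteroid deflection, $q(t)$ may be appreciable; for an event 5,000 years out with no traceable causal chain, there is no obvious reason to think $q(t)$ is not vanishingly small, and the standard argument offers no guidance on how to estimate it.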
 

In this context, I’ve come across references to the paper by Christian Tarsney on epistemic challenges to longtermism, which claims to address the washing-out hypothesis. Despite the introductory sections setting up the problem very well (even acknowledging results from chaos theory in complex systems), the actual model not only fails to deal with any of this but has built into it the assumption that human intervention over the next 1,000 years will reduce the probability of an extinction event in that period by some p. In other words, the very issue the paper ostensibly provides new insights on is simply assumed at the outset of the proposed model! With that as the starting point, the rest of the paper estimates the expected value of future civilizational expansion, described by a specific model with several free parameters that are then analyzed for various outcomes. While that type of work is interesting (although extremely speculative), it has no bearing on the specific tractability question: how likely is it that an action taken today will have the intended effect several hundred years from now, when there is no clear causal chain of events one can trace?
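Schematically (this is my own simplification for the purpose of the point, not the paper's actual formalism), the structure is:

$$\mathbb{E}[\text{value of intervention}] \approx p \times V_{\text{future}},$$

where $V_{\text{future}}$ is the modelled value of future civilizational expansion and $p$, the probability that the intervention averts extinction within the next 1,000 years, enters as a free input rather than something derived, which is exactly the tractability question at issue.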

The line of argument I often see deployed to justify a long-term focus relies on the assumption that, in expectation, a very small change in probability in a desirable direction leads to extremely large gains. Now, leaving aside philosophical objections to arguments based on tiny probabilities, this reasoning would seem to apply more forcefully to preventing an extinction in the near term, or more generally over a time horizon where we can make a case for having significant control over outcomes. If we accept that, then the natural extension for mitigating such events in the more distant future is to ensure that the importance of doing so is passed on from one generation to the next, rather than taking specific steps today to directly avert them. Each generation decides the optimal strategies for avoiding extinction over a time window in which it has a feasible degree of control over events.


No real objections after all?

One thing to observe here is that the objections raised above arise in the context of claims about the events and welfare of humans and post-humans in the very long-term future. Indeed, by definition, that perspective is baked into the very terminology.

However, as noted earlier, extinction risk considerations in the near term are extremely important, and there is much less uncertainty about the catastrophic consequences in the near and mid-term. It is also probably true that existing mitigation efforts are insufficient and not commensurate with the magnitude of the risks such rare events pose to humanity's survival. If this is the view one takes, there is nothing in what I have said so far that challenges it in any way.

It is only when we start making claims and assumptions around the prospects, priorities or experiences of humans millions of years from now that we find that the evidence is lacking and the justifications are vague and unconvincing. 

Again, it is possible that there are more sophisticated arguments that have explored these assumptions and objections more rigorously but I happen to be unaware of them.  


 

Comments

the evolution towards something complex [after human extinction] is not a certainty but since so much of longtermism rests on tiny probabilities, shouldn't we be factoring in the probability associated with this too

Note that this is one of the "exogenous nullifying events" that Tarsney's model incorporates. The mere possibility that human survival isn't needed to secure a far-future flourishing civilization does not by itself undermine the claim that human survival improves the odds of better longterm outcomes. (But it's generally true that the higher the chances you place on such positive 'nullification', the less expected value you should assign to existential risk mitigation.)

Thanks for the comment. It is true that positive ENEs are part of the Tarsney model, but there is no value assigned to the counterfactual scenario there (implicitly the value is 0). In fact, ENEs are relevant to the model only insofar as they represent events that nullify the extinction risk mitigation effort. There is no consideration of what the future might look like under such scenarios and how it might diverge from the one where humanity's existence continues.

It is quite possible that human survival "improves the odds" of better outcomes, as you say, but I am curious whether there has been a more comprehensive exploration of this question. Has there been an analysis examining the likelihood of post-extinction life forms and considering the various evolutionary scenarios? In the absence of that, this seems like a rather hand-wavy claim, and while that is not in and of itself a reason to reject something, the case for longtermism needs either (a) a less rigorous argument that the overall probability distribution for intervention is favorable, and not just the expected value, or (b) a fairly robust argument that at least the expected value is higher.

I tried to find out if the time-horizons for potential x-risk events have been explicitly discussed in longtermism literature but I didn’t come across anything.

See here

Interesting considerations. If one accepts that these developments will happen within about 500 years from now, does that set the upper bound on when the relevant extinction risk events would occur?

I think you raise some really interesting points here, and I am inclined to agree with your skepticism of longtermism.

I just have one comment on your "tractability" section. In my understanding, longtermists advocate that we should prioritize reducing existential risk in the near-term, and say very little about reducing it in the long-term. I don't think I have seen longtermists advocating for your claim (8b) (although correct me if I'm wrong!). I think you're right that the tractability objection would make this claim seem very far-fetched.

The "longterm" bit of longtermism is relevant only in how they assess the value of reducing near-term existential risk, as you explain in your introduction. Longtermists believe that reducing near-term existential risk is overwhelmingly important, in a way that other people don't (although as you also point out, most people would still agree it is extremely important!)

I think the crucial point for longtermists is that reducing near-term existential risk is one of the only ways of having a very large positive influence on the far-future that is plausibly tractable. We 'only' have to become convinced that the future has astronomically large positive expected value, and this then automatically implies that reducing near-term existential risk will have an astronomically large positive expected impact. And reducing near-term extinction risk is something it feels like we have a chance of being successful at, in a way that reducing extinction risk in 5,000 years doesn't.

If anything, not only do longtermists not focus on reducing existential risk thousands of years from now, you can also argue that their worldview depends on the assumption that this future existential risk is already astronomically low. If it isn't, and there is a non-negligible probability per year of humanity being wiped out that persists indefinitely, then our expected future can't be that big. This is the "hinge of history"/"precipice" assumption: existential risk is quite big right now (so it's a problem we should worry about!) but if we can get through the next few centuries then it won't be very big after that (so that the expected value of the future is astronomical).

More specifically, is there any good reason to assume that the odds are in favor of humans even by a little bit? If so, what exactly is the argument for that?

There is a good argument from your perspective: human resource utilization is likely to be more similar to your values on reflection than that of a randomly chosen other species.

I've heard this point made elsewhere too, but I am not sure I fully understand it. What exactly are the values on reflection you are referring to here? Is it the values typically shared by those with a utilitarian bent, or by other philosophical schools that focus roughly on the well-being of all beings capable of experiencing pleasure and pain? A value system that is not narrowly focused on maximization for a minority to the exclusion of others?

Now, even in the real world, systems are set up in clear violation of such principles, which is part of the reason for inequality, exploitation, marginalization, etc. And while one may argue that over centuries we would become enlightened enough to collectively recognize these evils, it is not entirely obvious we would eliminate them.

In any event, why do we assume that a different advanced civilization (especially one arising post-extinction from some of our common ancestors) would not converge to something similar, especially since we recognize that the empathy and cooperation that form the basis for more sophisticated altruistic goals have played a role in our own survival as a species?

Maybe I am missing something, but even probabilistically speaking, why assume one is more likely than the other?

Executive summary: The author argues that longtermism rests on unjustified assumptions about the consequences of extinction events and our ability to control the far future, though near-term extinction risk reduction is still important.

Key points:

  1. Longtermism assumes extinction events would be catastrophic even over vast timescales, but post-extinction evolution could lead to comparable or greater welfare.
  2. Over millions or billions of years, it's highly uncertain whether post-extinction trajectories would be worse than trajectories where extinction is averted.
  3. Our ability to alter the far future diminishes over time, and there are no clear causal chains to influence events thousands of years from now.
  4. Arguments based on tiny probabilities of affecting the far future are unconvincing and could apply more to near-term risk reduction.
  5. Near-term extinction risk reduction is important and efforts may be insufficient, but claims about the far future rest on unjustified assumptions.

 

 

This comment was auto-generated by the EA Forum Team.

Is there any specific reason for discounting the possibility of arthropods or reptiles evolving over millions of years into something that equals or surpasses the intelligence of the last humans alive?

No, I think analysis shouldn't discount this. Unless there is an unknown hard-to-pass point (a filter) between existing mammals/primates and human level civilization, it seems like life re-evolving is quite likely. (I'd say 85% chance of a new civilization conditional on human extinction, but not primate extinction, and 75% if primates also go extinct.)

There is also the potential for alien civilizations, though I think this has a lower probability (perhaps 50% that aliens capture >75% of the cosmic resources in our light cone if Earth-originating civilizations don't capture these resources).

IMO, the dominant effect of extinction due to bio-risk is that a different Earth-originating species acquires power, and my values on reflection are likely to be closer to humanity's values on reflection than to the other species'. (I also have some influence over how humanity spends its resources, though I expect this effect is not that big.)

If you were equally happy with other species, then I think you still only take a 10x discount from these considerations because there is some possibility of a hard-to-pass barrier between other life and humans. 10x discounts don't usually seem like cruxes IMO.

I would also note that for AI x-risk, intelligent life re-evolving is unimportant. (I also think AI x-risk is unlikely to result in extinction, because AIs are unlikely to want to kill all humans, for various reasons.)

And over timescales of billions of years, we could enter the possibility of evolution from basic eukaryotes too.

Earth will be habitable for roughly 1 billion more years, which probably isn't quite enough for this.
