While longtermism is an interesting ethical principle, I believe the consequences of the sheer uncertainty about how current decisions affect far-future outcomes have not been fully explored. Specifically, while the expected value of an intervention may seem reasonable, the magnitude of the uncertainty around it is likely to dwarf it. I wrote a post on this and, as far as I can tell, I have not seen a good argument addressing these issues.

https://medium.com/@venky.physics/the-fundamental-problem-with-longtermism-33c9cfbbe7a5

To be clear, I understand the risk-reward tradeoff argument and how one is often irrationally risk-averse, but that is not what I am talking about here.

One way to think of this is the following: if the impact of a present intervention on the long-term future is characterized as a random variable X(t), then, while the expectation value could be positive,

E[X(t)] > 0,

the standard deviation as a measure of uncertainty,

σ(t) = √Var[X(t)],

could be so large that the ratio of the expected value to the standard deviation (the reciprocal of the coefficient of variation) is very small:

E[X(t)] / σ(t) ≪ 1.

Further, if the probability of a large downside, P(X(t) < −c), is not negligible, where c > 0 is a loss at least comparable to the expected gain, then I don't think that the intervention is very effective.
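To make this a bit more concrete, here is a minimal toy simulation (the distribution, drift, and growth rates are arbitrary assumptions of mine, not a calibrated model) in which E[X(t)] is positive at every horizon, yet E[X(t)]/σ(t) collapses and the probability of a large downside stays non-negligible:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_impact(t, n_samples=100_000, drift=1.0, vol=5.0):
    """Toy model: impact X(t) has a small positive drift, but its
    standard deviation grows faster than its mean as the horizon t grows."""
    mean = drift * t           # E[X(t)] grows linearly with t
    sigma = vol * t ** 1.5     # sigma(t) grows faster than the mean
    return rng.normal(mean, sigma, n_samples)

for t in [10, 100, 1000]:
    x = simulate_impact(t)
    ratio = x.mean() / x.std()              # E[X(t)] / sigma(t)
    p_downside = (x < -x.mean()).mean()     # P(X(t) < -E[X(t)])
    print(f"t={t:5d}  E/sigma={ratio:.4f}  P(large downside)={p_downside:.3f}")
```

The exact exponents do not matter; the point is simply that a positive expectation says very little once the noise swamps the signal.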

 

Perhaps I have missed something here, or there are good arguments against this perspective that I am not aware of. I'd be happy to hear about these.


This talk and paper discuss what I think are some of your concerns about growing uncertainty over longer and longer horizons.

This is a very interesting paper, and while its introduction covers a lot of the ground that I have described, the actual cubic growth model used has a number of limitations. Perhaps the most significant is that it takes the causal effect of an intervention to diminish over time and converge towards some inevitable state: more precisely, it assumes P_A(S) − P_B(S) → 0 as t → ∞, where S is some desirable future state and A and B are distinct interventions at present.

Please correct me if I am wrong about this.
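As a toy illustration of the kind of assumption I mean (this is my own construction for clarity, not the model actually used in the paper):

```python
import numpy as np

def p_desirable_state(t, boost, baseline=0.30, decay=0.01):
    """Toy model: probability of reaching the desirable state S by time t.
    An intervention adds a 'boost' whose effect decays exponentially,
    so every intervention converges towards the same baseline."""
    return baseline + boost * np.exp(-decay * t)

for t in [10, 100, 1000]:
    diff = p_desirable_state(t, boost=0.10) - p_desirable_state(t, boost=0.02)
    print(f"t={t:5d}   P_A(S) - P_B(S) = {diff:.6f}")   # tends to 0 as t grows
```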

However, the introduction considers not just interventions fading out in terms of their ability to influence future events, but also their sheer unpredictability. In fact, much as I did, it cites the idea from chaos theory:

... we know on theoretical grounds that complex systems can be extremely sensitive to initial conditions, such that very small changes produce very large differences in later conditions (Lorenz, 1963; Schuster and Just, 2006). If human societies exhibit this sort of "chaotic" behavior with respect to features that determine the long-term effects of our actions (to put it very roughly), then attempts to predictably influence the far future may be insuperably stymied by our inability to measure the present state of the world with arbitrary precision.

 

But the model does not consider any of these cases. 

In any case, by the author's own analysis (which is based on a large number of assumptions), there are several scenarios where the outcome is not favorable to the longtermist.

Again, interesting work, but this modeling framework is not very persuasive to begin with (regardless of which way the final results point).

(Note: I'm not well-steeped in the longtermism literature, so don't look to me as some philosophical ambassador; I'm only commenting since I hadn't seen any other answers yet.)

I get lost with your argument when you say "the standard deviation as a measure of uncertainty [...] could be so large that the ratio of the expected value to the standard deviation is very small." What is the significance/meaning of that?

I read your Medium post and I think I otherwise understand the general argument (and even share similar concerns at times). However, my response to the argument you lay out there would mainly be as follows: yes, it technically is possible that a civilizational nuclear reset could lead to good outcomes in the long term, but it's also highly improbable. In the end, we have to weigh what is more plausible. While there will be a lot of uncertainty, it isn't fair to characterize every situation as purely or symmetrically uncertain, and one of the major goals of longtermism is to seek out the cases where an intervention seems more likely to help than to hurt in the long term.

One of the major examples I've heard longtermists talk about is reducing x-risk. You seem to take issue with this point, but I think the reasoning here is tenuous at best. More specifically, consider the example you give of a nuclear reset leading to a society that is so "enlightened... that they no longer farm animals for food." Does it seem more plausible that a nuclear reset will lead to an enlightened society or to a worse society (with enormous suffering in the process)? As part of this, consider all the progress our current society has made in the past ~60 years in areas like lab-grown meat and veganism, and how much progress in these and other fields would be lost in such a scenario. In this case, it seems far more plausible that preventing a nuclear holocaust will be better for the long-term future.

Thanks for the response. I believe I understand your objection, but it would be helpful to distinguish the following two propositions:

a. A catastrophe in the next few years is likely to be horrible for humanity over the next 500 years.

b. A catastrophe in the next few years is likely to leave humanity (and other sentient agents) worse off over the next 5,000,000 years, all things considered.

 

I have no disagreement at all with the first but am deeply skeptical of the second. And that's where the divergence comes from.

The example of a post-nuclear generation being sensitive to animal rights is just one possibility that I advanced; one may consider other areas such as universal disarmament, open borders, or the end of racism and sexism. If the probability of a more tolerant humanity emerging from the ashes of a nuclear winter is even 0.00001, then, from the perspective of someone looking back 100,000 years from now, it is not at all obvious that the catastrophe was bad, all things considered.

For example, whatever the horrors of WWII may have been, the relative peace and prosperity of Europe since 1945 owes a significant deal to the war. In addition, the widespread acknowledgement of norms and conventions around torture and human rights is partly a consequence of the brutality of the war. That of course is far from enough to conclude that the war was a net positive. However, 5,000 years into the future, are you sure that in the majority of scenarios WW2 would, in retrospect, still be a net-negative event?

In any case, I have also added this to the post:

If a longtermist were to state that the expected number of lives saved in T (say 100,000) years is N (say 1,000,000), that the probability of saving at least M (say 10,000) lives is 25%, and that the probability of causing more deaths (or harm engendered) is less than 1%, all things considered (i.e., accounting for counterfactuals and opportunity costs), then I'll put all this aside and join the club!

This seems to be an issue of only considering one side of the possibility distribution. I think it's very arguable that a post-nuclear-holocaust society is just as likely, if not more likely, to be more racist/sexist, more violent or suspicious of others, more cruel to animals (if only because our progress in, e.g., lab-grown meat would be undone), etc. in the long term. This is especially the case if history just keeps going through cycles of civilizational collapse and rebuilding, in which case we might have to suffer for hundreds of thousands of years (and subject animals to that many more years of cruelty) until we finally develop a civilization that is capable of maximizing human/sentient flourishing (assuming we don't go extinct!).

You cite the example of post-WW2 peace, but I don’t think it’s that simple:

  1. There were many wars afterwards (e.g., the Korean War, Vietnam); they just weren't as global in scale. Thus, WW2 may have been more of a peak outlier at a unique moment in history.

  2. It’s entirely possible WW2 could have led to another, even worse war—we just got lucky. (consider how people thought WW1 would be the war to end all wars because of its brutality, only for WW2 to follow a few decades later)

  3. Inventions such as nuclear weapons, the strengthening of the international system in terms of trade and diplomacy, the disenchantment with fascism/totalitarianism (with the exception of communism), and a variety of other factors seem to have helped prevent a WW3; the brutality of WW2 was not the only factor.

Ultimately, the argument that seemingly horrible things like nuclear holocausts (or the Holocaust) or world wars are likely to produce good outcomes in the long term still seems generally improbable to me. (I just wish someone who is more familiar with longtermism would contribute.)

You're completely correct about a couple of things, and not only am I not disputing them, they are crucial to my argument: first, that I am focusing on only one side of the distribution, and second, that the scenarios I am referring to (the WW2 counterfactual or a nuclear war) are improbable.

Indeed, as I have said, even if the probability of the future scenarios I am positing is of the order of 0.00001 (which makes them improbable), that can hardly be grounds to dismiss the argument in this context, precisely because longtermism appeals to the immense consequences of events whose absolute probability is very low.

 

At the risk of quoting out of context:

If we increase the odds of survival at one of the filters by one in a million, we can multiply one of the inputs for C by 1.000001.
So our new value of C is 0.01 x 0.01 x 1.000001 = 0.0001000001
New expected time remaining for civilization = M x C = 10,000,010,000
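For what it's worth, the arithmetic in that quote checks out. Here is a quick check (the value of M is my own inference from the quoted result, not a figure I am taking from the source):

```python
M = 10**14                       # assumed maximum remaining time (years); inferred, not from the source
C_old = 0.01 * 0.01              # two filters, each survived with probability 1%
C_new = 0.01 * 0.01 * 1.000001   # one filter improved by one in a million

print(f"{C_new:.10f}")           # 0.0001000001
print(f"{M * C_old:,.0f}")       # 10,000,000,000
print(f"{M * C_new:,.0f}")       # 10,000,010,000
```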

 

In much the same way, it's absolutely correct that I am referring to one side of the distribution; however, it is not because the other side does not exist or is not relevant, but rather because I want to highlight the magnitude of the uncertainty and how it expands with time.

 

It follows also that I am in no way disputing (and my argument is somewhat orthogonal to) the different counterfactuals for WW2 you've outlined.  

I see what you mean, and again I have some sympathy for the argument that it's very difficult to be confident about a given probability distribution in terms of both positive and negative consequences. However, to summarize my concerns here, I still think that even if there is a large amount of uncertainty, there is typically still reason to think that some things will have a positive expected value: preventing a given event (e.g., a global nuclear war) might have a ~0.001% chance of making existence worse in the long term (possibility A), but it seems fair to estimate that preventing the same event also has a ~0.1% chance of producing an equal amount of long-term net benefit (B). Both estimates can be highly uncertain, but there doesn't seem to be a good reason to expect that (A) is more likely than (B).
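As a rough sketch of the arithmetic behind this (the probabilities are the ones above; the common magnitude of the good and bad outcomes is an arbitrary placeholder):

```python
p_worse = 0.00001   # ~0.001% chance that prevention makes the long-term future worse (A)
p_better = 0.001    # ~0.1% chance that prevention yields an equal long-term net benefit (B)
magnitude = 1.0     # arbitrary common magnitude of the harm and the benefit

expected_value = p_better * magnitude - p_worse * magnitude
print(expected_value)   # positive (~0.00099); the sign only flips if (A) is ~100x more likely than assumed
```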

My concern thus far has been that it seems like your argument is saying "(A) and (B) are both really hard to estimate, and they're both really low likelihood—but neither is negligible. Thus, we can't really know whether our interventions are helping. (With the implicit conclusion being: thus, we should be more skeptical about attempts to improve the long-term future)" (If that isn't your argument, feel free to clarify!). In contrast, my point is "Sometimes we can't know the probability distribution of (A) vs. (B), but sometimes we can do better-than-nothing estimates, and for some things (e.g., some aspects of X-risk reduction) it seems reasonable to try."

...it seems like your argument is saying "(A) and (B) are both really hard to estimate, and they're both really low likelihood—but neither is negligible. Thus, we can't really know whether our interventions are helping. (With the implicit conclusion being: thus, we should be more skeptical about attempts to improve the long-term future)"

Thanks, that is a fairly accurate summary of one of the crucial points I am making, except I would also add that the difficulty of estimation increases with time. And this is a major concern here, because the case for longtermism rests precisely on there being a greater and greater number of humans (and other sentient, independent agents) as the time horizon expands.

 

Sometimes we can't know the probability distribution of (A) vs. (B), but sometimes we can do better-than-nothing estimates, and for some things (e.g., some aspects of X-risk reduction) it seems reasonable to try.

Fully agree that we should try, but the case for longtermism remains rather weak until we have some estimates and bounds that can be reasonably justified.

I think that the case for longtermism gets stronger if you consider truly irreversible catastrophic risks, for example human extinction. Let's say that there is a 10% chance of the extinction of humankind. Suppose you suggest some policy that reduces this risk by 2 percentage points but introduces a new extinction risk with a probability of 1%. Then it would be wise to enact this policy.

This kind of reasoning would probably be wrong if the policy instead offered a 2% chance of a very good outcome, such as unlimited cheap energy, but came with an additional extinction risk of 1%.
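To spell out the arithmetic of the two cases (treating the small risks as roughly additive, which is an approximation I am adding):

```python
base_risk = 0.10   # assumed baseline probability of human extinction

# Case 1: the policy removes 2 percentage points of extinction risk
# but introduces a new 1% extinction risk.
case_1 = (base_risk - 0.02) + 0.01
print(f"{case_1:.2f}")   # 0.09 < 0.10: net extinction risk falls, so the policy looks wise

# Case 2: the policy offers a 2% chance of a very good outcome
# (e.g. unlimited cheap energy) but adds a 1% extinction risk.
case_2 = base_risk + 0.01
print(f"{case_2:.2f}")   # 0.11 > 0.10: the irreversible downside grows, whatever the upside
```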

Moreover, you cannot argue that everything will be OK several thousand years in the future if humankind is eradicated instead of "just" reduced to a much smaller population size. 

Your forum post and your blog post contain many interesting thoughts, and I think that the role of high variance in longtermism is indeed underexplored. Nevertheless, I think that even if everything you have written is correct, it would still be sensible to limit global warming and to care about extinction risks.
