
Short summary: we don't know how much warming will make civilization collapse, and it might not be unreasonable to think of climate change as a major x-risk on these grounds.

Background/Epistemic status: I’m a climate scientist who has been in and around the Effective Altruism community for quite a few years and has spent some time thinking about worst-case outcomes. The following content was partially inspired by this forum comment by SammyDMartin; it includes many of the same points, but also takes things further and potentially in some different directions. I claim no certainty (or even high confidence) in anything said here — the main goal is to encourage discussion.

Introduction

Outside of the EA world, discussion of climate change as a top-priority emergency and/or existential threat is ubiquitous: from politicians to scientists to popular movements and books. EA literature typically takes a different view: climate change is bad but not an existential threat, and there are many other threats that could cause more damage (like unaligned AI or engineered pandemics) and deserve our marginal resources more.

The disconnect between these two views on climate change has bothered me for a long time. Not because I necessarily agree strongly with one side or the other, but because I just haven’t had a good model for why this disconnect exists. I don’t think the answer is quite as simple as “the people who think climate change is a top priority haven’t thought seriously about existential risks and cause prioritization”; anecdotally, I’ve met many people within EA who have, and who still have a nagging feeling that EA is somehow downplaying the risks from climate change. So what is going on?

This post examines two distinct, related, and potentially provocative claims that I think cut to the heart of the disconnect:

  1. most of the subjective existential risk from climate change comes from the uncertainty in how much damage is caused by a given amount of warming
  2. it's not obviously unreasonable to think the existential risk from climate change is as large as that from AI or pandemics.

The focus is on the importance of climate change as a cause area, not neglectedness or tractability; I’ll revisit this omission a bit at the end. The discussion is also purposely aimed at a longtermist/existential-risk-prioritizing audience; the importance of climate change under other moral views can and should be discussed, but I will not do so here.

Background

Following, e.g., Halstead, it is instructive to split the question of climate change damages into three numbered questions:[1]

  1. How much greenhouse gas will humanity emit?
  2. How much warming will there be per unit CO₂-equivalent greenhouse gas?
  3. How much damage will be caused (to humanity) by a given amount of warming?

Question 1 can in principle be addressed by economic and political projections, thinking about the economics of decarbonization, and so on. Question 2 is addressed by climate science: climate modeling, looking at past climate changes, and so on. Question 3 has typically been addressed by economic modeling (more on this later) and by considering hard limits for habitability by humans[2].

Much existing EA analysis of climate change as an existential risk focuses on Question 2[3]. This makes sense in part because it’s the most well-quantified: scientists and the IPCC publish probability distributions of this value, called the “climate sensitivity”[4]. Conditional on a given carbon emission scenario, we can then find out the probability of a certain amount of warming by a certain date (e.g. 2100). If you think that some extreme level of warming will cause existential catastrophe (say 6/10/14 °C, whichever you fancy[5]), then the existential risk from climate change is simply the probability of exceeding that degree of warming; the sketch below summarizes this. So far, so good.


If we assume that there is a threshold level of warming that will trigger existential catastrophe, the probability of this happening (i.e. the existential risk) is the red shaded area.
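
In symbols, writing f for the subjective probability density of warming ΔT by 2100 and T_crit for the assumed collapse threshold (notation introduced here for convenience), this is just the exceedance probability:

```latex
\mathrm{risk} \;=\; P(\Delta T > T_{\mathrm{crit}}) \;=\; \int_{T_{\mathrm{crit}}}^{\infty} f(\Delta T)\,\mathrm{d}(\Delta T)
```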

The distinction between existential catastrophe and human extinction is important; the former is a destruction of humanity’s long-term potential, while the latter involves the death of every single human being. Extinction would obviously destroy humanity’s potential, but so too would an unrecoverable collapse of global civilization. The existential risk from climate change includes both the risk that climate change kills all humans and the risk that climate change triggers a chain of events (involving war, famine, mass migration, etc.) leading to unrecoverable collapse.

Claim 1: uncertainty in damages is the main source of existential risk

The first claim is the following. Most of the subjective existential risk from climate change comes from the uncertainty about how much damage will be caused by a given amount of warming (Question 3). In other words, it comes more from the uncertainty about what warming level is existential, rather than the uncertainty about how much warming there will be.

Why to believe this

Let’s start with a thought experiment. For simplicity, assume that global warming by 2100 has a 1% likelihood of being above 6 °C and a 50% likelihood of being above 3 °C (these numbers are reasonable, see e.g. here). Let’s say, just for the sake of argument, that climate change will trigger unrecoverable collapse if and only if global warming exceeds 6 °C. Then the existential risk from climate change is 1%, and it arises entirely due to our uncertainty in the warming per unit CO₂ (Question 2).

But if we’re not confident about how much it takes to induce unrecoverable collapse, something interesting can happen. Let’s say we assign a 2% chance to the possibility that 3 °C of warming is sufficient. Now, the subjective risk of climate-change-induced existential catastrophe approximately doubles (1% + [50% * 2%] = 2%)[6], and this increase is entirely due to our uncertainty in the damages caused by warming (Question 3).

 

The same as above, but now we’re unsure about how much warming will trigger existential catastrophe. Even a small chance of a lower threshold (blue) would substantially affect the total subjective risk.

This effect doesn’t depend on how optimistic or pessimistic you are with respect to humanity’s resilience to warming. Let’s say you first think that existential catastrophe will only occur somewhere very far into the tail of extreme warming, and that there is a 1/10,000 chance that warming reaches this level. Now, even a 2/10,000 belief that 3 °C warming could trigger unrecoverable collapse is still enough to double the total existential risk.
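
For concreteness, here is a minimal Python sketch of this arithmetic; every number in it is an illustrative assumption from the thought experiments above, not an actual estimate:

```python
def subjective_risk(threshold_beliefs):
    """Total subjective x-risk for a belief distribution over collapse thresholds.

    threshold_beliefs: list of (p_threshold, p_warming_exceeds_threshold) pairs.
    """
    return sum(p_t * p_w for p_t, p_w in threshold_beliefs)

# Case 1: certain that collapse requires exceeding 6 °C, P(>6 °C) = 1%.
print(subjective_risk([(1.00, 0.01)]))                  # 0.01

# Case 2: also assign 2% credence that 3 °C (P(>3 °C) = 50%) is enough.
print(subjective_risk([(0.98, 0.01), (0.02, 0.50)]))    # ~0.02, doubled

# The doubling works however far into the tail you put the threshold:
print(subjective_risk([(1.0, 1e-4)]))                   # 0.0001
print(subjective_risk([(0.9998, 1e-4), (2e-4, 0.50)]))  # ~0.0002, doubled
```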

So the question then becomes: how well can we constrain where such a collapse threshold might be? We could consider this as a probability distribution, similar to how we think about how much warming there will be per unit CO₂ (climate sensitivity; Question 2). Purely schematically, this could look something like this:


A probability distribution for where the threshold level of warming for collapse could be, analogous to distributions of the warming per unit CO₂. The shape of the distribution is purely schematic.
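
In the same notation as before, if g(T) is such a (schematic) density for the collapse threshold, the total subjective risk becomes a weighted average of exceedance probabilities rather than a single one:

```latex
\mathrm{risk} \;=\; \int_{0}^{\infty} g(T)\, P(\Delta T > T)\,\mathrm{d}T
```

The two-threshold thought experiment above is just the discrete version of this integral.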

But it’s worth noting that this is really different from understanding the climate sensitivity: there we can get nice neat probability distributions because the climate system obeys physical laws that we (mostly) understand very well[7], and because we can draw on the wealth of data about how CO₂ and climate change covaried in the deep past[8]. On the other hand, the dynamics of 8 billion people interacting through a tangled web of social systems are not described by a manageable set of equations, and by necessity there are no past examples of climate change leading to global collapse that we can use to calibrate our models. I’ll get a little more into the details later, but it seems to me that a priori we should be very skeptical of our current ability to constrain such a collapse threshold anywhere near as well as the climate sensitivity. Therefore, the latter uncertainty (i.e. the damages) is probably where much of the existential risk comes from (subjectively speaking).

The argument here is not yet about what kinds of probabilities it’s actually reasonable to assign to existential catastrophe at moderate levels of warming. It’s just arguing that the constraints on this are weak enough — much weaker than those on climate sensitivity — that this is the uncertainty that dominates in calculations of the existential risk. If this is true, one interesting corollary would be that further constraining the climate sensitivity may not be of much use in constraining the extent to which we should believe that climate change poses an existential threat.

How this might be wrong

One way this could be wrong is if the story about some threshold level of warming for existential catastrophe (unrecoverable collapse and/or human extinction) is fundamentally inapplicable here. For example, perhaps there are no such thresholds, or there are only thresholds for collapse in general and it’s something else that determines whether or not civilization can recover. I agree that things are probably not this simple, but right now the threshold story seems to me a useful enough approximation of the real world to try to learn something from it.

Another way this could be wrong is if it’s somehow unreasonable to put non-negligible probabilities on collapse at moderate levels of warming, for example if the existing evidence/literature rules this out with sufficient certainty. I’ll discuss this further in the next section.

Claim 2: Climate change versus other existential risks

The second claim might seem especially provocative to many readers, but please bear with me.

It is not obviously unreasonable to think that the existential risk from climate change is as large (i.e. of a similar order of magnitude) as the risk from AI or pandemics. 

To be clear, the argument is not that this view is correct, just that it is not obviously unreasonable; the distinction will become important. I’ve also purposely made this claim a very strong one: even if you don’t end up agreeing, consider as we go whether you might agree with a weaker version[9].

Why to believe this

Let’s look at some numbers. The Precipice estimates the existential risk[10] from AI at 1 in 10, from engineered pandemics at 1 in 30, and from climate change at 1 in 1000. 80,000 Hours goes even lower for climate change, suggesting that its total contribution to existential risk is “something like 1 in 10,000”.

The “standard approach” of assuming that climate change can only cause existential catastrophe at extreme levels of warming — say much higher than 6 °C — straightforwardly gives probabilities consistent with these estimates. For example, the chance of 6 °C warming by 2100 might be on the order of 1/100[11], and the probability of higher levels of warming is much smaller. If that’s true, it’s clearly unreasonable to argue based on the uncertainty in warming that existential risk from climate change is of order 1/10.

But what about arguing for a large existential risk from climate change based on uncertainty in how much it takes to induce existential catastrophe? As a partial example of such a view, we can use Mark Lynas’s appearance on the 80,000 Hours podcast, in which he suggests that

“[global civilizational collapse has] a 30 to 40% chance of happening at three degrees, and a 60% chance of happening at four degrees, and 90% at five degrees, and 97% at six degrees.”

Now, the 50% chance of warming reaching 3 °C cited in the thought experiment is approximately reasonable[12], and together with a 30% chance of collapse occurring at 3 °C we would have a risk of climate-change-induced collapse of at least 1/10 (0.5*0.3=0.15)[13]. If this collapse is unrecoverable, or even just somewhat likely to be so, the existential risk from climate change would be of the same order — as high as that from AI or pandemics.

How reasonable is it to assign these kinds of probabilities to collapse at moderate levels of warming? (I’ll address the issue of recovery from collapse later.) Many in the EA community seem to disagree[14]. Lynas’s book (Our Final Warning) has also been criticized on the EA Forum for potential misinterpretation of evidence. Given all of this, it may be tempting to just dismiss this view out of hand; however, let’s examine this more carefully.

For convenience, let’s use p_c to denote the probability that 3 °C of global warming can induce global civilizational collapse. What would a good justification for a certain estimate of p_c actually look like? As I noted earlier, understanding the damage caused by a given amount of warming is much harder — and a fundamentally different problem — than understanding the warming for a given amount of CO₂. We can only come up with well-constrained probability distributions for the latter because the underlying physics is well understood (i.e. can mostly be described using relatively simple equations) and we have applicable empirical records of past climate change. But this isn’t the case for the former problem.

What about the economic models of climate change impacts? I don’t claim to be much of an expert on this subject, but you don’t need to be one to be skeptical of how useful these models are for quantifying risks of global collapse (even if they are very useful for other purposes). These models rarely even attempt[15] to consider wars, mass migration, famine, or any of the other processes that would probably be fundamental to a climate-change-induced collapse. A model that is designed so as not to consider process X is irrelevant for quantifying the likelihood of X[16]. Imagine if someone told you “I know that this asteroid isn’t going to hit Earth, because I studied its trajectory in a simulation that doesn’t allow for the possibility of asteroids hitting Earth”.

The purpose of pointing out these weaknesses is not to argue for or against any particular value of p_c. The purpose is to suggest that, given these weaknesses, people’s estimates of p_c are likely substantially (and perhaps dominantly) affected by their intuitions. I don’t mean that people are “just” using their intuitions — one can have detailed discussions about specific causal pathways (war, famine, etc.) and their likelihoods — but that, in the absence of models anywhere near as objective as those used to understand climate sensitivity, estimates of p_c on this basis will still end up being strongly coloured by people’s intuitions.

Some people’s estimates of p_c are very low (this seems to include most EAs who have written on the topic), and some people’s estimates — like Mark Lynas’s — are very high (I’m probably somewhere in the middle). But the key point, if they are indeed mostly based on intuition, is that it’s not immediately clear that any of these views is objectively more justified. While many within EA may favor low values of p_c (and if true, it could be quite interesting to ask why), this means that a pessimistic estimate (even of the “p_c is order 10%” magnitude) is not obviously unreasonable.

Finally, let’s briefly consider the probability that a global collapse is unrecoverable — call this p_u. If the 30% probability of collapse considered initially refers only to collapse more generally, our calculation of existential risk is not complete without including p_u. But estimating this probability seems at least as hard as estimating the probability of collapse in the first place — in which case the above argument applies again! In other words, estimates of p_u also lack good objective constraints, and people’s estimates would likely be strongly driven by their intuitions. With a 50% chance of 3 °C warming and a 30% chance of collapse at 3 °C, a value of p_u = 0.5 will still yield an existential risk of order 1/10 (0.5*0.3*0.5=0.075). You would need great confidence in a small value of p_u to be able to dismiss climate change as an existential risk on this particular basis.
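
To make the sensitivity to these intuition-driven inputs concrete, here is a minimal sketch; the grids of p_c and p_u values are hypothetical, chosen only to span the optimist-to-pessimist range discussed above:

```python
# Existential risk ~ P(3 °C) * P(collapse | 3 °C) * P(unrecoverable | collapse).
# The 50% chance of 3 °C is from the thought experiment; the p_c and p_u
# grids below are hypothetical values, not anyone's actual estimates.

p_3C = 0.5

for p_c in (0.001, 0.01, 0.3):        # chance of collapse at 3 °C
    for p_u in (0.01, 0.1, 0.5):      # chance the collapse is unrecoverable
        risk = p_3C * p_c * p_u
        print(f"p_c = {p_c:<5}, p_u = {p_u:<4} -> x-risk ~ {risk:.1e}")
```

The implied risk spans roughly four orders of magnitude, which is essentially the 1/10,000-to-1/10 spread discussed in this post.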

How this might be wrong

Right now I can think of a few major ways in which this might be wrong. The first and most obvious one is if the case for low values of p_c or p_u can actually be made much more rigorously and conclusively than I have presented it here. In other words, if it can somehow be made clear that civilization is extremely unlikely to collapse at moderate levels of warming and/or is extremely likely to recover from such a collapse. If this is true, my only response is that I’d love to see this!

Next, perhaps I’m demanding too much of “models” in assessing the probabilities of certain outcomes. Among contributors to existential risk, the warming per unit CO₂ is very much an outlier in terms of how easy it is to constrain objectively. Compared to AI and pandemics, the fact that we have economic models at all might imply that we should be less uncertain about how damaging climate change will be relative to those risks. But again, the economic models are still very obviously flawed in terms of understanding existential risk, and I’d love to see much more detailed discussion of this until we have some clarity on what reasonable values of p_c are.

Related to this, maybe there’s some reason why it’s actually fine to rely on low values of p_c or p_u largely from intuition. I’m quite skeptical of this, but open to being convinced.

Finally, there’s a way Claim 2 could be wrong even if human civilization is actually as fragile as the pessimistic perspective suggests. In that case, the existential risk from AI and pandemics could also be much higher than the estimates of 1/10 and 1/30 quoted above (although clearly there’s a limit of 1), and so there could still be a huge difference in importance between these risks and climate change. If this is right, I think there’d still be something interesting here: the fragility itself would seem to be by far the dominant contributor to total existential risk, and we should think of ways to do something about that.

What now?

The motivation I gave at the beginning of this post was to understand the disconnect between climate optimists and pessimists, especially within EA. To help do this, I presented and examined two claims: that the majority of the existential risk from climate change is due to uncertainty in the damages caused by warming; and that it is not obviously unreasonable to think that the existential risk from climate change is similar to that from AI and pandemics.

A key point is that understanding how likely moderate levels of global warming are to cause global collapse is really hard; the same applies to how likely civilization is to recover from such a collapse. Different intuitions on this can end up leading to vast differences in the estimated existential risk from climate change: from the 1/1,000 to 1/10,000 risks given in past EA assessments, to something perhaps as large as 1/10 in the most pessimistic case. For estimates that rely a lot on intuition, it’s hard to assess objectively which position is more correct.

Of course importance is not all that matters; I did not consider neglectedness and tractability. In the case of climate change, part of the reason that it’s relatively less prioritized than AI and pandemics is that it’s considered much less neglected (see 80,000 Hours). This certainly seems reasonable. But it’s also worth noting that in this post we’ve been talking about disagreements about the importance of climate change that span many orders of magnitude, and this could have a substantial effect on relative prioritization regardless of neglectedness and tractability. Furthermore, a new understanding of where the key uncertainties/sources of subjective risk are might open up new opportunities for impact.

Ultimately, regardless of how much you agree with any of this, I hope this post stimulates some useful discussion! I’m interested to hear all of your comments.

Acknowledgments

Helpful comments and discussions were provided by: Emily, Gatlen, Goodwin, Juan, Mira, Sarthak, Xuan. All mistakes are my own.

  1. ^

     This reduces damages from anthropogenic Earth system change to global warming only, which is far from ideal but sufficient for the purposes of this post.

  2. ^

     One classic example of this is the temperature beyond which humans would die of heat stress (see for example Sherwood and Huber 2010: An adaptability limit to climate change due to heat stress). Reaching this limit globally would certainly be very bad for humanity; however, this probably requires warming on the order of 10 °C or greater. This post will focus on the damages that can occur at much lower levels of warming, and so I won’t discuss this again here.

  3. ^

     See Halstead, 80,000 Hours, and The Precipice (Ord, 2020, pages 102-113).

  4. ^

     There are nuances regarding definitions, but they’re not really relevant for our purposes.

  5. ^

     There is certainly a clear upper limit somewhere: if it’s not the heat stress mentioned above, then it’ll be the runaway greenhouse. However, both lie quite far in the long tail of warming (i.e. are quite unlikely).

  6. ^

     There’s a very small amount of double counting going on here, but I neglect it for simplicity. I also haven’t considered P(existential catastrophe) as a continuous function of warming: this would probably just make the effect larger anyway.

  7. ^

     Fluid dynamics, radiation, etc.

  8. ^
  9. ^

     For example: “It is not obviously unreasonable to think that the existential risk from climate change is much larger than current mainstream EA evaluations, even if it’s not quite on the level of AI or pandemics”

  10. ^

     Strictly speaking, the risk of existential catastrophe in the next 100 years.

  11. ^

     See discussion here.

  12. ^

     Again, compare to discussion here, which includes estimates from the IPCC Sixth Assessment Report.

  13. ^

     This even neglects the probabilities he assigns for collapse at warming beyond 3 °C.

  14. ^

     See the literature already discussed (The Precipice, 80,000 Hours profile, the podcast with Mark Lynas, etc.)

  15. ^

     I am not personally aware of any climate change impact models that do (especially widely used ones, which is what matters), but I am not an expert on this and so I might just not have heard of them.

  16. ^

     In fact, this paper shows that the ubiquitous climate economics model DICE is incapable of generating an economic collapse with any (!) level of climate-induced damages. Sound reasonable?

Comments

[anonymous]

Thanks for writing this; I thought it was very clear and so upvoted. I disagree with both claims 1 and 2 and will try to explain why.

I think there are ways to constrain climate damages. This is what the climate economics literature tries to do. That literature has come in for some justified criticism in recent years for using out-of-date literature and being a bit of a mess. However, it remains true that recent studies using recent data that try to add up all of the costs of climate change that people talk about tend to find that the monetised value of the costs of 4 degrees C is equivalent to a 5-10% counterfactual reduction in GDP in 2100 relative to a world without climate change. This is not relative to today, it is relative to 2100. On all plausible socioeconomic scenarios, income per head will have grown by at least several hundred % up to 2100, so average living standards will be higher.

To give one example, Takakura et al add up impacts from

  • Changes in agricultural productivity
  • Undernourishment
  • Heat-related excess mortality
  • Cooling/heating demand
  • Occupational-health costs
  • Hydroelectric generation capacity
  • Thermal power generation capacity
  • Fluvial flooding and coastal inundation

Their model produces the following results. Note that RCP4.5 is now widely seen as 'business as usual', and it implies about 2.7 degrees above pre-industrial. The monetized impact is about 2% of GDP in 2100 relative to a counterfactual without climate change. RCP8.5, which implies about 4.4 degrees (there is <5% chance of this on current policy), implies a monetized cost of 5-10% of GDP.

Which impact channel do you think this model is missing such that its estimates are wrong by 2,000%?

There is one model by Burke et al (2015) that finds that GDP will be 25% lower in 2100, with a 5% chance it will be 60% lower in 2100 relative to a counterfactual without climate change. This is a massive outlier relative to the rest of the literature, but even this model does not find that average living standards would decline. This study tries to calculate the costs of warming using historical data on interannual weather variation, and I personally don't trust it, as I will try to explain in a forthcoming report.

In summary, most economic models project that 4K of warming would do damage equivalent to knocking 5% off GDP in a world in which incomes would be much higher due to economic growth. No models project that living standards would decline relative to today. If your claim were true, one would have thought that at least some models would predict a decline in living standards.

This literature would have to be very dramatically wrong in order to justify the claim that climate change is comparable to biorisk and AI. My views on AI and bio are as follows: I believe that there is >20% chance that civilisation will be completely upended in the next 20-30 years, with outcomes as bad as a >50% decline in GDP relative to today. No-one who has tried to develop a model and add up the costs of climate change according to the latest literature thinks that this is true of climate change. So in my view it is unreasonable to think that climate change is in a similar ballpark. I also think nuclear war risk is far greater. 

Independently, it is just hard to see why 3K would do such massive damage. The world has warmed by nearly a degree since 1980 and average living standards increased by several hundred %. What is the actual mechanism whereby an extra 2K would cause the collapse of civilisation? Civilisations thrive at very different temperatures across the globe - e.g. the southern US is more than 3K warmer than the northern US - and people live through very large seasonal and diurnal temperature changes. What is meant to happen at 3K that would destroy civilisation?

cwa

Thanks for your comments, for the detailed response, and for upvoting on clarity rather than agreement! I'm looking forward to your upcoming report.

I am not enough of an expert on the economic models at this moment for a debate on the detailed ins and outs of the models to be particularly productive. Nevertheless I do have a lot of experience with mathematical modeling in general, and particularly in modeling systems with nonlinear phenomena (i.e. cascading/systemic effects). From this background, I find the complete absence from these models of phenomena that will almost obviously be key drivers in any global collapse (war, mass migration, etc. --- which are notably missing from the list you give) rather disturbing. As I said in the text, there is no way that models excluding phenomenon X can give you a reasonable estimate for the likelihood of X.

Of course things like war and mass migration are missing from the models because they're really hard to model, and so you can't fault the economic modelers for that. But all models are a crude abstraction of reality anyway; what's important is whether, for the scenario being studied, the models describe the real world in any useful way. I gladly concede that economic models are helpful for predicting, e.g. "small" changes in GDP due to climate change, but see no grounds yet for moderating my skepticism on their ability to say anything meaningful about risks of collapse.

I emphasize that these are not good grounds for thinking that the collapse risk is very high, and this is also not the position I am defending! But they are good grounds for being skeptical of the ability for current models to truly constrain the probability of these extreme scenarios.

I think this is too bearish on the economic modeling. If you want to argue that climate change could pose some risk of civilization collapse, you have to argue that some pathway exists from climate to a direct impact on society that prevents the society from functioning. When discussing collapse scenarios from climate most people (I think) are envisaging food, water, or energy production becoming so difficult that this causes further societal failures. But the economic models strongly suggest that the perturbations on these fronts are only "small", so that we shouldn't expect these to lead to a collapse. I think in this regime we should trust the economic modeling. If instead the economic models were finding really large effects (say, a 50% reduction in food production), then I would agree that the economic models were no longer reliable. At this point society would be functioning in a very different regime from present, so we wouldn't expect the economic modeling to be very useful.

You could argue that the economic models are missing some other effect that could cause collapse, but I think it is difficult to tell such a story. The story that climate change will increase the number of wars is fairly speculative, and then you would have to argue that war could cause collapse, which is implausible excepting nuclear war. I think there is something to this story, but would be surprised if climate change were the predominant factor in whether we have a nuclear war in the next century.

Famine-induced mass migration also seems very unlikely to cause civilization collapse. It would be very easy with modern technology for a wealthy country to defend itself against arbitrarily large groups of desperate, starving refugees. Indeed, to my knowledge there has been no analogue of a famine -> mass migration -> collapse-of-neighbouring-society chain of events in the historical record, despite many horrific famines. I haven't investigated this question in detail however, and would be very interested if such events have in fact occurred.

Thanks for the perspective! I agree in part with your point about trusting the models while the perturbations they predict are small, but even then I'd say that there are two very different possibilities:

  1. we can safely ignore real-world nonlinearities, cascading effects, etc., because the economic models suggest the perturbations are small.
  2. the predicted perturbations are small because the economic models neglect key real-world nonlinearities and cascading effects.

As long as we think the second option is plausible enough, strong skepticism of the models remains justified. I don't claim to know what's actually the case here --- this seems like a pretty important thing to work on understanding better.

I don't understand 2. The neglected cascading effects have to cascade from somewhere. You are saying that the model could be missing an effect on the variables in its system from variables outside the system. But the variables outside the system you highlight are only going to be activated when the variables in the system are highly perturbed!

Forgetting the models for a second, if the only causal story to wars and mass migration that we can think of goes through high levels of economic disruption, then it is sufficient to see small levels of economic disruption and conclude that wars and mass migration are very unlikely.

I do not think that "small levels" is necessarily what we see - increased rates of natural disasters have really substantial effects on migration and could produce localized resource conflicts. But those don't seem large scale enough to trigger global catastrophes.

[anonymous]

I agree with Damon's comment. To add to that, in the post, you appeal to Mark Lynas' opinion on the risk of collapse due to climate change. But he thinks civilisation will collapse due to the direct effects.

Another point is that economic models can shed light on how big the indirect effects are meant to be. Presumably if something has larger direct effects, then it will have larger indirect effects, as a rule. If the models are correct and the direct costs of climate change of 2-3C are equivalent to ~5% GDP, that would put it in the ballpark of many other problems that constrain global welfare, like housing regulation, poor pricing of water, underinvestment in R&D, the lack of a land value tax, etc. But few argue that these sorts of problems are a key driver of the risk of nuclear war this century.

Skepticism of the Takakura paper / economic modelling of climate damages

I'm somewhat skeptical of the Takakura paper you mention. First, a 2% loss in GDP relative to the counterfactual at 2.7 degrees just seems way too small. I think this is partly due to this paper, which, whilst a bit emotive, seems to point out some quite major flaws in Nordhaus' methodology for calculating climate damages (e.g. assuming 90% of GDP will be unaffected because it happens indoors); Nordhaus also arrived at 2.1% at 3 degrees of warming. Keen in that paper also seems to think that due to the assumption above, plus using the impact of current temperatures on GDP (as you did at the end of your comment, see below), Nordhaus might be underestimating climate damages by an order of magnitude, bringing it to 20%. I feel like a model with serious methodological flaws (Nordhaus) producing answers similar to Takakura's indicates that Takakura also has things that it hasn't accounted for.

Independently, it is just hard to see why 3K would do such massive damage. The world has warmed by nearly a degree since 1980 and average living standards increased by several hundred %. What is the actual mechanism whereby an extra 2K would cause the collapse of civilisation?

Like above, I think due to nonlinearity in climate impacts/damages, it doesn't make sense to assume that, because a 1.2 degree temperature increase has had little/no impact on living standards, scaling up to 3 degrees of warming will also have very little impact on living standards.

In terms of the mechanism, there are credible IPCC estimates that put forced migration due to climate change between 25-1000 million people, with a potential median of 200 million people by 2050. In my opinion, forced migration of that scale could play a role in societal destabilisation, even though the chances of this happening might only be quite small (e.g. 5%). For example, there is nothing in Takakura about the impact of forced migration, civil unrest, etc., which could also mean they've underestimated the GDP impacts. There's probably many other mechanisms they've not been able to include, and the world is very complex, so it seems overconfident to put a lot of weight on one paper that only models 9 specific impacts.

Uncertainty on GDP being a good predictor of x-risk

Otherwise, I'm not even sure that GDP is the best indicator of the probability of x-risk posed by climate change. Even in Takakura's paper, they say that:

We used the percentage of GDP as an impact indicator but we do not claim it is the best or only indicator to evaluate the impacts of climate change.

For example, one could make a similar claim for AI analogous to what you said about climate:

The world has warmed by nearly a degree since 1980 and average living standards increased by several hundred %.

 namely: "AI and ML algorithms have developed exponentially in the past 50 years, yet living standards have only gotten better. So how could additional improvements in ML or AI lead to existential risk?" 

Whilst this example probably isn't perfect, I think it highlights how the past isn't necessarily a good predictor of future x-risk from certain scenarios. Whilst the AI scenario might lead to more of a rapidly worsening scenario if we develop misaligned AGI, it's feasible that passing climate tipping points would trigger a serious increase in x-risk that economic models or past GDP metrics fail to account for.

[anonymous]

Hi James, thanks for this. 

Note that the Takakura paper was not cherry-picked, as shown in the chart below from the IPCC.

The big outlier in green is Burke, but the structural modelling studies that add up the costs of climate change in different sectors tend to put the costs of climate change at <10%. 

I don't think a criticism of Nordhaus' model is a criticism of Takakura's model. They are quite different. I agree that the Nordhaus stuff is very flawed. Unfortunately, I think that because of these flaws, people have written off the whole of climate economics, which on the whole produces similar findings to Nordhaus, and not all of which is flawed. I checked their references, and Takakura uses up-to-date literature on all of the impact mechanisms.

I agree that they don't cover indirect risks. I would view the literature as a good estimate of the direct costs of climate change, which are less than 10% of GDP. It is still a useful corrective to views like those of Lynas and others that, due to the direct effects on agriculture and the like, the chance of civilisational collapse at 3C is like 30%.

To be clear, they are not measuring costs that would show up in GDP statistics. They are measuring the welfare costs of climate change expressed in monetised terms, so including effects on health, output and so on. 

I take the ding on the AI/ML growth analogy. 

If the claim is that the indirect risks of 3C are large enough to destabilise civilisation because displacement would lead to nuclear war or something, then we can argue the toss on that one (perhaps now?). But if the claim is that the direct costs are sufficient to destroy civilisation, as Lynas and others seem to think, I think that is just clearly wrong.

I want to add another dimension which underlies a huge amount of implicit disagreement: climate change impact timelines versus technological growth and AI / other technology takeoff. In order to see climate as a critical threat, you need the other key sources of risk to be much farther away than we currently expect.

To explain, it seems likely that most of the severe climate impact is post-2050, perhaps closer to 2070, or even later, i.e. likely occurring well after we have passed peak emissions, but before we manage net-negative emissions. But if we have managed to build AGI, powerful nanotech, or nearly-arbitrarily-flexible synthetic biology by then, which is likely, we seem to have only two possibilities - either we're screwed because those technologies go wrong, or fixing climate is far easier because we can achieve an arbitrary level of atmospheric CO2, perhaps via automated AI labor building carbon reduction and carbon capture installations, or via nano- or bio-tech capture of atmospheric CO2. Collapse due to warming is therefore an existential threat only if nothing significant changes in our technological capabilities. But longtermist EAs have spent years explicitly arguing that this is between somewhat and incredibly unlikely.

Still, I think slow technological progress has non-trivial probability. We should cover all our bases, and ensure we don't screw up the climate. But given the lack of neglectedness, I rely on our (thoroughly mediocre but only mostly inadequate) civilization to address the problem, unfortunately far more slowly than would be smart, with catastrophic but not existentially threatening impacts in the coming decades due to our delay in fixing the problem. In the meantime, I'm going to support CO2 mitigation, and thank everyone who is working on this very important area - but still focus my altruistic energy dedicated to maximizing impact elsewhere.

Thanks for this, upvoted! I agree with you that timelines seem like a really important angle that I neglected in the post --- I don't have a fully formed opinion about this yet but will think about it some more.

Thanks for this very thoughtful and well-researched post!

Very much agree with "Claim 1"; this seems to be the largest source of uncertainty and disagreement not only between EAs (e.g. John and I disagree on this even though we agree on the "Good News on Climate Change" directional update), but also among experts generally and in the published literature (the variation in damage functions is larger than that across climate sensitivity & emissions scenarios).

I also agree with a large part of "Claim 2", in particular that until now the estimates on indirect existential risk are not particularly strongly justified (the discussion here is interesting on this).

Great to hear, thanks! Appreciate the link to the discussion, and the points you make --- I definitely agree that there's no reason to think that the direct and indirect risks from climate change are anywhere near the same order of magnitude, and that this is one way an unjustified sense of confidence can creep in.

As I explain in my comment, I really don't think that either claim is the source of most disagreements - the relative timing of AI, nano, and biotech versus climate impact are the real crux.

I think there's a difference between being source of most uncertainty and source of biggest disagreement.

As I understand cwa's "Claim 1" it really just says "the largest uncertainty in the badness of climate change is the level of damage not emissions or warming levels which are less uncertain".

This can be true even if one thinks the indirect existential risk of climate is very low.

Similarly, the core of cwa's second claim does not seem to be a particular statement about the size of the risk but rather that current knowledge does not constrain this very much and that we cannot rule out high risks based on models that are extremely limited and a priori exclude those mechanisms that people worrying about indirect existential/catastrophic risk from climate think contain the majority of the damage.

I'm claiming, per the other comment, that relative speed would be both the largest substantive uncertainty, and the largest source of disagreement.


Despite Claim 1, if technology changes rapidly, the emissions and warming levels which are "less uncertain" could change drastically faster, which changes the question in important ways. And I think claim 2 is mistaken in its implication: even if the risks of existential catastrophe from AI and biorisk are not obviously several orders of magnitude higher - though I claim that they are - the probability of having radically transformative technology of one of the two types is much less arguably of the same order of magnitude, and that's the necessary crux.

This is a really interesting post. The tail risks I've seen considered are generally tail warming scenarios, and once you frame it this way, it obviously makes more sense to focus on tail adaptation scenarios.

I am not sure I'm on board with using Lynas's estimates to frame the debate. I agree we don't have model evidence to base estimates off of, but 30-40% risk of civilizational collapse from 3 degrees seems absolutely insane to me. I could equally say that I estimate the probability at 100% and then say "well it's probably somewhere in the middle". Unreasonably large estimates shouldn't be allowed to frame the discussion just because we can't reject them with model evidence yet.

Here is some weak evidence that tail adaptation risk is not that high in median warming scenarios: the IPCC AR6 report does not emphasize tail risk from adaptation as having a modestly high probability, even though climate scientists are probably the most sympathetic to the idea that it does have a high probability. I'll try to find an exact quote from it about the issue when I'm on my computer...

Thanks for your comments! Some very quick thoughts on your latter two points:

I agree that one should be careful with letting outlier estimates drive the discussion on what the correct estimate is. Nevertheless, I do think that highlighting Lynas's estimate serves two particularly useful purposes here: it highlights that even an estimate this high is hard to concretely refute (as you noted), and it opens up the discussion of the wide range of intervening values (from the "standard" estimates of 1/10000 or 1/1000 all the way to, say, 1/10). While Lynas's estimate might indeed be an outlier, I suspect that a substantial fraction within EA have estimates scattered across this range; you can still think that Lynas overestimated the risk by two whole orders of magnitude and that the "standard" EA estimates are too low.

I also agree with you that climate scientists might be more sympathetic than most to the idea of climate-change driven collapse at median levels of warming, and so the fact that they don't really mention this in the literature is interesting. However I do suspect that this is primarily due to the difficulty of actually studying this using existing modeling frameworks (as discussed above) as well as the way in which "burden of proof" is typically interpreted in science (and especially in IPCC reports, where it seems to be unusually high). I think this makes it all the more important for EA folks to look at in more detail.

[anonymous]

I wouldn't put any weight on Lynas' opinion. As I discussed in my review of his book, he argues that at 3C of warming, agriculture in the US would be completely destroyed. This is a gross misrepresentation of Glotter and Elliot. When I showed them Lynas' interpretation, Joshua Elliot said "Wow. Yeah, that is definitely not the correct interpretation." When I showed this to Lynas, he didn't care. 

This one misunderstanding alone is highly significant - he incorrectly thinks that the world's second largest food producer will produce zero food at 3C. This is a clear misunderstanding. In fact, all of the projections that take into account likely agricultural progress suggest that US food production will increase over the 21st century despite climate change. 

Yup, I totally agree it's important for EA people to look at it in more detail. My impression was that IPCC reports survey expert priors, which do not have to be solely based on existing modelling frameworks, so I don't know if that is a huge barrier. But I will defer to you on how the report is made.

I am not sure of the value of the two benefits you describe. If most experts had priors scattered fully over the range up to 30-40%, then I would be very shocked and would update my beliefs a lot. I just feel like most priors are very very far from this upper bound and that there is a false equivalence being implied by discussing this whole range.

Similarly, if the purpose is to show that a number can't be concretely refuted, I don't think that is valuable. The burden of proof is on someone making the claim, not vice versa. If I claim that the risk of human extinction this century is 80%, you would have a hard time concretely refuting me. But that wouldn't lend any credence to my argument because I haven't established it in any way.

My knee jerk reaction was also that 30-40% was an unreasonably high number, however, I kind of disagree with "Unreasonably large estimates shouldn't be allowed to frame the discussion just because we can't reject them with model evidence yet" since it's hard to know what numbers are reasonable in the first place. It also provides a good starting point to the discussion and to challenge my assumptions on why I automatically want to reject the 30-40%.

What makes 30-40% a better starting point than 90%? Or 5%?

We talked about this at EAGxBoston and I'm so glad you posted this. Something from our conversation that you don't mention here but I think is also relevant is the problem of modeling the effects of systemic species loss and the cascading effects across ecosystems.

Hmm, well, I see the consequences of climate change as extreme and stark this century, but of course not leading to human extinction; that's a step too far.

If I could offer a thought experiment, proposed by an interesting philosopher: What is the smallest change in climate, or one-off event, some minor hiccup, that could lead to the destruction of human civilization because of humanity's response to it?

Under the narrowest idea of "could", virtually any disruption in food or water supply could cause it, especially in nuclear conflict zones like India-Pakistan or South Korea-North Korea. But what seems more likely to cause it would be a much larger event.

Thanks for the post! The climate issue is something I keep on coming up against when I try to think seriously about other long-term possible x-risks. This seems particularly concerning given the uncertainty of the models you describe here. I'm not as across all of the studies as other commenters here, but it just strikes me that if cause prioritisation were going to be externally influenced, climate would be the big-ticket topic that we are most likely to be led astray by.

What strikes me with this issue in EA, more than any other, is the reliance on our own intuition/politicisation in deciding what level of risk we ascribe to it. Understandably so, given the high level of mainstream media and political coverage this issue has seen. I don't want to be provocative, but with this issue more than others I think EA community members need to be especially cognisant of the models (or authors) we tend to agree with, and think critically about where we might be employing cognitive dissonance and other heuristics.

Thanks for this post, agree with other comments that it's very well written and clear, and on first reading I agree with the core message even if some of the specific points/evidence offered may be debatable (e.g. discussions in the comments re: Lynas). Upvoted!


I want to draw attention to one major issue with the analysis that also permeates the discussion of climate change as an x-risk, both elsewhere and here in the comments.

'Following, e.g., Halstead, it is instructive to split the question of climate change damages into three numbered questions'

'This reduces damages from anthropogenic Earth system change to global warming only, which is far from ideal but sufficient for the purposes of this post.'

The reduction of the climate, environmental and ecological crises to only GHG and warming is a major issue within existing sustainability literature, public discourse and governmental policies. These crises and in particular their risks & risk transmission pathways go far beyond this narrow focus on GHG and warming, and ignoring the other ways in which compound human activity is stressing various life-critical earth systems prevents us from having a clear understanding of how our physical environment relates to x-risk and GCRs. 

What are we talking about here? 

If we want only to understand whether warming could lead to x-risk, then that is an entirely different question to whether or not current human activity and its sum consequences on the natural world (could) present an x-risk/GCR. I would argue that the former is an interesting question, but it ignores that the crises we're facing are not only limited to warming, but involve unprecedented changes (in speed) and pressures across a number of life-critical systems.

We're at risk here of not seeing the whole board, and that's a big problem when we're talking about how far we should prioritise these crises within EA.

Some of the various other dynamics that are not included in discussions limited to warming include eutrophication & arable land loss, biodiversity loss & ecosystem degradation, fresh water use & scarcity, and air pollution.

And that's not including the various social challenges that are fundamentally linked to these crises.


A second point - you've focused on importance here. I would argue that there is a major case to be made for neglectedness too in terms of targeting the most important/potentially impactful 'solutions' or interventions we can make to address these crises. 

An example: It's often estimated that around 95% of carbon offsets are avoidance credits, which do nothing to offset scope 1 & 2 emissions. We need that money to flow into removal credits instead, but the initial investment is not there yet to make removal technologies competitive and scalable enough to be sold in the offset market (this is beginning to change, but too slowly). 

A second example: Taking some of the IPCC mitigation options as starting points, the shift to bikes and e-bikes & electric light and heavy vehicles has the potential to reduce emissions by 0.2 and 0.8 GtCO2eq/year, compared to 1.1 for energy efficiency improvements and 2.9 for ecosystem restoration, but they received 2.9bn and 12bn respectively in VC funding over 24 months compared to just 0.7bn and 0.2bn for energy efficiency and restoration over the same period.

Sustainability is full of misallocated funding and a lack of evidence-driven intervention... it is not guaranteed that societies at large (governments, publics and markets) will effectively address and solve this challenge, and EA could have a role to play here.


A note: I see a lot of interesting discussion about economic modelling in the comments - based on experience in my current role working with investors and transitions experts to create scenarios based on these kinds of models, my impression so far is that they are not comprehensive in accounting for the full scope of expected impacts across socio-ecological-technological systems (something that investors themselves consistently report and are looking to rectify), but I will spend some time reading the original papers and models before responding to those comments specifically.

Thanks for these great points! I agree that these are both things that should be looked at further.

Thanks for the post. I agree broadly and would also note we do a bad job of thinking through long-tail risks associated with climate change. Climate exacerbates the polycrisis (rise of Caesarism, challenges of mass migration 10-100x what we see today). There are also potential cataclysmic effects for human biology, like breaking the 98.6 °F first line of homeostatic defense. https://alltrades.substack.com/p/mushrooms-global-warming-and-the?r=hynj&utm_campaign=post&utm_medium=email&utm_source=twitter
