by [anonymous]

[Important Edit: I have realised there was an error in my model of how much we will emit, as I used the wrong measure of carbon intensity (CO2/$ rather than CO2/kWh). Consequently, I now use a simplified form of the Kaya identity. This suggests that the risk of extreme warming is higher than I initially said. Thanks to Johannes Ackva for pointing this out.]

Understanding the probability of extreme warming of more than 6, 8 or 10 degrees is highly consequential for understanding how we should prioritise climate change relative to other global catastrophic risks. How hot it will get depends on:

● How much we emit

● How sensitive the climate is to emissions

Here, I construct a model of each of these uncertain questions. I conclude that:

  1. Assigning a probability distribution over a broad range of possible ‘business as usual’ scenarios up to 2100, and using what I believe to be the most plausible estimate of climate sensitivity, I find the probability of eventual warming of more than 6 degrees to be around 6%, and of more than 10 degrees to be around 1 in 1,000.
  2. Assigning a probability distribution over a broad range of possible ‘business as usual’ scenarios up to 2200, on the same estimate of climate sensitivity, I find the probability of eventual warming of more than 6 degrees to be around 16%, and of more than 10 degrees to be around 1%.

This suggests a lower risk of extreme warming than other leading estimates, such as Wagner and Weitzman’s. This is due to differences in priors over climate sensitivity. Nonetheless, the probability of extreme warming is uncomfortably high, and strong mitigation remains imperative.

There are two forces here pushing in different directions. On the one hand, many estimates of climate sensitivity are too high due to the faulty use of Bayesian statistics. On the other, focusing only on the most likely 'business as usual' pathway ignores the downside risk of higher-than-expected emissions due, for example, to surprisingly fast economic or population growth. Overall, it looks as though the risk is lower than some leading estimates, but still worth worrying about.

I am grateful to Johannes Ackva and Will MacAskill for thoughts and comments. Mistakes are my own.

1. How much will we emit?

How much we emit depends on choices we make. When we are trying to understand how bad climate change could be, I think it is most useful to try to understand how much we will emit if things roughly carry on as they have been doing over the last 20 or 30 years. This gives us a baseline or ‘business as usual’ set of scenarios which allow us to understand how much danger we are in if we don’t make extra efforts to decarbonise relative to what we are doing at the moment.

Existing literature

There are estimates of how much we are likely to emit in the literature. Rogelj et al (2016) provides a good overview of the literature:

[Figure: median projected emissions under no-policy, current-policy and INDC scenarios, with scenario ranges, from Rogelj et al (2016)[1]]

The bars here show the median estimate of emissions across different emissions scenarios, and the vertical black lines show the range due to scenario spread - though it is unclear from the text what confidence interval this is supposed to depict. INDCs are Intended Nationally Determined Contributions that countries have made in accordance with the Paris Agreement.

The range from the bottom of the conditional INDC scenario to the top of the no-policy scenario spans 2 trillion to 7 trillion tonnes of CO2. This is equivalent to the span from the bottom end of RCP4.5 (the medium-low emissions pathway) to the top end of RCP8.5 (the high emissions pathway). Median cumulative emissions on current policies are 3.5 trillion tonnes of CO2, which is about the middle of RCP6.0 (the medium-high emissions pathway). You can check how cumulative emissions correspond to emissions pathways with this table:

[Table: cumulative CO2 emissions corresponding to each RCP pathway, from IPCC AR5 WG1[2]]

My own model of likely emissions

It remains somewhat unclear from the Rogelj et al (2016) estimate how probability should be distributed across these scenarios. How plausible is a global no policies scenario, for example? Thus, I have constructed a model myself which tries to give a plausible probability density function across a range of emissions scenarios. To do this, I have given different estimates of the three parameters in the Kaya Identity:

Total cumulative CO2 emissions are the product of three factors: (1) human population, (2) GDP per capita, and (3) carbon intensity (emissions per $).
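
In symbols (a schematic form; the full Kaya identity further splits carbon intensity into energy per $ and CO2 per unit of energy):

$$\text{Cumulative CO}_2 \approx \sum_t P_t \times \left(\frac{GDP}{P}\right)_t \times \left(\frac{\text{CO}_2}{GDP}\right)_t$$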

My estimate of these parameters:

● Uses existing estimates of the likely trends in these parameters over the century, where available

● Where these are not available, extrapolates from the trends in the parameters of interest over the past 30 or so years.

The model is here. It includes:

● Three estimates up to 2100 of the likely range of business as usual emissions.

○ One is based on extrapolating growth in GDP per capita from the last 30 years.

○ Another is based on the Christensen et al expert survey of forecasts of economic growth.

○ A third assumes an AI explosion leading to growth of 10% per year.

● One estimate of emissions to 2200 which extrapolates GDP per capita growth from our experience over the last 30 years.

The results are here:

(Note results will vary in the real model depending on when you refresh the model.)

There is large uncertainty about the parameters that make up the Kaya Identity, and this produces large uncertainty about business as usual. For example, 2% annual economic growth produces an economy roughly 5 times larger in 2100 than today, whereas 3% growth produces one roughly 10 times larger. The 95% confidence interval for UN population projections stretches from 9.5 billion to 13 billion people. This is why there is such uncertainty about how much we will emit, assuming that we make no extra effort to reduce emissions.
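
To make the structure concrete, here is a minimal Monte Carlo sketch in the spirit of the Guesstimate model. It mirrors the simplified Kaya identity above, but every number in it is an illustrative placeholder of my own, not the model's calibrated input:

```python
import numpy as np

# Minimal Monte Carlo over the simplified Kaya identity.
# All parameter values are illustrative placeholders.
rng = np.random.default_rng(0)
n, years = 100_000, 80                         # 2020 -> 2100

pop_2100 = rng.normal(10.9e9, 0.9e9, n)        # people; roughly the UN's 9.5-13bn 95% CI
gdp_growth = rng.normal(0.02, 0.01, n)         # annual growth in GDP per capita
intensity_fall = rng.normal(0.02, 0.01, n)     # annual decline in CO2 per $ of GDP

pop_0, gdp_pc_0, intensity_0 = 7.8e9, 17_000.0, 0.27e-3   # people, $/person, tCO2/$
emis_0 = pop_0 * gdp_pc_0 * intensity_0                   # ~36 GtCO2/yr today
emis_2100 = (pop_2100
             * gdp_pc_0 * (1 + gdp_growth) ** years
             * intensity_0 * (1 - intensity_fall) ** years)

# Crude trapezoid: average the start- and end-of-century annual emissions.
cumulative = (emis_0 + emis_2100) / 2 * years
print(f"median:   {np.median(cumulative) / 1e12:.1f} trillion tonnes CO2")
print(f"95th pct: {np.percentile(cumulative, 95) / 1e12:.1f} trillion tonnes CO2")
```

Even with independent, roughly symmetric inputs, the multiplicative structure produces a long right tail in cumulative emissions.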

Emissions and CO2 concentrations to 2100

● The median business as usual scenario to 2100 is the medium-high emissions pathway (in the RCP6 range), which is roughly the same as what happens if countries continue on current policy.

○ This corresponds to atmospheric CO2 concentrations of about 700ppm.

● The upper 5% bound of cumulative emissions is beyond the RCP8.5 range.

Emissions and CO2 concentrations to 2200

If we continue on current trends up to 2200, then median cumulative emissions are 11 trillion tonnes of CO2, and there is a 5% chance of more than 31 trillion tonnes (which is bad news).

Flaws in the model

This model assumes that the parameters in the Kaya Identity are independent, which is false. So, the model should be taken with something of a grain of salt. Nevertheless, I do think it is useful for giving a fairly plausible range of uncertainty about what we could emit without making extra effort to mitigate.

2. How hot will it get?

The relationship between CO2 concentrations and warming is logarithmic: at least within a certain range, each doubling of concentrations produces the same amount of warming. Equilibrium climate sensitivity measures how much the planet warms after a doubling of CO2 concentrations, once the climate system has reached equilibrium. There is uncertainty about the true equilibrium climate sensitivity.
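
In symbols, eventual warming under this logarithmic relationship can be written as follows, where $S$ is the equilibrium climate sensitivity, $C$ the eventual concentration, and $C_0 = 280$ ppm the pre-industrial baseline:

$$\Delta T = S \cdot \log_2\!\left(\frac{C}{C_0}\right)$$

For example, 700ppm is $\log_2(700/280) \approx 1.3$ doublings, implying roughly 4 degrees of eventual warming if $S = 3$.

The IPCC does not give a formal probability distribution over equilibrium climate sensitivity, but instead states: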

“Based on the combined evidence from observed climate change including the observed 20th century warming, climate models, feedback analysis and paleoclimate, as discussed above, ECS is likely [>66% chance] in the range 1.5°C to 4.5°C with high confidence. ECS is positive, extremely unlikely [<1% chance] less than 1°C (high confidence), and very unlikely [<10% chance] greater than 6°C (medium confidence).”[3]

Lamentably, this leaves the nature of the right tail of climate sensitivity very unclear. In Climate Shock, Wagner and Weitzman discuss how to convert this into a probability distribution. They end up positing that the underlying distribution is lognormal,[4] which suggests a distribution over climate sensitivity that looks like this:
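
As a rough illustration of what such a fit involves (my own reconstruction, not Wagner and Weitzman's exact calibration), one can pick lognormal parameters so that the IPCC's likely range of 1.5 to 4.5 degrees covers the central 66% of the distribution:

```python
import numpy as np
from scipy import stats

# Calibrate ln(ECS) ~ Normal(mu, sigma) so that 1.5C and 4.5C sit at the
# 17th and 83rd percentiles -- the central 66% "likely" range in IPCC terms.
lo, hi = np.log(1.5), np.log(4.5)
z83 = stats.norm.ppf(0.83)                      # ~0.95
mu, sigma = (lo + hi) / 2, (hi - lo) / (2 * z83)

ecs = stats.lognorm(s=sigma, scale=np.exp(mu))
print(f"median ECS:   {ecs.median():.1f} C")    # ~2.6 C
print(f"P(ECS > 6 C): {ecs.sf(6):.1%}")         # ~7%, i.e. a fat right tail
```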

This is a heavy tailed distribution which, as we shall see, leaves us with a high chance of extreme warming.

The influence of uniform priors

I think this estimate of climate sensitivity and others like it are flawed. As far as I can tell, the heavy right tail produced in many IPCC estimates of climate sensitivity is entirely a product of the fact that these posterior distributions are updated from a uniform prior over climate sensitivity with an arbitrary cut-off at 10 degrees or 20 degrees. I checked some of the papers for IPCC models of climate sensitivity that have a long tail and they either: explicitly use a uniform prior which makes a large difference to tail behaviour,[5] or do not say whether or not they use a uniform prior (but I would guess that they do). When this is combined with the likelihood ratio from the data and evidence that we have, we end up with a posterior distribution with heavy right tails.

However, as Annan and Hargreaves (2011) have argued, the use of a uniform prior is unjustified. Firstly, climate scientists use these priors on the assumption that they involve “zero information”, but this is not the case. Secondly and relatedly, the cut-off is arbitrary. Why not have a cut-off at 50 degrees?

Thirdly, it is not the case that before analysing modern instrumental and paleoclimatic data on the climate, we would rationally believe that a doubling from pre-industrial levels of 280ppm to 560ppm would be equally likely to produce warming of 3 degrees or 20 degrees. In fact, before analysing modern data sets, scientists had settled on a 67% confidence range of between 1.5 and 4.5 degrees in 1979, and this likely range has barely changed since.[6] As Annan and Hargreaves note:

“This estimate was produced well in advance of any modern probabilistic analysis of the warming trend and much other observational data, and could barely have been affected by the strong multidecadal trend in global temperature that has emerged since around 1975. Therefore, it could be considered a sensible basis for a credible prior to be updated by recent data.”[7]

Arguments from physical laws also suggest that extreme values of 10 degrees or 20 degrees are extremely unlikely.

If we use a more plausible prior based on an expert survey by Webster and Sokolov, and update this with a likelihood ratio from modern data sets, the resulting posterior 95% confidence interval for climate sensitivity is 1.2–3.6 degrees.

For the sake of sensitivity analysis, Annan and Hargreaves also update using a prior following a Cauchy distribution with greatly exaggerated tails. This prior is quite extreme. It implies a probability of climate sensitivity exceeding 6 degrees of 18% and a probability of more than 15 degrees of 5%. It seems likely that the experts in 1979 without access to modern data sets would have thought this implausible. The posterior upper 95% confidence bound from this prior is 4.7 degrees. The influence of different priors is shown here:

[Figure: posterior estimates of climate sensitivity under different priors, from Annan and Hargreaves (2011)[8]]
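
To see the mechanism in a toy example: suppose the data constrain the total feedback $f$, with sensitivity $S = S_0/(1 - f)$. The likelihood then flattens out at high $S$, so under a uniform prior the posterior tail mass scales with wherever the cut-off happens to be placed. A grid-update sketch (all numbers are illustrative placeholders, not real observational constraints):

```python
import numpy as np

# Toy grid Bayes: one flat-tailed likelihood updated under three priors.
# S = S0 / (1 - f); the "data" are a stand-in Normal constraint on f.
S0, f_hat, f_sd = 1.2, 0.6, 0.13                 # placeholder observations
s = np.linspace(0.5, 20.0, 4000)                 # sensitivity grid, degrees C
ds = s[1] - s[0]
like = np.exp(-0.5 * ((1 - S0 / s - f_hat) / f_sd) ** 2)

priors = {
    "uniform, cut-off 10C": (s <= 10).astype(float),
    "uniform, cut-off 20C": np.ones_like(s),
    "expert-style lognormal": np.exp(-0.5 * ((np.log(s) - 1.0) / 0.5) ** 2) / s,
}
for name, prior in priors.items():
    post = prior * like
    post /= post.sum() * ds                      # normalise on the grid
    print(f"{name}: P(S > 6C) = {post[s > 6].sum() * ds:.2f}")
```

The likelihood is identical in all three cases; only the prior changes, and with it the size of the tail.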

How hot could it get?

In the second Guesstimate model, I have modelled the implications of these different estimates of climate sensitivity for how hot it could get unless we make extra effort to reduce emissions. I have taken the estimates of cumulative emissions from the first model and converted them into the corresponding 95% confidence interval for CO2 concentrations.[9] This gives us the correct 95% confidence interval, which is the main thing we are interested in because we want to know the tail risk. Unfortunately, it doesn’t give us the right median. (I’m not sure how to resolve this in Guesstimate.)

Emissions to 2100

For simplicity, I just report results from the estimate of emissions that extrapolates from the Christensen et al economic growth forecasts.

● On the Wagner and Weitzman estimate of climate sensitivity, there is about a 15% chance of more than 6 degrees of eventual warming, and a 1% chance of more than 10 degrees.

● On the Webster estimate, there is a 6% chance of more than 6 degrees of warming, and a 0.1% chance of more than 10 degrees.

● On the Cauchy estimate, there is a 14% chance of warming of more than 6 degrees and a 3% chance of more than 10 degrees.

Of these estimates, I think the Webster prior is the most plausible, and this suggests that the chance of more than 6 degrees is markedly lower than Wagner and Weitzman’s estimate, on business as usual.

Emissions to 2200

If we assume that we will continue business as usual past 2100:

● On the Wagner and Weitzman estimate of climate sensitivity, there is about a 24% chance of more than 6 degrees of warming, and a 6% chance of more than 10 degrees.

● On the Webster estimate, there is a 16% chance of more than 6 degrees of warming, and a 1% chance of more than 10 degrees.

● On the Cauchy estimate, there is a 22% chance of warming of more than 6 degrees and a 4.3% chance of more than 10 degrees.

Conclusions

Climate Shock by Wagner and Weitzman is one of the best treatments of the importance of tail risk for climate change. Still, I think the very high tail risk suggested by Wagner and Weitzman’s model is a result of the mistaken use of uniform priors in IPCC models of climate sensitivity. If we just take the most likely emissions scenario without extra effort, the chance of more than 6 degrees is <1%, whereas Wagner and Weitzman estimate that it is >10%.

However, this effect is offset by the risk that emissions are much higher than expected. Once we account for the full distribution of plausible 'business as usual' scenarios, the risk of more than 6 degrees is 6%: still lower than Wagner and Weitzman’s estimate, but certainly worth worrying about. The chance of 10 degrees of warming is in the 1 in 1,000 range. 10 degrees has been posited as a plausible threshold at which climate change poses a direct existential risk.[10] This suggests that the direct existential risk of climate change remains a concern.

If we fail to get our act together by the 22nd century, cumulative emissions could be truly massive, with all the civilisational strife this entails.

Will MacAskill has also pointed out to me that if there is an AI explosion, energy demand will increase massively. I am not sure what this implies for how promising climate change is to work on as a problem.

Finally, it is worth mentioning that these estimates of climate sensitivity exclude some potentially important carbon cycle feedbacks. However, as I argue here, the median view in the literature is that these feedbacks are much less important than anthropogenic CO2 emissions. That said, these feedbacks are understudied, so there is likely considerable model uncertainty.

[1] Joeri Rogelj et al., “Paris Agreement Climate Proposals Need a Boost to Keep Warming Well below 2 °C,” Nature 534, no. 7609 (June 30, 2016): 635, https://doi.org/10.1038/nature18307.

[2] IPCC, Climate Change 2013: The Physical Science Basis: Working Group I Contribution to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, ed. T. F. Stocker et al. (Cambridge University Press, 2013), 27.

[3] IPCC, 84.

[4] Gernot Wagner and Martin L. Weitzman, Climate Shock: The Economic Consequences of a Hotter Planet (Princeton: Princeton University Press, 2015), 182–83.

[5] Roman Olson et al., “A Climate Sensitivity Estimate Using Bayesian Fusion of Instrumental Observations and an Earth System Model,” Journal of Geophysical Research: Atmospheres 117, no. D4 (February 21, 2012), https://doi.org/10.1029/2011JD016620; Lorenzo Tomassini et al., “Robust Bayesian Uncertainty Analysis of Climate System Properties Using Markov Chain Monte Carlo Methods,” Journal of Climate 20, no. 7 (April 1, 2007): 1239–54, https://doi.org/10.1175/JCLI4064.1.

[6] J. D. Annan and J. C. Hargreaves, “On the Generation and Interpretation of Probabilistic Estimates of Climate Sensitivity,” Climatic Change 104, no. 3–4 (February 1, 2011): 429–30, https://doi.org/10.1007/s10584-009-9715-y.

[7] Annan and Hargreaves, 429–30.

[8] Annan and Hargreaves, 431.

[9] This conversion is based on Malte Meinshausen et al., “The RCP Greenhouse Gas Concentrations and Their Extensions from 1765 to 2300,” Climatic Change 109, no. 1–2 (November 1, 2011): Table 4, https://doi.org/10.1007/s10584-011-0156-z.

[10] Martin L. Weitzman, “Fat-Tailed Uncertainty in the Economics of Catastrophic Climate Change,” Review of Environmental Economics and Policy 5, no. 2 (July 1, 2011): 275–92, https://doi.org/10.1093/reep/rer006; Steven C. Sherwood and Matthew Huber, “An Adaptability Limit to Climate Change Due to Heat Stress,” Proceedings of the National Academy of Sciences 107, no. 21 (May 25, 2010): 9552–55, https://doi.org/10.1073/pnas.0913352107.

Comments

Thanks -- this is great and very useful, good to have a tractable model of the chain from emissions to warming!

A couple of questions, comments, and ideas on the emissions trajectory path of the model, very much in constructive spirit.

Happy to help implement them if you think this would be useful and grateful for any clarifications!

I. What is the meaning of “extra effort” and, relatedly, what can we conclude from this analysis?
You write that the model is about constructing a baseline if we do not pursue “extra effort”. It is not quite clear to me what would constitute “extra effort” here; this is not only an issue with your model but with similar attempts as well (e.g. whether RCP 8.5 represents business-as-usual or not). But this seems key for knowing how to use this. From what you write, the main goal is cause prioritization at the current margin, but it seems to me that the emissions trajectory model more closely resembles a worst case that would bound “how bad can it get?”. I will explain in the remainder why I think that. Also, just to clarify, I understand that this is a rough and simple model, though it seems to me that the simplifications tend to upwards-bias the estimate.

II. Carbon intensity assumptions seem closer to worst case than “no extra effort”

The main variable that has past effort flowing into it (assuming we haven’t -- at scale -- foregone growth or kids for climate, which seems really unlikely for the last 30 years) appears to be carbon intensity and this is modeled based on the average trend from the last 30 years (right?). If that is so, then I think there are many reasons to think that this estimate is on the pessimistic side, because:

(a) The overarching impact on carbon intensity over the past 30 years has been China’s unprecedented rise in coal consumption, which has weakened the carbon intensity decline due to decoupling in much of the OECD. China is not growing as fast anymore and it does seem, absent the growth shock scenarios via AI or other growth explosions (which you account for separately), rather unlikely that another similarly sized region will experience such strong fossil-driven growth.

(b) For much of the last 30 years, climate policy has not existed in earnest in most parts of the world (The intensity variable goes until 2014, which makes this even more true). This is different now. While climate policy is much weaker than we would like it to be, it is not non-existent anymore.

(c) Relatedly, the last 10 years have seen technological breakthroughs that have not materialized in their impact on carbon intensity yet but that will drive down carbon intensity and that arguably have not been business as usual but rather the result of active climate policy -- primarily, cheap renewables (at least for low levels of penetration) and electric mobility. If one thinks that past carbon intensity trends have mostly been driven by forces other than climate policy, then the existence of climate policy should have an additional effect on carbon intensity (not necessarily emission totals) that we should be able to see going forward.

(d) There are many more of those in store (which we both heavily advocate for :)) that would likely not happen or happen later absent climate concerns, e.g. advanced nuclear, CCS, cheap power-to-gas etc. We would hope those things to happen sooner rather than later, but I don’t think it is right to think that “no extra effort” implies that those things will never happen or that they happen at the same rate and with the same impact on carbon intensity in which they have happened before. [Counterpoint: Long-run low oil prices]

(e) Even if we were to think that there are other “Chinas” in store that would drive up both growth and carbon intensity for a couple of decades, experiences with long-run growth trajectories seem to point towards lower carbon intensity.

III. Worst-case implications of assumed independence

As you write, the assumption of independence is for simplicity and clearly false, but I do think the effect of this simplification likely leads to a significant over-estimation of climate damage for several reasons:

(a) The independence assumption means that high growth and high carbon intensity over the 21st century are treated as just as likely as low growth and low carbon intensity. But, over such a long time frame, (i) Kuznets curve effects, (ii) affluence-induced postmaterialism and environmental concern, (iii) more low-carbon than high-carbon innovation, (iv) post-industrialization etc. (not fully independent reasons, hinting at the same process) would all seem to suggest that a high growth case would lead to a lower carbon intensity over the long run, i.e. some moderation that should make the extreme case of high growth and high carbon intensity leading to extreme warming less likely.

I find it hard to think of a substantive relationship between long-run growth and carbon intensity that would make this more likely (but would be good to think about more, maybe I am too positively biased in my expectation here!).

(b) I would also think that for the AI-growth scenarios, the relationship between growth and carbon intensity would probably be strongly negative. If everyone gets a lot richer when there are lots of AI breakthroughs, this might push emissions up in total, but at least in terms of intensity a lowering seems more likely than not (because AI helps with resource efficiency, with innovation and with realizing preferences, and most rich people do not enjoy pollution for its own sake, so even if they don’t give up on consumption there will be some positive incentive for resource efficiency and resultant low carbon intensity). [I am more unsure about this point than the other ones]

(c) Over the time-frame considered, we will be able to observe warming and at least narrow our understanding of climate sensitivity somewhat. As such, assuming that a high-emissions high climate-sensitivity case is as likely as a low-emissions low climate sensitivity case (i.e. assuming independence) seems quite pessimistic, at least when one -- as you tend to suggest (and I agree) -- puts low probability on such abrupt and catastrophic climate change that reaction is impossible.

IV. Other aspects that make this model close to a worst-case

Aside: From what I can tell, the model does not include emissions from land use, land-use change and agriculture. E.g. you write that emissions are 35 GtCO2/year, while with non-energy emissions it is closer to 50 Gt/year. Luckily, many of those are short-lived pollutants, so we can somewhat discard them for the present analysis, but forestry and land-use change seem quite important for mid-range scenarios of climate progress.

(a) As far as I can tell, the model assumes zero negative emissions, while almost all models of successful climate mitigation assume a significant role of negative emissions and there is strongly increasing interest in this space and no fundamental reason to think that all methods of storing carbon will always be expensive. So, this seems fairly pessimistic, especially for those cases of most concern -- assuming high growth, high climate damage and no finding of negative emissions tech seems really unlikely.

(b) The model also assumes zero geo-engineering. But I find it quite hard to imagine a world where we have (i) high growth, (ii) high climate damage and (iii) no geo-engineering (maybe if Russia threatens nuclear war against parties manipulating the climate, but that is the only case I can think of). I tend to think of geo-engineering as somewhat bounding the badness of climate (and then inviting the potential badness of geo-engineering). What makes you think otherwise?


In any case, great post and looking forward to your thoughts!

[anonymous]

Hi Johannes, thanks for this.

1. By extra effort, I have in mind that we are on a trend line of effort on climate change, and extra effort would be a diversion from this historical trend. One rough proxy for this would be the global average marginal carbon price, which has crept up from about $0 to about $2 per tonne over the last 15 years. If in the next 10 years the global average marginal carbon price were to increase to $40, that would be a diversion from the trend line of climate effort.

2&3. These are all specific factors that a more precise estimate might take into account. It would be worth playing around with the estimate to see what effect these have on predicted emissions. It's worth noting that the rough 'follow the last 30 years' approach does produce the same median as other much more sophisticated models. Also note that if you play around with the model and put in much higher projected reductions, you do get a median that is more like RCP4.5, but the tail risk of much higher than expected emissions remains.

Reasons to think carbon intensity might reduce beyond trend

  • a. China is anomalous both in terms of growth and reliance on coal. (This doesn't update me that much. The growth factor should be accounted for in the other parameter, and as long as coal is the cheapest energy source, our default assumptions should be that poor countries will rely on it to escape poverty. This is worsened by general equilibrium effects - declines in demand for coal in the West will reduce the cost for everyone else.)
  • b. Climate policy is kicking off. (I don't put much weight on this. Climate action is still very weak across the world and there are horrible political factors that push against strong changes in the trend)
  • c. Technologies in store that should help with carbon intensity. The most obvious ones are solar and wind, which follow an exponential cost reduction curve. If these start to play the role that some people predict, then we might see carbon intensity reduce below the trend line. (I personally would be surprised if these ever get to more than 30% of electricity worldwide, which is still a big deal. The super-pro-renewables people might expect more like 80% of electricity. But perhaps our default should be something like what we're witnessing in Germany, where they are trying very hard to push solar and wind but it's not having much of an effect on the carbon intensity of GDP.)
  • d. I doubt CCS would happen on a large enough scale without significant diversions from the trend in climate effort, since it costs >$30 per tonne. This is less true, but still true, for advanced nuclear. The only countries that have pushed nuclear to a significant extent have done so for reasons of energy security. So, the reference class isn't that promising.
  • e. Yeah I agree with these points about assumed independence. The main caveat I would have is that political coordination is much harder than expected.
  • f. I agree that we have time to learn about high climate sensitivity so there is also an interaction there which reduces overall tail risk.

Reasons to think carbon intensity might increase beyond trend

  • g. Maybe political coordination is much harder than we expect. Maybe there is an arms race and people give up on climate.

All of this suggests that the estimates of carbon intensity decline might be biased a bit upwards. The most important factors seem to be decline in costs of renewables, electric cars and potentially advanced nuclear, as well as factors e and f.

4. I don't put much weight on the point about the absence of negative emissions (unless we discover a very cheap form of it). If negative emissions remain at >$50/tonne, I don't see them having much of a role in the 'no extra effort' scenario.

Yeah I think it's most useful to think about what will happen if solar geoengineering is not an option, as this will allow us to figure out the potential benefits of solar geo from an existential risk point of view. I agree that solar geo could be a useful backstop, though I don't see it getting deployed unless something quite extreme happens, and I would view the governance challenges of deployed solar geo as a major way in which climate change contributes to GCRs.

Hi John,

Thanks for the clarifications and responses!

Regarding your points:

1. Thanks for clarifying the meaning -- so it is not a worst case, but more a baseline where extra effort would be going beyond what we currently see.

It still seems to me that what you model is significantly more pessimistic than that.

I think average marginal carbon prices are not a good proxy of overall climate policy effort, because carbon prices are usually not the (i) only climate policy, (ii) mostly not the dominant climate policy (possible exceptions of Sweden and British Columbia, but those are both negligible jurisdictions in terms of emissions) and (iii) other much stronger policies exist and drive carbon intensity reductions.

E.g. we both mention renewables, electric mobility and advanced nuclear as (potentially) important influences on carbon intensity trends, yet none of those has been brought about by carbon pricing policies, but by innovation and deployment policy. Across Europe, progressive states in the US, and China, we have fairly aggressive policies to stimulate low-carbon tech, often with implied carbon prices (technology specific and realized via subsidies) in the 100s USD/tCO2 range.

So, I think even without extra effort, there are significant efforts underway to drive cost differentials down, at least for electric power and light-duty transport, and that is very clearly the result of climate policy (plus air pollution policy).

This is far from enough, but I don’t think it is well-proxied by the state of average carbon pricing policy.

2.

a. On China: Yes, the growth factor is in the growth parameter, but it is *also* in the intensity parameter as a weight: in the same period in which China rose quickly by burning lots of coal, its economic importance (i.e. its weight in defining the trend) also increased strongly.

I would agree that we should expect developing countries to escape poverty as cheaply as possible, though the other aspect there is that the sheer centralized action capacity and population size are anomalous for the Chinese case. Plus, availability and price of natural gas and renewables have somewhat changed since China’s decision to go all the way with coal.

b. Climate policy kicking off: I think we are talking about different things here. Yes, global climate policy is very weak and I would agree with you that we should, for example, not necessarily expect a change in trajectory from the Paris Agreement.

But despite that, strong climate policy exists in some places and will affect carbon intensity once championed technologies do scale. And this is new; it has not been reflected in carbon intensity yet, but it likely will be.

c. Technologies in store: (I actually think the most significant technology for this to date will be electric mobility.) But even if it is solar and wind, I don’t think that “what solar and wind have done in Germany so far” is a good proxy for “what the technologies accelerated by some governments will do worldwide”, because (i) Germany isn’t very sunny, (ii) we phased out nuclear at the same time (genius, I know!), and (iii) we are already experiencing value deflation which most parts of the world will reach significantly later. (iv) Plus, the share of electrification and thereby the impact of low-carbon electric sources will already increase in a “no extra effort” case (v) And we are still in the beginning of seeing the impact of those technologies globally (the data from which you extrapolate the intensity ends in 2014).

d. New technologies in store: CCS and advanced nuclear both might or might not happen and I hope we can make them more likely to happen and happen faster, but at least for Europe and progressive parts of the US carbon prices in the range of USD 50 by 2030 (or comparable non-price policies) are part of my prediction of “no extra effort”. I agree with the relative evaluation of CCS and advanced nuclear.

e. Political coordination: I think both your and my “no extra effort” case assume essentially zero political coordination. When you assume carbon intensity trends going forward based on the last 30 years (and those end in 2014, i.e. pre-Paris), where there was very little coordination on emissions (in the grand scheme of things, Kyoto doesn’t really matter), there being even less coordination might be a plausible worst case, but just assuming continued no coordination should not change the estimate much. Likewise, I think your estimate is pessimistic not because I am more optimistic about global coordination, but because I think you underplay the non-coordinated-but-present efforts by some governments to change relative cost. If they have some effect, then carbon intensity declines in the future should be higher than in the last 30 years as a matter of default no-extra-effort-prediction.

g. Breakdown of cooperation / arms race: I agree with that. That should widen our range of estimates; I am not sure it should shift the median much (though it would shift the mean).

4. Negative emissions: As discussed above, I think also in the no-extra-effort scenario there is significant effort to enable low-carbon tech, and it seems a fairly pessimistic assumption that by the end of the century we will not have at least some cheap negative emissions tech (not necessarily enough to offset all emissions, but significantly more than having no effect in expectation). This is not the world I am seeing when I see what the UK, EU and progressive governments in the US are doing to further technological development. We are not in a world where no one is trying to make low-carbon solutions succeed and get cheaper.
And in particular, it seems hard to imagine a world with high climate sensitivity, high growth and no one attempting to bring down the cost of negative emissions approaches.

This seems quite at odds with typical dynamics of higher problem severity and higher capability driving a more active search for solutions, of which negative emissions are attractive because they can still work after we failed on having foresight early on and avoid some of the more unpredictable risks of geo-engineering.

On geo-engineering: You seem to answer a different question here, the value of geo-engineering. But if the question of the model is, “how hot will it get?”, then I think it makes sense to make an explicit assumption about when you would expect it being used based on empirical expectation.

In terms of conclusion

You write:

“All of this suggests that the estimates of carbon intensity decline might be biased a bit upwards. The most important factors seem to be decline in costs of renewables, electric cars and potentially advanced nuclear, as well as factors e and f.”

I think that downplays the issue and it conflates two distinct effects as if they affected the same variable (carbon intensity), which they do not.

From your list, a-d (and g?) are responses to effects on carbon intensity (in my list, the points under II).

Points e-f from your list and the issues under III in my list affect the probability that all four variables driving warming (population, GDP per capita, carbon intensity, climate sensitivity) vary in the same direction with regard to their effect on overall warming, which is probably less likely (we agree on that) and which will thereby have an effect on expected warming quite different from the potential upward bias in carbon intensity.

This latter point is very different from arguing for a mean/median change in carbon intensity decline rate.

As you suggest, I will try to play around with the model a bit and see what the effects of these different assumptions are. Thanks for the good discussion!

[anonymous]

2a. On China, I don't think population size matters for the carbon intensity of GDP - that should mostly be accounted for in the population parameter. I agree that gas and renewables are cheaper, and that might be a reason that emerging economies won't use as much coal.

b. It doesn't seem to matter that much that strong climate policy exists in some places, e.g. Sweden. What seems to matter is whether there has been a notable change in global climate policy over the last 5 years that renders just extrapolating from the last 30 years especially unreliable. I don't see anything that would justify that.

c. Yeah, that's fair. Renewables are a reason to think that intensity might decline below trend.

d. The places where we might get $50 per tonne carbon prices are less than 10% of global emissions to 2050, on current trends. Historical experience suggests that some extremely wealthy left-wing countries might impose high carbon prices in the next 30 years. Unfortunately, this won't have much of an effect on global emissions. There might be some places that impose high carbon prices, but they will cover a small fraction of emissions, just following the trend of the last 5, 10 or 30 years.

e. I don't think they assume zero political coordination. Political coordination would increase on the trend it has been doing over the last 5, 10 or 30 years.

4. It might be that there are cheap negative emissions, but this doesn't seem to me in the most likely range of scenarios (barring ocean fertilisation or something being good, which I don't know much about). It's worth pointing out that median carbon intensity ends up being pretty low on the estimate I give - it is a quarter of carbon intensity today, and the upper end of the 95th percentile is a tenth of carbon intensity today. This is with climate action proceeding at the same dismal pace as it has over the last 30 years.

5. It would be weird to model geoengineering in this model. It seems to make much more sense to think about geoengineering as one of the solution tools you could use given where emissions might go. If you start thinking about the probability of solar geo in this model, the emissions wouldn't matter anyway, the temperature would. If you think there is a 10% chance of solar geo, then you would have to model a 10% chance of there being 2 degrees of warming. I think it is much easier to think about solar geo separately from this model.

Re 2a, China, what matters is the degree to which it has influenced global average carbon intensity. It is difficult to think of an event more impactful on global average carbon intensity than the coal-fuelled boom of the second-largest country population-wise, at least as long as the estimate of carbon intensity is global (population and GDP/capita matter here as time-increasing weights for carbon intensity).

Re 2b, the state of strong local climate policy matters insofar as it gives reason for global carbon intensity decline going forward, and the initiatives of California and EU countries on electric mobility and renewables have been very decisive changes, begun in the past 20 years but with most of their impact in the future.

Re 4, it seems pretty likely to me that we will figure out some negative emissions options that are cheap: there will be strong reasons to try, there are many natural and technological approaches, and there is still time for progress on that. But I can't offer you more than that as justification; I guess I just have a different prior for that.

Re 5, this might be more about semantics, then. I agree it would not be natural to build this into the model (though the way you suggest would work), but I also think that for scenarios with more than 2 or 3 degrees of warming, expectations about geoengineering will drive a significant part of the answer to your question of how hot it will get.

Something I forgot to mention in my comments before: Peter Watson suggested to me it's reasonably likely that estimates of climate sensitivity will be revised upwards for the next IPCC, as the latest generation of models are running hotter. (e.g. https://www.carbonbrief.org/guest-post-why-results-from-the-next-generation-of-climate-models-matter, https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2019GL085782 - "The range of ECS values across models has widened in CMIP6, particularly on the high end, and now includes nine models with values exceeding the CMIP5 maximum (Figure 1a). Specifically, the range has increased from 2.1–4.7 K in CMIP5 to 1.8–5.6 K in CMIP6.") This could drive up the probability mass over 6 degrees in your model by quite a bit, so could be worth doing a sensitivity analysis on that.

[anonymous]

Ah, I didn't know that, thanks, I haven't followed the literature that closely over the last year. I'll put that into the model.

On a side note, that does seem high, and doesn't seem like it would fit with the observational data for the last 200 years very well.

Cloud formation was the biggest unknown feedback loop, and efforts to model it more accurately have led to the increase in range. The effects only start at unprecedented levels of warming, which is why the observational data may not fit.

https://e360.yale.edu/features/why-clouds-are-the-key-to-new-troubling-projections-on-warming

[anonymous]

Right, that's bad news.

Peter here - so actually I'd say this isn't clear now - here's some recent work for example suggesting that estimates of future warming won't change much compared to those from the previous set of models once recent observed warming is used as a constraint i.e. those newer models with higher sensitivity seem to warm too fast compared to observations e.g. https://advances.sciencemag.org/content/6/12/eaaz9549 . Well, the models are only one piece of evidence going into the overall estimate anyway. I don't follow the literature on this closely enough to be confident about what the IPCC will actually conclude.

TL;DR: Assuming everything can be fit with a linear trend completely overwhelms the importance of working out what that trend is in these extreme cases, so while instructive for median behaviour, I don’t believe this approach is sufficient to assert anything about tail probabilities.

It’s good to see so much work summarised in one page, but the cost of this is rigour. I agree with the problems with using ECS as mentioned above, and add that, since these trajectories do not result in net zero CO2 emissions at 2100, it’s not even a good approach to estimate the temperature in 4100 (in the imaginary world where really slow and hard-to-model things stay the same, but easy-to-model things don’t, except CO2 [1]). It’s also worth noting that TCRE normally assumes a linear CO2-T relationship rather than logarithmic, although this is disputed [2], and not really designed for changes of many degrees. A similar problem exists for carbon intensity. You assume an exponential decay, but so far we’ve seen a pretty linear one. (This would imply negative emissions will eventually happen on their own!) While it’s good that you put so much effort into probability distributions of these values, it doesn’t help if you’re wrong about the equation they go in.

Regarding constant carbon intensity improvements (geometric, linear or otherwise) and extra effort, I’m not really clear what you’re proposing needs to be conserved – a conservation of the current level of effort into decarbonisation, or a conserved rate of change of effort into decarbonisation (since we’ve clearly been putting more effort in recently). It feels like you’re implying a constant effort derivative, i.e. slowly increasing carbon price and legislation.

You (and many others) complain the IPCC does not report extremes of the ECS PDF, then complain about what they are. The IPCC specifically makes a point of not quoting values for these extremes because there isn’t any consensus on it. We do not have > 95% confidence that the full simulations aren’t missing some big factor, in the same way we missed the breakdown of the ozone layer until after it was observed. The presence of the ozone hole, and various other weird new atmospheric chemistries, places similar limits on our confidence in paleoclimate data, as does the unprecedented rate of CO$_2$ release [3]. This does indeed increase the importance of the priors, which is why the fact we can’t agree on them is so problematic. By this point I don’t think it’s possible to disentangle true priors from decades of simulations, back-of-the-envelope calculations and climate history, and since we want to use all of these factors later, none of them can be considered truly prior. The degree of agreement between old and new estimates of ECS is interesting but irrelevant, since it doesn’t include those tails.

Your final point, that the ‘median view’ is that Earth system feedbacks are less important, is inconsistent with the degree of rigour shown elsewhere in the article. You aren’t interested in the median view of these, you’re interested in the 95th percentile views. And that should feature some of these ZOMG WE’RE GOING TO DIE!!!1 papers.

[1] Beyond equilibrium climate sensitivity, Knutti et al 2017 http://iacweb.ethz.ch/staff/mariaru/BeyondEquilibriumClimateSensitivity/KnuttiRugensteinHegerl17.pdf

[2] Implications of non-linearities between cumulative CO2 emissions and CO2 -induced warming for assessing the remaining carbon budget, Nicholls et al.

https://iopscience.iop.org/article/10.1088/1748-9326/ab83af/pdf

[3] Anthropogenic carbon release rate unprecedented during the past 66 million years, Zeebe et al.

https://www.nature.com/articles/ngeo2681

What does an eventual warming of six degrees imply for the amount of warming that will take place in (as opposed to due to emissions in), say, the next century? The amount of global catastrophic risk seems like it depends more on whether warming outpaces humanity's ability to adapt than on how long warming continues.

[anonymous]

I agree this is important. I'll try to get to this when I have a bit more time.

Very interesting! I wanted to note that this further supports Will's comment on his recent post that understanding prior-setting better could be very high-impact.

Thanks for this analysis, it's very interesting. You might find it simpler and more accurate to go straight from emissions to warming using the transient climate response to cumulative carbon emissions (TCRE) rather than climate sensitivity, though (see https://en.wikipedia.org/wiki/Transient_climate_response_to_cumulative_carbon_emissions ). A problem with using ECS is that it gives you the warming that occurs after Earth has reached equilibrium with a given CO2 concentration. However, in reality, the CO2 concentration won't stay constant once we've stopped emitting, but will decline as it is slowly taken up by the Earth system. The result found in many Earth system models is that temperatures rise linearly with emissions and once emissions stop, temperatures also stop rising, rather than rising to reach the value implied by the ECS for the peak concentration value (at the point when emissions stop). (Though, temperatures would still rise further if the ECS were very high, since the Earth would be experiencing a much larger radiative forcing in that case.) So I think this would reduce the chance of high temperatures a bit.
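
For a rough sense of scale (illustrative numbers: the IPCC's likely TCRE range is about 0.8 to 2.5°C per 1000 GtC, i.e. roughly 0.2 to 0.7°C per 1000 GtCO2):

$$\Delta T \approx \mathrm{TCRE} \times E_{\mathrm{cum}} \approx \frac{0.45\,^{\circ}\mathrm{C}}{1000\,\mathrm{GtCO_2}} \times 3500\,\mathrm{GtCO_2} \approx 1.6\,^{\circ}\mathrm{C}$$

of transient warming from the median pathway's emissions this century, on top of warming already realised from past emissions.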

[anonymous]

Note the change in substance above.

Great post, and great point about the priors! I have a question about how to use/interpret these which I'd love help with from you or someone else who understands this better than I do.

Can I draw implications of your models about emissions scenarios as defined by the IPCC?

First, can I take the first model to indicate something about how likely various emissions pathways (e.g., RCP 6) are if we take little 'extra action'? e.g., on the "JH extrapolation" version of business as usual that we're 95% likely not to reach above the mean RCP 8.5 emissions scenario (6180 Gt), 70% likely not to reach above the mean RCP 6.0 scenario (3885 Gt), etc? (all by 2100)

Second, can I take your second model to indicate something about how much warming we'd get if we were to reach those emissions scenarios? So if RCP 6.0 is the 70th percentile outcome of business as usual (on the 'JH extrapolation' version), can we then take the 70th percentile of the probability density function for one of the sensitivity assumptions (say, the Webster one) for how hot it will get on that version of business as usual + that sensitivity assumption to get the amount of warming predicted for RCP 6.0 -- i.e., 3C?

[anonymous]

Hi Arden,

1. Yes that's right. (Which scenario is better calibrated is up in the air. The Christensen et al one is an expert estimate but economists are very bad at predicting growth next year, so it's not clear that an expert survey is better than just extrapolating from the last 30 years.)

2. Yes, that is what the second model is *trying* to do, but note the qualification that it is mainly trying to estimate the tail risk, i.e. to get the 95% confidence interval right. In Guesstimate, the median concentration isn't right (given the first model), but the 95% CI is. However, it would be quite easy to make a model estimating the chance of a certain level of warming conditional on a particular level of CO2 concentrations. I have added an example of this to the bottom of the second Guesstimate model. If you think, following Rogelj et al, that the most likely current policy scenario is 700ppm, then the 95% confidence interval for warming is 1.6 to 5 degrees, with a median of 2.9 degrees. The chance of more than 6 degrees is about 1%. This shows the effect of priors - on Wagner and Weitzman's estimate, the chance of >6 degrees is more like 10%.
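
For anyone who wants to reproduce that conditional calculation, here is a rough sketch. Fitting a lognormal to the Webster-style posterior's 95% CI of 1.2 to 3.6 degrees is my own assumption (the Guesstimate model may parameterise it differently), so the numbers only approximately match those quoted above:

```python
import numpy as np
from scipy import stats

# Warming conditional on 700ppm: S * log2(C / C0), with a lognormal ECS
# fitted to an assumed 95% CI of 1.2-3.6C (the Webster-style posterior).
lo, hi = np.log(1.2), np.log(3.6)
mu, sigma = (lo + hi) / 2, (hi - lo) / (2 * 1.96)
ecs = stats.lognorm(s=sigma, scale=np.exp(mu))

doublings = np.log2(700 / 280)                  # ~1.32 doublings from 280ppm
warming = ecs.ppf([0.025, 0.5, 0.975]) * doublings
print("2.5th/50th/97.5th pct warming:", np.round(warming, 1))  # ~[1.6, 2.7, 4.8] C
```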

Thanks, this is helpful!

This is one of those findings that, once it's laid out clearly, seems so simple and important that you wonder why no one did this before. So great science.

Is it right that the AI scenario is an extension in the Guesstimate model, and doesn't connect to your extrapolation of cumulative emissions? To me it seems more likely than not that the rapid growth in the AI scenario would result in part from AI-driven technological progress in a swathe of economic sectors, including energy, and that this could substantially drive down carbon intensity.

[anonymous]

Yeah, I think the AI explosion scenario should be taken with several piles of salt. It gets to the point that we need to be consistent about our expectations about AI timelines and about climate change. As you suggest, AI-driven progress could drive down carbon intensity, but climate change remains a horrible coordination problem, so it's not clear that AI progress alone overcomes that.

Also note that your estimate for emissions in the AI explosion scenario exceeds the highest estimates for how much fossil fuel there is left to burn. The upper bound given in IPCC AR5 (WG3.C7.p.525) is ~13.6 PtC (or ~5*10^16 tons CO2).

Awesome post!

[anonymous]

Haha yes, thanks Matthew, that's a good spot!

So, the thought is that we would have some non-trivial probability mass on burning all the fossil fuels if there is an AI explosion. My best guess would be that this makes working on AI better than working on marginal climate stuff, but I'm not sure how to think about this yet.

I wasn't thinking about any implications like that really. My guess would be that the Kaya Identity isn't the right tool for thinking about either (i) extreme growth scenarios; or (ii) the fossil fuel endgame; and definitely not (iii) AI takeoff scenarios.

If I were more confident in the resource estimate, I would probably switch out the AI explosion scenario for a 'we burn all the fossil fuels' scenario. I'm not sure we can rule out the possibility that the actual limit is a few orders of magnitude more than 13.6 PtC. The IPCC cites Rogner 2014 for the figure. In personal communication, one scientist described Rogner's previous (1997) estimate as:

a mishmash of unreliable information, including self-reported questionnaires by individual governments

It would be great to better understand these estimates — I'm surprised there isn't more work on this. In particular, you'd think there would be geologically-based models of how much carbon there is, that aren't so strongly grounded in known-reserves + current/near-term technological capabilities.
