Dec 03, 2017
This is a writeup of a finding from the Causal Networks Model, created by CEA summer research fellows Alex Barry and Denise Melchin. Owen Cotton-Barratt provided the original idea, which was further developed by Max Dalton. Both, along with Stefan Schubert, provided comments and feedback throughout the process.
This is the final part of a multipart series of posts explaining what the model is, how it works, and our findings. We recommend you read the ‘Introduction and user guide’ before this post, as it gives the necessary background on our model. The series has the following structure:
Introduction & user guide (Recommended before reading this post)
Technical guide (optional, description of the technical details of how the model works)
Findings (writeup of all findings)
Climate catastrophe (this post)
Many people in the Effective Altruism community think that the value of the far future, and the vast number of potential future humans that could exist there, mean that a top priority is working to prevent existential risks that could wipe out humanity or significantly curtail our future potential.
By far the majority of the research into such risks currently being conducted by the EA community is on the risks from superintelligent AI, with some additional work on risks from synthetic biology and nuclear war. When listing examples of existential risks (and the closely related global catastrophic risks, or GCRs), the potential of runaway climate change turning out to be a GCR is also sometimes mentioned. However, there seems to have been comparatively little work on estimating the likelihood of runaway climate change, or the chance of it being a GCR, despite climate change being one of the better-studied and more data-rich areas outside of the EA community. There also seem to be very few EAs working on object-level problems in climate change.
When working on the Causal Networks Model we considered climate change as a variable due to its interconnectedness: it is affected by many of the actions we take, and it itself affects many of the outcomes we care about. We also attempted to estimate climate change’s influence on existential risk and integrated this into the model, leading to this post. However, the arguments and conclusions laid out below do not require or rely upon the rest of the model.
The IPCC’s 2015 climate models predict an approximately 10% chance of 6+ degrees of warming by 2100 under mid-to-high emissions scenarios, with 8 or more degrees of warming being hard to rule out due to the nature of the uncertainty in the models.
These levels of warming, whilst unlikely to cause anything like human extinction directly, nevertheless have the potential to be a GCR. This is because they could cause very significant changes in agricultural productivity, rendering much currently farmed land barren, as well as increasing the number and severity of many kinds of natural disasters, amongst many other effects.
The combined impact of all these simultaneous stressors being applied globally does not seem to be well studied, but it appears plausible that these have a >20% chance of acting as a GCR and leading to the effective destruction of the global economy.
Once in this state of 6+ degrees of warming and a collapsed global economy, it again seems plausible (although very uncertain) that the inhospitality of the new climate would render humanity permanently unable to recover to our current level of technological and civilisational sophistication. This would then act as a “Loss of potential” x-risk.
Whilst the latter two stages of the argument are quite speculative, this is no worse than the case for other existential risks, and it seems hard to defend a <0.1% chance of existential risk from runaway climate change before the late 2100s, with estimates as high as a few percent also seeming reasonable.
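The three stages of the argument above multiply together, and a short sketch makes the implied numbers concrete. The first two probabilities come from the text; the non-recovery figure is an illustrative assumption, since the post deliberately leaves that stage uncertain.

```python
# Rough multiplication of the three stages of the argument.
# Only the first two figures come from the text; the third is illustrative.
p_extreme_warming = 0.10    # ~10% chance of 6+ degrees by 2100 (IPCC, mid-to-high emissions)
p_gcr_given_warming = 0.20  # plausible chance such warming acts as a GCR
p_no_recovery = 0.25        # ASSUMED chance civilisation never recovers; highly uncertain

p_xrisk = p_extreme_warming * p_gcr_given_warming * p_no_recovery
print(f"Implied existential risk: {p_xrisk:.2%}")  # Implied existential risk: 0.50%
```

Varying only the assumed non-recovery probability reproduces the range defended above: at 5% the product is 0.1%, and at near-certainty it is 2%.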
The fact that runaway climate change has a significant chance of being an existential risk raises a number of important implications:
1. Anything that increases the risk of runaway climate change (e.g. emitting more CO2) should be considered to be damaging on existential risk scales. This is in contrast to most existential risks where almost all ‘unrelated’ activities do not affect the risk: for example, you should not expect any of your day-to-day activities to influence the chance of global nuclear war.
One particular implication is that any activity one expects to cause net CO2e emissions and not correspondingly reduce existential risk in some other way should be considered to be likely to have a significantly negative impact. As well as things such as driving and jet travel, this could potentially also apply to activities currently considered robustly good in other ways, such as donating money to global poverty charities, or improving the welfare of farmed animals, both of which seem likely to increase CO2e emissions. (See ‘cage-free costs’ in Part III for an elaboration of the latter point)
2. Conversely, decreasing the risk of runaway climate change (for example, by researching potential geoengineering solutions or donating to Cool Earth) could potentially be an effective way to reduce existential risk. Whether or not there is comparative value in becoming a researcher in this area seems to depend to a large degree on whether you expect conventional climate change research to adequately cover the tail risks.
There also seems to be a particular appeal to this sort of action, because the arguments for runaway climate change as an existential risk seem less speculative than those for some other existential risks; most of the uncertainty comes from the likelihood of a GCR leading to extinction. Therefore if you were convinced of the value of preventing GCRs but sceptical of the value of research in these areas, reducing emissions might fill an ethical niche.
3. Due to the comparative strength of the arguments for runaway climate change as an existential risk, and the relatively concrete estimates of its probability, it seems like a good candidate to be used as an example when introducing the concept of existential risks.
There seem to be good arguments in favour of runaway climate change as a potential risk. Although one might consider other existential risks higher priority due to greater likelihood, neglectedness or proximity, runaway climate change has a couple of unique features that seem worth exploring. The first is that the interconnected nature of climate change means that many innocuous-seeming acts may be predictably increasing existential risk. The second is the relative neglectedness of climate change within Effective Altruism.
This concludes our series of posts on the Causal Networks Model - we hope they have been informative. If you are interested, as mentioned in Part I, you can access the model yourself to see how different assumptions affect the results.
Feel free to ask questions in the comment section, or email us (email@example.com or firstname.lastname@example.org).
Defined as events that would kill at least 10% of the population of the Earth.
As discussed on page 279 here https://scholar.harvard.edu/files/weitzman/files/fattaileduncertaintyeconomics.pdf
There does not seem to be very good discussion on this that I could find, but see e.g. https://www.greenfacts.org/en/impacts-global-warming/index.htm for a (clearly motivated) elaboration of the impact of 4 degrees of warming. Very extreme cases are also considered briefly in .
This seems to be the main weakness in the argument, and a place where people seem to disagree reasonably significantly. Whilst the arguments are fairly robust to different estimates of humanity’s likelihood of recovering, if one thinks humanity is very likely (95%+) to recover, then the argument loses significant bite compared to other existential risks.
Discussed under “2.2. Permanent stagnation” here http://www.existential-risk.org/concept.html
‘Speculative’ may not be quite the right word here; I am more trying to convey that the type of risk here seems to be somewhat qualitatively different to that in (say) the AI risk case. In the climate change case most experts agree roughly on the probability of the bad outcome; we just have empirical uncertainty. This is opposed to the AI risk case, where there is significant disagreement among experts about the level of risk. It thus seems that there should perhaps be some outside-view considerations or similar that favour the climate change case.