
Climate change is likely to kill a lot of people.[1] It is unlikely to kill all people. Longtermists worry about human extinction, typically more than they do about anything else. Accordingly, some longtermists present risks of human extinction in contradistinction to the risks of climate change, sometimes arguing that even a small risk of extinction is worse than the near certainty of major climate change. The (longtermist) Future Fund’s areas of interest do not include climate change, and other EA funders have spent much less on climate change than on other causes. According to a new, book-length longtermist report, “the risk of human extinction from the direct effects of climate change now seems extremely small… other problems are more pressing than climate change.” 

Skeptics about longtermism have seized repeatedly on this relative dismissal of one of the signature challenges of our time. This blogpost argues that longtermists can and should take climate change very seriously, by putting more money and creative effort into investigating and fighting it and some of its worst repercussions. The reason is the multiple bad lock-in effects that climate change could have on (1) human survival under so-called runaway scenarios, (2) human survival under the likeliest climate change scenarios, and (3) human life-quality and longevity under the likeliest scenarios, a matter which, I shall argue, longtermists have more reason to heed than they currently recognize. Finally, the blogpost distinguishes between strong and weak forms of longtermist dismissal of climate change and explains what is wrong with both.

(1) Climate change and human survival under runaway scenarios

In runaway scenarios, a factor augmented by global warming increases that warming, which further augments that factor, which further increases warming, and so forth. For example, the atmosphere comes to contain enough greenhouse gas to block thermal radiation from leaving Earth, preventing the planet from cooling; plants that might have taken carbon dioxide out of the atmosphere die of heat, then rot, in a vicious cycle that causes greater net emissions and more heat and plant death. The result: much faster and more extreme global warming than under other scenarios, a largely uninhabitable planet, and, very possibly (though not certainly), human extinction.

Runaway scenarios are far less likely than the likeliest scenarios. But their likelihood is not zero, or infinitesimal. Multiple respectable climate scientists have argued that their likelihood is high or, more recently, that canonical documents’ silence about them mainly reflects those documents’ optimistic biases. The chance of runaway scenarios is probably smallish. But longtermists, in particular, take smallish chances of complete disasters very seriously, in virtue of the large number that results from multiplying a smallish fraction by an astronomical number.
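To make that multiplication concrete, here is a minimal sketch in the usual expected-value style. Both figures are hypothetical placeholders of mine, not estimates drawn from any cited source:

% Illustrative expected-value calculation; both figures are assumptions.
\[
\underbrace{10^{-3}}_{\substack{\text{assumed probability of a}\\ \text{runaway extinction scenario}}} \times \underbrace{10^{16}}_{\substack{\text{assumed number of}\\ \text{potential future lives}}} \;=\; \underbrace{10^{13}}_{\text{expected future lives lost}}
\]

On numbers like these, even a one-in-a-thousand risk dwarfs, in expectation, harms that are certain but bounded.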

(2) Climate change and human survival under the likeliest scenarios

Even under the likeliest scenarios of climate change, there are grave “indirect” risks—risks that emanate from possible human reactions to climate change. Temperatures in New Delhi reached 49°C this summer. In parts of Rajasthan, they reached 51°C a few years ago. If lethal heat and wet-bulb temperatures recur and spread, as they are likely to do, populations will not stay around to die. Nearly everyone who can migrate probably will. Millions may also migrate in Pakistan when a third of the country is again flooded. Globally, climate change “could force 216 million people... to move within their countries by 2050.” The political instability from such migration could develop in many ways. In nuclear-armed nations like India and Pakistan, it could be a pathway to existential risk. Attempted mass migration to nuclear-armed neighbors Pakistan or China is another possible scenario. And climate change also means that “the Indian subcontinent is expected to face a severe water shortage – spiking the risk of Indo-Pakistan downstream river politics escalating into a water war.” A nuclear exchange between any two among India, Pakistan, and China could beget a larger nuclear war, nuclear winter, and civilizational collapse or even extinction.

It is true that climate-related movement across international borders is, globally, far less likely than climate-related movement inside countries. But even internal migration could create political instability that leads to war; and cross-border migration is likelier in nuclear-armed South Asia than elsewhere, partly because in tropical regions, escaping increased temperatures will tend to require long travel. More broadly, several nations that are already “fragile” and likely to experience extreme temperatures in coming decades are also nuclear-armed (or have the capacity to produce the worst bioweapons). And there are additional pathways for climate change to increase the risk of a nuclear war. Geopolitical developments could go in many different directions following climate change’s upheavals. So confidence that the radical and sometimes abrupt changes wrought by climate change are unlikely to result in catastrophic military developments seems misplaced.

There is another reason why climate change risks human survival even under the likeliest scenarios. Climate change would interfere with our ability to address extinction risks. States burdened by climate change would struggle to expend effort on preventing other catastrophic risks, yet “state capacity is important… for coordinating policy for social benefit,” including the prevention of catastrophes. Nations incensed about climate injustices are even less likely to promote global efforts to prevent catastrophes. For example, “Helping to draft the Bioweapons Convention becomes far less tractable if nations are unwilling to cooperate” because of climate change. Pressing needs for development, another factor with which climate change will interfere, were probably the main obstacle to successful completion of negotiations on nuclear disarmament. Finally, thus far, “the absence of nuclear war was less due to a lack of potential causes, than [sic] the global political system’s ability to defuse them.” And climate change could easily undermine that ability.

In rich countries as well, climate-related reductions in economic growth over the coming decades and centuries will not only suppress quality of life but also delay the development of technologies to counter extinction risks. Climate change thereby worsens differential technological progress, itself a factor in extinction risk.

All in all, indirect effects make major climate change an existential risk even outside runaway scenarios. 

(3) Climate change, life-quality, and longevity under the likeliest scenarios

Even short of extinguishing our species, climate change is highly likely to spread punishing temperatures, rising sea levels, familiar and new diseases, chronic killer storms, fires, and floods, and much else. Importantly, some torturous and lethal developments would “last a very long time.” For example, temperatures are expected to rise and remain high. Hurricanes, floods, and wildfires are expected to remain chronic. In fact, “Some of the ecological effects of climate change get worse over time.” The same could not be said about the direct harms from many of the worst viruses, for example; their morbidity and mortality would disappear in the generation after they cease to circulate.

When longtermists worry about the lock-in of bad effects that fall short of extinguishing the species, their focus is usually bad political developments, like long-term dictatorships by all-powerful humans or AI. There is also interest in long-term good effects, such as humans becoming more compassionate. A far likelier locked-in bad effect is the possibility that life becomes worse because climate change impairs our natural habitat—as it already does.

Something that affects the life-quality and longevity of many or of all future people clearly matters, for the usual longtermist reason. If many or all of the vast numbers of future people live worse lives than future people would have lived, that is very bad indeed, other things being equal. 

At some level, I wish to add, longtermists should treat that vast loss of utility as worse than the same vast total loss of utility from failure to make happy people—people who are never created because the species disappeared earlier. Put differently, when total utility remains constant, lives that are far worse than the lives that would exist otherwise are in some ways even more concerning, to everyone including longtermists, than lives that are far fewer than the lives that would exist otherwise. 

This may seem like a peculiar statement given that many longtermists are totalists who deny precisely that. For totalists, what matters is total utility, and it does not matter whether that number is driven by the average number of utils per person or by the number of persons. But, I wish to insist, even longtermist totalists should treat utility loss driven by the average number of utils per person as worse. They should do so for one of the reasons that leading longtermists give for longtermism itself: when we are unsure which fundamental course of action we morally ought to take, part of what ought to guide us are the moral theories that we reject yet retain some credence in.

In our context, even totalists should not have 100% credence in totalism. There is, after all, a respectable debate in population ethics. And much reasoning suggests that we have stronger reasons and stronger obligations to make people happy than we do to make happy people, even when total utility remains constant. Totalists’ rejecting that viewpoint does not mean that they place 0% credence in it. Nor should they.

From a certain respectable non-totalist viewpoint (sometimes inaccurately referred to as “person-affecting” although it need not assume that all reasons and obligations are toward particular persons or require impact on particular persons’ interests), making people happy matters more than making happy people. The chance that this respectable non-totalist viewpoint is accurate should, according to the typical longtermist approach to moral uncertainty, weigh in favor of making people happy. So when other things are equal, it can serve as a thumb on the scale in favor of policies that protect future life-quality and longevity, as compared to ones that create a greater number of happy people.
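Here is a minimal sketch of how such credence-weighting works, in the “expected choiceworthiness” style common in longtermist treatments of moral uncertainty; the credences and values below are hypothetical, chosen only to illustrate the tie-breaking:

% Illustrative only: the credences (0.7, 0.3) and values are assumptions.
\[
EC(a) \;=\; c_{T}\,V_{T}(a) \;+\; c_{P}\,V_{P}(a)
\]

Suppose credence $c_T = 0.7$ in totalism and $c_P = 0.3$ in the non-totalist view, and suppose both theories assign value 1 to protecting the life-quality of future people who will exist, while only totalism assigns value 1 to merely adding extra happy people. Then $EC(\text{protect}) = 0.7 + 0.3 = 1.0$, while $EC(\text{add}) = 0.7 + 0 = 0.7$: with total utility held equal, the credence-weighted calculation breaks the tie in favor of protecting life-quality.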

A pragmatic reason for longtermists to pay special attention to working on solutions that promise to make people happy in the long run is that such solutions are likely to have wider appeal to nonlongtermists. Most of the latter would on reflection dismiss the urgency of creating as many happy people as possible. A chance at broad coalitions is a pragmatic advantage of projects that increase present and future populations’ life-quality and longevity over ones that bring larger future populations into existence—when the numbers are otherwise equal.

A reservation. Some longtermists are optimistic that “Within 100,000 years, the Earth’s natural systems will have scrubbed our atmosphere clean of over 90 percent of the carbon we have released, leaving the climate mostly restored and rebalanced.” While many harms from old and new emissions may last, it is also true that at some point, we may colonize more habitable planets, or develop microchips that are happy even on our sizzling planet. But longtermists take very seriously dictatorships that would last only for very long periods and not forever; and none of these escape routes from the harms of environmental devastation is guaranteed.

In sum, lock-in effects that would make vast numbers of future persons’ lives more brutish and short ought to command a high longtermist priority. Indeed, when effects on total utility are similar, such effects should command a higher priority than lock-in effects on future people’s prospects of coming into existence. Many of the most likely effects of climate change may well be of the kind that commands special priority for these reasons. That gives longtermists further reason to take climate change very seriously.

Strong vs. weak longtermist dismissal of climate change, and why either is wrong

The three types of potential lock-in effects that I described suffice to cast doubt on what we may call strong longtermist dismissal of climate change, namely, the view that climate change is of no concern from a distinctively longtermist viewpoint (although climate change may remain a concern otherwise). What about the weaker view, that climate change is a major longtermist concern, just not one that should currently attract longtermist resources?

The weak view could be motivated by the claim that other causes pose even greater longtermist concern, for their even greater chance of leading to extinction or civilizational collapse; or the claim that, unlike some other causes of longtermist concern, climate change is not a neglected problem, because many nonlongtermist funders and researchers are dedicated to fighting climate change. 

The weak view founders as well. The admittedly greater extinction potential and scantier funding of some other existential risks, as broad categories, do not mean that the same applies to every subcategory of those risks and to every proposed response. Specific research and tools on climate change will compete well with specific research and tools from generally more-urgent cause areas.

There is, for example, vast scope for cost-effective research on climate-change-related pathways to x-risk scenarios and on “ranking various climate interventions from a climate x-risk perspective.” More thinking and action are also needed on nonstandard but potentially transformative solutions for climate change, of the type that longtermism’s and effective altruism’s creative heads are uniquely positioned to develop. Consider solutions that most progressives reject because of their serious downsides and risks: geoengineering, reliance on nuclear power for energy, and, more generally, harm-reduction responses to climate change and to its dangerous consequences.

A general lesson is that in setting global priorities, some interventions from a lower-ranked cause area will beat some from a higher-ranked one. An intervention from the lower-ranked area might be sufficiently promising, transformative, or cheap to beat interventions from higher-ranked areas.
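Put numerically, with entirely hypothetical figures in arbitrary units of expected long-term value per dollar:

% Hypothetical cost-effectiveness figures, for illustration only.
\[
\begin{array}{lcc}
 & \text{average intervention} & \text{best unfunded intervention} \\
\text{Cause A (higher-ranked)} & 10 & 12 \\
\text{Cause B (climate)} & 2 & 15
\end{array}
\]

Cause A dominates on averages (10 vs. 2), yet the best unfunded climate intervention (15) beats the best remaining intervention in Cause A (12), so on these numbers it should be funded first.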

Conclusion

Longtermists should take climate change very seriously, and more seriously than they currently do, because of climate change’s multiple potential bad lock-in effects. Climate change might extinguish the species either under runaway scenarios or under the likeliest scenarios. And under the likeliest scenarios, climate change is very likely to adversely affect human life-quality and longevity, forever or for very long periods, something that should loom larger in longtermist concerns than it currently does. While some cause areas raise even greater existential risks, or are less neglected, some proposed interventions against climate change could compete well with some in higher-ranked cause areas.


[1] For helpful comments, thanks to Barbara Buckinx, Mark Budolfson, Dan Hausman, Adam Lerner, Lucas Stanczyk, and Bridget Williams. 

Comments

Globally, climate change “could force 216 million people... to move within their countries by 2050.”

This seems like a remarkably small number to me. In 2019 around 7.4 million people moved state within the USA alone (source); over the next 28 years, with 0.4% annual population growth, that is 218 million people-moves. Spread out over the entire world this seems like a quite small amount of migration.
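Explicitly, using those same figures (7.4 million movers in the base year, 0.4% annual growth, 28 years to 2050), the arithmetic is roughly:

% Figures are the comment's own; the summation just spells them out.
\[
\sum_{t=0}^{27} 7.4\,\text{M} \times 1.004^{t} \;\approx\; 7.4\,\text{M} \times 29.5 \;\approx\; 218\,\text{M}.
\]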

Thanks! Not my numbers (I just quoted others), but there is a big difference between forced mass migration and voluntary targeted migration.

I fail to be convinced. Many of your arguments seem like fully general arguments for worrying about anything as a longtermist, and thus wash out. For example, you argue

Climate change would interfere with our ability to address extinction risks. States burdened by climate change could not expend efforts on preventing other catastrophic risks

But there's a great many things that fall under the category of impacting the ability of modern civilization to address arbitrary risks. For example, someone could just as easily argue that our failure to produce a communist revolution results in an inability to effectively coordinate on what really matters, because states are burdened with an excess profit motive. This is a bit of a caricature, sure, but I think it illustrates that you've made an argument so general that I could sub in arbitrary things and someone would argue it as reasonable.

I'd be much more interested in, for example, pointed evidence that climate change poses an extinction risk in and of itself and isn't just another generic source of background risk that is very much not neglected.

Thanks! Sure, various issues might in theory interfere with international cooperation efforts, but as regards climate change, we see tensions over very large economic stakes on compensation, reparation, etc. unfolding before our eyes. Just this past month, there were two big illustrations of this. In addition to the UN discussions of a global tax to pay for climate-related loss and damage, a link to which I included, there was this

As expensive disasters and flooding abound, new tensions are likely to arise and interfere with the ability to work together on addressing existential risks, through two mechanisms:

  1. Angry parties don't work well together (a recent illustration is how the Ukraine war foiled progress on climate).
  2. Parties with acute, time-sensitive needs will sometimes try to force stronger parties who are relatively indifferent to those needs to address them, by making addressing them a condition for cooperation on shared needs. That can infuriate the stronger parties, who feel blackmailed. Cooperation on the shared needs then collapses (that seems to be precisely how acute, time-sensitive needs in development assistance brought down recent discussions of the Biological Weapons Convention - see my link to the 80,000 Hours podcast with Jaime Yassif). Climate change will often create acute, time-sensitive needs (e.g., in flooding relief, in license to immigrate).

I am not sure that your assumption that "Longtermists don't care much about climate change" is true. The main argument you give for this is that EA funders don't spend lots of money on climate change. However, this does not imply that they don't care about it; it only implies that they think the marginal impact of an additional - say - $1mn spent on climate change is (significantly) lower than spending it on some other priority. The amount that is currently spent on climate change per year by non-longtermist/EA organisations is orders of magnitude higher than what is spent on any of their priorities. So even if a funder thinks that the average $1mn currently spent on climate change is significantly more impactful than the same amount spent on another priority, since someone has already spent this $1mn, it isn't available to them. They could only spend an additional marginal $1mn on it, and they may think this is significantly less impactful than spending a marginal $1mn in another area where the low(er)-hanging fruits are not yet taken.

Thanks! See my section "Strong vs. weak longtermist dismissal of climate change, and why either is wrong". The "weak" position described in the second paragraph of that section seems to be the one you are alluding to. See also my answer to that position.  

Hi, Nir.

You make some great points. 

Somehow EA folks seem to be good at establishing an emotional distance between climate change and existential or extinction risk. They believe that their attention to direct vs indirect risks somehow justifies denying that climate change is an existential risk. For example, they claim that it's less important than pandemics (although it will contribute to pandemics) or the risk of nuclear war (although it will increase the risk).

I like to see solutions to problems as either individual, group, or systemic. An individual solution to a systemic problem is one that protects just the individual. I'm fairly sure that most people in developed countries adopt an individualist attitude toward the systemic problem of climate change. 

As individuals, we have to play along with the bigger system if we can't change it. I think EAs have a conflict of interest around climate change. It's very hard not to have a conflict of interest around climate change somewhere in how I live. I think that's true for EAs as well.

Whatever I am invested in as an individual probably contributes to global warming in a significant way when everyone does it collectively. Whether it's elements of my lifestyle, my political stance, or my vision of the future (for example, techno-utopianism), addressing the root causes of global warming evokes conflicts for me personally, politically, or professionally.

Whether I:

  • play along with technological determinism (by my definition, marketing that tech companies have a great vision of the future for consumers who use their products)
  • sell myself on techno-utopianism (for example, that a nanotech future will be worth living in or that AI will solve our problems for us)
  • pretend that old talking points are still relevant (for example, that climate change could rather than will have existential and extinction consequences) 
  • speak hopefully about actual efforts to solve climate change (for example, the recent climate change bill that made it into law in the US)

all I'm doing is trying to protect myself as an individual, or maybe some small group that I care about. 

We do have a need for research into tipping points, for better hardware to run higher-fidelity (smaller mesh size) atmospheric and ocean models, for better risk modeling of systemic and cascading risks, etc. We might, for example, develop better models of how geo-engineering efforts could work or fail. So, yeah. 

But what that research will also confirm is something like what Carl Sagan told Congress in the 1980s, or what climatologists began worrying about publicly in the 1970s, or what scientists understood about heat-trapping gases in the 1960s. We will confirm, with ever greater certainty, that we should really do something about GHGs and anthropogenic climate change.

When EA folks say that climate change is not neglected, what they are not saying is that genuine climate change adaptation and mitigation efforts are limited or doomed to failure. What about BECCS? CCS? Planting trees? Migration assistance for climate refugees? All unfeasible at scale and in time.

Furthermore, the lack of climate change prevention is why a paper like that Climate Endgame paper would ever get published. Passing 5 tipping points in the short term? That is an utter failure of prevention efforts. Tipping points were not supposed to be passed at all. The assumption that tipping points are in the distant future has kept the discussion of "fighting climate change" a hopeful one. And now that assumption has to be given up.

Didn't EA start with some understanding that a lot of money and energy is wasted in charitable efforts? Well, similar waste must be happening in the climate change arena. Governments are taking action based on silly models of risk or outdated models of causes and so their attention is misdirected and their money is wasted.

So I agree with you, yes, EA folks should take climate change seriously. It could help the situation for EAs to learn that climate change poses an existential and extinction threat this century. Beyond that, I don't know what EAs could really positively accomplish anyway, unless they were willing to do something like fund migration for climate refugees, or pay for cooling technologies for the poor, or reconstruct infrastructure in countries without an effective government.

Welcome to the forum!

My guess is that you won't get a lot of engagement under this post, since the topic has been hashed out here in prior years and most people have already said everything they'll have to say until there's new information. There's a lot of good conversation if you type "climate change" into the search bar to see how the general consensus was built and what all went into the consensus view.

I think you're basically right in your points, but they are not enough to say that climate change is nearly as bad as biorisk or AI misalignment. You may get close to nuclear risk, but I'm skeptical of that as well. My main point is that extinction from climate is much more speculative than from the other causes.

Reasons: 

  1. There is some risk of runaway climate change. However, this risk seems small according to GWWC's article, and it would be overconfident to say that humanity can't protect itself against it with future technology. There is also much more time left until we get to >5°C of warming than until the risk of engineered pathogens and powerful AI rises quickly.
  2. Climate change will be very destabilizing. However, it's very hard to predict the long-term consequences of this, so if you're motivated by a longtermist framework, you should focus on tackling the more plausible risks of engineered pathogens and misaligned AI more directly. One caveat here is the perspective of cascading risks, which EA is not taking very seriously at the moment.
  3. The impacts on life quality are not convincing from a longtermist standpoint, as I expect them to last much less than 1,000 years whereas humanity and its descendants could live for billions of years. I also expect only a tiny fraction of future sentiences to live on Earth.

Another thought I often miss in debates on x-risk from climate change is that humans would likely intervene in the climate at some stage if it became a serious threat to our economies and even our lives. I haven't seen anyone make this point before, but if it has been made, please point me to sources.

If you are still new to EA, you may understand the current position better as you learn more about the pressingness of biorisk and especially AI risk. That said, there is probably room for some funding for climate change from a longtermist perspective, and given the uncertainty surrounding cascading risks, I'd be happy to see a small fraction of longtermist resources directed to this problem.

Thank you, Konstantin, for a closely argued response. I agree with much of what you say (though I would stretch much longer the 1000 years figure). Any disagreement with your conclusion ("there is probably room for some funding for climate change from a longtermist perspective, ... I'd be happy to see a small fraction of longtermist resources directed to this problem") may pertain only to numbers--to the exact size of the "small fraction". I agree, specifically, that it TENDS to be MUCH more urgent to fund AI safety and biosecurity work, from a longtermist perspective. Remember that I ENDORSE the "admittedly greater extinction potential and scantier funding of some other existential risks as broad categories"...

Your point about what one may call the potential reversibility of climate change, or of its worst sequelae, is definitely worth developing. I have discussed it with others but haven't seen it developed at length in writing. Sometimes it is what longtermists seem to mean when they write that climate change is not a neglected area. But analytically it is separate from, e.g., the claim that others are already on the case of curbing ongoing emissions (which are therefore not a neglected area). A related challenge for you: the potential reversibility of a long-term risk is not only a reason to prioritize, over preventing that risk, the prevention of other risks whose onset is irreversible and hence more calamitous. It is also a reason to prioritize one area of work on that risk, namely, its effective reversal. Indeed, when I wrote that longtermists should invest in geoengineering, I had in mind primarily strategies like carbon capture, which could be seen as reversing some harms of our greenhouse gas emissions.

Nir 
