The organization Breakthrough has published a new report that has been getting quite a bit of attention in mainstream media. It argues for urgently reframing climate research and the IPCC reports around existential risk, because they don't deal adequately with lower-probability but higher-impact events.

Summary:

Human-induced climate change is an existential risk to human civilisation: an adverse outcome that will either annihilate intelligent life or permanently and drastically curtail its potential, unless carbon emissions are rapidly reduced.

Special precautions that go well beyond conventional risk management practice are required if the increased likelihood of very large climate impacts — known as “fat tails” — is to be adequately dealt with. The potential consequences of these lower-probability, but higher-impact, events would be devastating for human societies.

The bulk of climate research has tended to underplay these risks, and exhibited a preference for conservative projections and scholarly reticence, although increasing numbers of scientists have spoken out in recent years on the dangers of such an approach.

Climate policymaking and the public narrative are significantly informed by the important work of the IPCC. However, IPCC reports also tend toward reticence and caution, erring on the side of “least drama”, and downplaying the more extreme and more damaging outcomes.

Whilst this has been understandable historically, given the pressure exerted upon the IPCC by political and vested interests, it is now becoming dangerously misleading with the acceleration of climate impacts globally. What were lower-probability, higher-impact events are now becoming more likely.

This is a particular concern with potential climatic tipping points — passing critical thresholds which result in step changes in the climate system — such as the polar ice sheets (and hence sea levels), and permafrost and other carbon stores, where the impacts of global warming are non-linear and difficult to model with current scientific knowledge.

However, the extreme risks to humanity which these tipping points represent justify strong precautionary management. Under-reporting on these issues is irresponsible, contributing to the failure of imagination that is occurring today in our understanding of, and response to, climate change.

If climate policymaking is to be soundly based, a reframing of scientific research within an existential risk-management framework is now urgently required. This must be taken up not just in the work of the IPCC, but also in the UNFCCC negotiations if we are to address the real climate challenge.

Current processes will not deliver either the speed or the scale of change required.


Thanks for posting this. I am grateful they published this report, and I hope that their explicit reframing in terms of existential risk will get the EA community's attention.

The EA standpoint so far has been "lots of money is already being thrown at climate change, it's mostly a question of policy now". And that's true. Good ideas are out there: fee-and-dividend carbon pricing, Project Drawdown, etc.; all it takes is political will. Unfortunately, in my experience, many EAs take this to mean that climate change is an issue they can't help with.

It's true that it is difficult to find out which policy approaches are effective and politically viable, and it's also difficult to convince others that the climate needs urgent attention. Therefore, it seems to me that there is still much potential for EA organizations to give better advice to those of us who want to contribute not as donors or full-time researchers, but as citizens doing some "effective activism" in their spare time. If anyone can point me to work that has been or is being done in this direction, I would be grateful.

(Meta: Curious why this post was downvoted at least once. Personally, I'm grateful for this pointer, as I consider it relevant and might not have become aware of the report otherwise. I don't view the linkpost as an endorsement of the content or epistemic stance exhibited by the report.)

[Epistemic status: climate change is outside of my areas of competence, I'm mostly reporting what I've heard from others, often in low-bandwidth conversations. I think their views are stronger overall evidence than my own impressions based on having engaged on the order of 10 hours with climate change from an existential risk perspective.]

FWIW, before having engaged with the case made by that report, I'm skeptical that climate change is a significant "direct" existential risk. (As opposed to something that hurts our ability to prevent or mitigate other risks, and might be very important for that reason.) This is mostly based on:

  • John Halstead's work on this question, which I found accessible and mostly convincing.
  • My loose impression that 1-5 other people whose reasoning I trust and who have engaged more deeply with that question have concluded that climate change is unlikely to be a "direct" extinction risk, and my not being aware of any other case for why climate change might be a particularly large existential risk (i.e. I haven't seen suggested mechanisms for how/why climate change might permanently reduce the value/quality of the future that seemed significantly more plausible to me than just-so stories one could tell about almost any future development). Unfortunately, I don't think there is a publicly accessible presentation of that reasoning (apart from Halstead's report mentioned above).

FWIW, I'd also guess that the number of EAs with deep expertise on climate change is smaller than optimal. However, I'm very uncertain about this, and I don't see particular reasons why I would have good intuitions about large-scale talent allocation questions (it's even quite plausible that I'm misinformed about the number of EAs that do have deep expertise on climate change).
