Andrew Morton

Joined Dec 2021


Hi Gideon, and thanks for the response. Interesting and important project you are working on... I will follow up 1:1.

Specifically, on your response on framing efforts: I think any framing or initiation of contingency planning for the failure of mainstream efforts to avert catastrophe is going to be problematic and unpopular. However, that does not mean it should not be addressed and serious work started. Here are my thoughts on this stream...

In simplistic terms, Plan A, the global mainstream effort (such as CC mitigation), at least in its presentation tries to save everyone and everything everywhere, and thus does not explicitly or implicitly exclude anyone ("leave no-one behind" is in fact a cross-cutting theme in much multilateral programming). Such efforts are simultaneously politically and morally acceptable, and utterly non-feasible in the current geopolitical climate.

In harsh contrast, true global-scale contingency planning efforts (which are not just another variant of Plan A, or episodic emergency response) have as their starting point inevitable massive future loss, or recent actual loss. In all the extreme scenarios I look at, most ecosystems, countries and governments partially to fully collapse, or are at least extremely stressed, and the population crashes (over time) run into the multi-billions. The very starting point of such planning is deeply unpopular, and always will be for those whose fate is forecast.

On top of that, to be truly feasible and credible, contingency planning and preparatory measures need to respect the current and forecast scale of the challenges, and the limits of the resources credibly available or forecast. This in turn inevitably and explicitly narrows the process down to saving only very specific things (such as knowledge and genetic material), and communities and capacities in specific places. It also needs to take into account foresight timeframes: preparation needs to start yesterday, but credible scenarios indicate an irregular and extended process of degradation and collapse, not doomsday tomorrow or next year. 2030-2100+ appears to be the truly critical period.

In this context, most humans alive today will not be saved or helped by dark-scenario contingency planning efforts, if those scenarios come to pass. They will either have died of old age beforehand, or it will not be possible to save them.

Our own likely fates, and the nature of the work, flow from this point of logic. At a personal level, we should individually assume that we will either not be able to, or simply not need to, physically get onto this type of species-and-civilization lifeboat, even if we help build it. The lifeboats, so to speak, will be very small, probably located in other countries, not built for years to come, and potentially not even designed to physically carry or shelter people.

This conclusion is actually helpful, because it shifts the moral landscape, narrative and planning objectives from survivalism to broader altruism.

In closing, I feel that true long-term, global, species-level Plan B efforts (or similarly labelled and targeted efforts) are never going to be universally popular, nor politically mainstream, nor large enough to divert and thereby starve mainstream efforts of funding, attention or hope. Most people simply do not think on these timeframes and scales, nor care enough about the fate of future generations (that are not their direct descendants) to endorse diverting substantial resources to this cause.

It will only ever be a niche sector. 

PS. All of this is my personal opinion and effort and not at all linked to my UN role.

Forgive me for dropping a new and potentially shallow point into this discussion. The intellectual stimulation from the different theoretical approaches, thought experiments and models is clear. It is great to stretch the mind and nurture new concepts, but otherwise I question their utility and priority, given our situation today.

We do not need to develop pure World-A/World-B style thought experiments on the application of EA concepts for want of other opportunities to test the model. We (collectively, globally) have literally dozens of both broad and highly targeted issues where EA-style thinking can be applied and may thereby help unlock solutions: climate change, ecosystem collapse, hybrid warfare, inequality, misinformation, internal division, fading democracy, self-regulated tech monopolies, biosecurity, pandemic responses, etc. The candidate list is endless. We also need to consider the interconnected nature of these issues, and thereby the potential for interconnected solutions.

Surely, a sustained application of the minds demonstrated in this paper and comment thread could both help on solutions and, as a bonus, provide some tough real-life cases that can be mined for review and for advancing the EA intellectual model.

I draw a parallel between the EA community and what I see in the foresight community (which is much older). The latter has developed an array of complex tools and ideas (overly dependent on graphics rather than logic in some cases). They are intellectually satisfying to an elite, but some 30+ years after the field evolved, foresight is simply not commonly used in practice. At present it remains a niche specialism, to date generally sold within a management consultancy package, or used in confidential defence wargaming.

With this type of benchmark in mind, I would argue that the value/utility of the EA models, theories and core arguments (and perhaps even of the larger forum) should be judged in part by the practical changes, or broader mindset changes, that they have demonstrably driven or at least catalysed.

Dear colleagues.

Thanks Seth for this comprehensive effort. 

This is a complex piece of highly personal work, which tries hard to do many different things for a diverse audience in one fluid package. Given this very difficult target, it understandably only partially succeeds. Nonetheless, I am sure many will find within it some new information, perspectives and links. Rather than critique it further, I would prefer to follow up on just one of the many discussion topics it opens: climate change (CC) and existential risk (XR). My argument is that this all leads to a potentially high-value niche role for the EA community.

CC is an XR

The first point to discuss is whether climate change is an existential risk. To summarize my opinion (and Seth's, I believe): it definitely is, and there is a lot of data and analysis on this topic.

From a purely academic perspective, the most authoritative work on climate change as XR comes from Johan Rockstrom, first presented to world leadership at Davos in 2019. Since then the work and its underlying theorems have been critiqued, defended and tuned, but in my opinion they remain valid. In contrast, the wave of unprecedented climate-change-linked extreme events of 2019-2021 has undermined the credibility of many conservative climate models and associated viewpoints. Scientists are repeatedly finding change occurring faster, and impacts arriving earlier and harder, than their models predicted. So my first and primary point of argument is that climate change IS a priority existential risk that warrants attention from the EA community.

What is (unsurprisingly) not well enunciated by climate scientists looking at extreme CC are the geopolitical, social and destabilization aspects. This connects to a view in some XR circles that the likelihood of human species extinction via CC is too low for it to qualify as an XR event: that it is (only?) a temporary setback for global civilization, not a human species killer. Let me challenge this by reference to our current geopolitical and social situation in Q4 2022, with global average temperatures just beyond 1.1C above pre-industrial levels.

Noting the presence of multiple other pressures and interconnections, and the exponential increase in impacts with linear temperature increases, just imagine what a 4C+ society would look like. In the worst-case scenario, we are looking at small groups of inexperienced, unvaccinated, hunter-gatherer-oriented survivors in forced-migration mode, heading northward into continually destabilized and badly degraded environments. To me, this looks like a credible route to human extinction, with a much higher probability of occurrence or irreversible initiation within the next 100 years than many much-discussed XR events (asteroids, supervolcanoes, etc.).

EA has a potential key niche role in the CC-XR nexus

Here is the connection to the EA agenda, and the gap I think that EA-oriented thinkers and large-scale philanthropists may be able to partly fill.

The EA community is unique in that it looks very openly at the basic question of where to allocate efforts, starting from an intellectual and humanist base, without many organizational, cultural, political or budgetary constraints. In short, it searches honestly for the gaps and the added value niches.

Over the next decades, literally trillions will be lost on CC-associated acute and chronic disasters, and similar amounts spent on mitigation and adaptation. However, 99%+ of those funds will be raised by and channelled through relatively well-established institutions and programmes that (in order to attract mainstream funding) offer incremental solutions within what I call the Global Plan A framework.

Plan A essentially assumes the continuity and stability of the global economic order, irrespective of the increasing destabilizing factors and stressors. I suggest we look at the 2022 news and then critique this near-universal assumption: at present the global order appears to be unravelling. Not everywhere, and not tomorrow, but the pulled threads are becoming distressingly visible. As per the above, the catalyst is not one event or stressor; it is compound: climate change + geopolitics + environmental degradation + population + inequality + ethnic/ideological divides, etc.

The <1% minority of activities occurring under what I call the Plan B, or collapse-anticipation, mindset are to date generally fragmented, very small, and intellectually mixed in quality. In some cases they are simply dangerous and/or antisocial (e.g. billionaire bunkers, NZ land grabs, survivalist cults, etc.). What is commendable, however, is a rising wave of very earnest and underfunded efforts focusing on the psychological, sociological and even spiritual elements of coping with CC.

To all of this we can add the global wildcard of stratospheric geoengineering: modifying the irradiance of the atmosphere to temporarily cool the earth without lowering emissions. At present this topic is essentially taboo in multilateral circles, but it is growing exponentially in importance and likelihood. Without going into the details, geoengineering gone wrong is also an XR.

I think that we can collectively and feasibly do better than all of this. I note two key gaps:

A. Serious, in-depth, public-domain work on the climatic, environmental, social and political implications of stratospheric geoengineering.

B. Building a 2nd and 3rd line of defence, at low cost and without giving up on the primary fight against CC and other critical issues. Clearly, 99% of CC and related efforts and funding should remain focused on supporting Plan A. This includes focused work on psychological and sociological issues. However, I argue that up to 1% should be incrementally refocused on one or more Plan Bs: the two-tiered defence of a) civilization and b) the human species, other key species, and genetic material, in a partial or full collapse scenario. Consider it insurance for the human race and all that we value.

These two topics, geoengineering and Plan Bs, are the sort of intellectually anchored, high-risk, high-novelty, high-gain items that the EA forum was designed to identify and support. They are niches made for the EA community.

Let me stop here and await feedback on both the original article and this follow-up. If you wish to go further and faster on points A and B above, please do contact me directly.

Andrew Morton

Congratulations on a very interesting piece of work, and on the courage to set out ideas on a topic that by its speculative nature will draw significant critique.

It is very positive that you decided on a definition for "civilizational collapse", as this topic is broadly and loosely discussed without common terminology and meaning.

A suggested further/side topic for work on civilizational collapse and its consequences is more detailed work on the hothouse earth scenario (runaway climate change leading to 6C+ warming plus ocean chemistry changes).

Compared to most of the scenarios discussed in this piece, the evolution of hothouse earth is a 30-200+ year process, rather than an event. In this context, ideas such as usable food stocks, living livestock, healthy seas, etc., are no longer valid.

In addition, the current models for warming by 2100 are in the order of 2.7-3.5C, without feedback effects. There is a significant body of debate holding that these levels are more than sufficient to trigger civilizational collapse, but only after cumulative emissions so high as to ensure the warming and its impacts continue, potentially also escalating to the hothouse scenario.

In any event, I am interested in collaboration on this topic, if that is of interest.

Andrew Morton