Andrew Morton

Interesting discussion, but I suggest in part going back to basics. I feel it would be helpful to mentally divide what is being discussed, and at times hastily tossed into this forum, into three general topics:

A. Intellectual diversity and an interesting debate space, which helps us all look deeper into the real issues EA was initiated to try to address.

B. Governance failures and personnel misconduct: financial and legal red cards and suspicions, personnel scandals, and examples of bad and very bad behaviour within or on the fringes of a work environment, paid or unpaid.

C. Your very personal lives, and your emotional state today, particularly the minute before you hit the Submit button.

Subject A is tricky to simultaneously encourage and keep manageable. Approaches to vigorous debate, intellectual diversity etc. with a good track record include group facilitation, membership guidelines and ethics committees.

Subject B is addressed routinely in the rest of the world through fairly replicable governance measures: rules, sanctions and behavioural norms, equally applicable to a think tank or a construction site. This approach is needed even more, and is legally required, when it comes to managing money. So, for example, a clear and real separation of roles to avoid a financial conflict of interest in spending donor funds is not a schism; it is an obligation.

Subject C, in my opinion, does not really belong on a publicly accessible forum, now probably being regularly mined for journalistic content and ammunition for spoilers. Maybe it is needed, but just take it offline into a private forum with the relevant people.

The author is right to point out the trend and risk of schism. We should all be allowed to contribute in territory A: the bigger and more diverse the group, the better. Debates on fundamental direction, strategy etc. can improve the outcomes. It would be a pity if a break-up happened simply because of insufficient understanding of the rationale for separating the three topics noted above. In summary, A is what we are all here for, an investment in B enables it to continue, and C is possibly not really forum business.

Getting academic here..

Measuring the impact of improved governance, as opposed to counting governance activity indicators (board hires etc.), will always be tough. This is due to the "prevented disaster" issue: success is measured by the absence of incidents. In a young, data-poor, secretive or poorly defined sector, statistical work with public data may end up with void or misleading results.

In industry, over the last 100+ years, the general trend has been to note the universality of the risks (as we are all human), the regularity of publicly reported serious incidents, and the consequences to the organizations involved. In short, prudent organizations invest in both a culture and a system of good governance, as a recognised and important survival trait.

At the lower level, and within legal limits, governance/employee/participant behaviour is a metric to improve. At the CEO/board/key shareholder/donor level, major governance problems are better framed as an existential risk: something to be avoided at all costs via preventative measures.

So, I do not feel we need to further justify this specific effort: Not all that is worthwhile can be (quantitatively) measured.

Dear all

An interesting thread.

For what it is worth, I have over 15 years' experience observing governance failures at different levels in the organization where I work (which shall remain anonymous), as well as a "bad actor" incident within one of the multiple project-specific teams that I have developed and then disbanded or transferred once the work was complete. I fully agree with the analysis and with the first part of the response/mitigation measures proposed by Grayden.

In addition, looking at EA sector personnel profiles and reading some of their posts, what strikes me is the amount of intelligence and energy demonstrated, in parallel with apparently limited life experience and, in some cases, simply a lack of wisdom. This, I feel, reflects the recent surge in interest in EA. Previously it was niche and fairly academic, and thus a bit grey-haired. Then, over the last few years, it has started to attract a lot of bright young talent, culturally trained, for better or worse, in part by the tech startup sector.

The timeless reality is that young, energetic people with insufficient mentoring and/or oversight make mistakes, cause or sustain some damage, and then learn from this. That happened to me as well, more than once. The problem arises when these young explorers are also leaders, in positions of serious authority or influence and in the public spotlight. Then the fall can be long and the collateral damage immense.

The clean-up and recovery, as I know all too well, can take literally years, even for small to medium-sized incidents. Major incidents can simply kill off entire teams, initiatives and organizations, or leave a permanent mark.

EA right now is sadly in the global reputation doghouse and proving to be an easy target for critics. A good-governance 101 improvement campaign will certainly help, starting at the top as Grayden proposes. I would also suggest that the movement look again at its team profiles and internal training.

What may work better in the long run is more of a balanced blend of youthful energy, external viewpoints and deeper, older experience: neither dampening what makes EA so interesting and potentially useful, nor letting individuals drive the movement off the cliff.

Hi Gideon, and thanks for the response. You are working on an interesting and important project... I will follow up 1:1.

Specifically on your response regarding framing efforts: I think any framing or initiation of contingency planning for the failure of mainstream efforts to avert catastrophe is going to be problematic and unpopular. However, that does not mean it should not be addressed and serious work started. Here are my thoughts on this stream...

In simplistic terms, Plan A (global mainstream efforts such as CC mitigation), at least in presentation, tries to save everyone and everything everywhere, and thus does not explicitly or implicitly exclude anyone ("leave no one behind" is actually a cross-cutting theme in much multilateral programming). So it is simultaneously politically and morally acceptable and utterly non-feasible in the current geopolitical climate.

In harsh contrast, true global-scale contingency planning efforts (which are not just another variant of Plan A or episodic emergency response) have as their starting point inevitable massive future loss or recent actual loss. In all the extreme scenarios I look at, most ecosystems, countries and governments partially to fully collapse, or are at least extremely stressed, and population crashes (over time) are in the multi-billions. The very starting point of such planning is deeply unpopular, and always will be for those whose fate is forecast.

On top of that, to be truly feasible and credible, contingency planning and preparatory measures need to respect the current and forecast scale of the challenges and the limits of the resources credibly available or forecast. This in turn inevitably and explicitly narrows the process down to saving only very specific things (such as knowledge and genetic material) and communities and capacities in specific places. It also needs to take into account foresight timeframes: preparation needs to start yesterday, but credible scenarios indicate an irregular and extended process of degradation and collapse, not doomsday tomorrow or next year. 2030-2100+ appears to be the truly critical period.

In this context, most humans existing today will not be saved or helped by dark-scenario contingency planning efforts, if those scenarios come to pass. They will either have died of old age beforehand, or will not be possible to save.

Our own likely fates and the nature of the work flow from this point of logic. At a personal level, we should individually assume that we will either not be able to, or simply not need to, physically get onto this type of species and civilization lifeboat, even if we help build it. The lifeboats, so to speak, will be very small, probably located in other countries, not built for years to come, and potentially not even designed to physically carry or shelter people.

This conclusion is actually helpful, because it shifts the moral landscape, narrative and planning objectives from survivalism to broader altruism.

In closing, I feel that true long-term, global, species-level Plan B efforts (or similarly labelled and targeted efforts) are never going to be universally popular, nor politically mainstream, nor large enough to divert and thereby starve mainstream efforts of funding, attention or hope. Most people simply do not think in these timeframes and scales, nor care enough about the fate of future generations (that are not their direct descendants) to endorse diverting substantial resources to this cause.

It will only ever be a niche sector. 

PS. All of this is my personal opinion and effort and not at all linked to my UN role.

Forgive me for dropping a new and potentially shallow point into this discussion. The intellectual stimulation from the different theoretical approaches, thought experiments and models is clear. It is great to stretch the mind and nurture new concepts, but otherwise I question their utility and priority, given our situation today.

We do not need to develop pure World A and World B style thought experiments on the application of EA concepts for want of other opportunities to test the model. We (collectively, globally) have literally dozens of both broad and highly targeted issues where EA-style thinking can be applied and may thereby help unlock solutions: climate change, ecosystem collapse, hybrid warfare, inequality, misinformation, internal division, fading democracy, self-regulated tech monopolies, biosecurity, pandemic responses etc. The candidate list is endless. We also need to consider the interconnected nature of these issues, and thereby the potential for interconnected solutions.

Surely, a sustained application of the minds demonstrated in this paper and comment thread could both help on solutions and, as a bonus, provide some tough real-life cases that can be mined for review and for advancing the EA intellectual model.

I draw a parallel between the EA community and what I see in the foresight community (which is much older). The latter has developed an array of complex tools and ideas (overly dependent on graphics rather than logic in some cases). They are intellectually satisfying to an elite, but some 30+ years after the field evolved, foresight is simply not commonly used in practice. At present it remains a niche specialism, to date generally sold within a management consultancy package or used in confidential defence wargaming.

With this type of benchmark in mind, I would argue that the value/utility of the EA models/theories/core arguments (and perhaps even the larger forum) should be judged in part on the practical or broader mindset changes that they have demonstrably driven, or at least catalysed.

Dear colleagues.

Thanks Seth for this comprehensive effort. 

This is a complex piece of highly personal work, which tries hard to do many different things for a diverse audience in one fluid package. Given this very difficult target, it understandably only partially succeeds. Nonetheless, within it I am sure many will find some new information, perspectives and links. Rather than critique it any further, I would prefer to follow up on just one of the many discussion topics it opens up: climate change (CC) and existential risk (XR). My argument is that this all leads to a potentially high-value niche role for the EA community.

CC is an XR

The first point to discuss is whether climate change is an existential risk. In summary, in my opinion (and Seth's, I believe) it definitely is, and there is a lot of data and analysis on this topic.

From a purely academic perspective, the most authoritative work on climate change as XR comes from Johan Rockstrom, first presented to world leadership at Davos 2019: https://www.stockholmresilience.org/research/research-news/2018-08-06-planet-at-risk-of-heading-towards-hothouse-earth-state.html. Since then the work and its underlying theorems have been critiqued, defended and tuned, but in my opinion they remain valid. In contrast, the wave of unprecedented climate-change-linked extreme events of 2019-2021 has undermined the credibility of many conservative climate models and associated viewpoints. Scientists are repeatedly finding change occurring faster and impacts arriving earlier and harder than their models predicted. So my first and primary point of argument is that climate change IS a priority existential risk that warrants attention from the EA community.

What is (unsurprisingly) not enunciated well by climate scientists looking at extreme CC are the geopolitical-social-destabilization aspects. This connects to a view in some XR circles that the likelihood of human species extinction via CC is too low for it to qualify as an XR event: it is (only?) a temporary setback for global civilization, not a human species killer. Let me challenge this by reference to our current geopolitical and social situation in Q4 2022, with global average temperatures just beyond 1.1C above pre-industrial levels: https://www.un.org/en/climatechange/science/key-findings

Noting the presence of multiple other pressures and interconnections, and the exponential increase in impacts with linear temperature increases, just imagine what a 4C+ society would look like. In the worst-case scenario, we are looking at small groups of inexperienced, unvaccinated, hunter-gatherer-oriented survivors in forced migration heading northward into continually destabilized and badly degraded environments. To me, this looks like a credible route to human extinction, with a much higher probability of occurrence or irreversible initiation within the next 100 years than many much-discussed XR events (asteroids, supervolcanoes etc.).

EA has a potential key niche role in the CC-XR nexus

Here is the connection to the EA agenda, and the gap I think that EA-oriented thinkers and large-scale philanthropists may be able to partly fill.

The EA community is unique in that it looks very openly at the basic question of where to allocate efforts, starting from an intellectual and humanist base, without many organizational, cultural, political or budgetary constraints. In short, it searches honestly for the gaps and the added value niches.

Over the next decades, literally trillions will be lost on CC-associated acute and chronic disasters, and similar amounts spent on mitigation and adaptation. However, 99%+ of those funds will be raised by and channelled through relatively well-established institutions and programmes that (in order to attract mainstream funding) offer incremental solutions within what I call the Global Plan A framework.

Plan A essentially assumes the continuity and stability of the global economic order, irrespective of the increasing destabilizing factors/stressors. I suggest we look at the 2022 news and then critique this near-universal assumption: at present the global order appears to be unravelling. Not everywhere and not tomorrow, but the pulled threads are becoming distressingly visible. https://www.un.org/sg/en/content/sg/speeches/2022-09-20/secretary-generals-address-the-general-assembly As noted above, the catalyst is not one event or stressor; it is compound: climate change + geopolitics + environmental degradation + population + inequality + ethnic/ideological divides etc.

The <1% minority of activities occurring under what I call the Plan B, or collapse-anticipation, mindset are to date generally fragmented, very small and intellectually mixed in quality. In some cases they are simply dangerous and/or antisocial (e.g. billionaire bunkers, NZ land grabs, survivalist cults etc.). What is commendable, however, is a rising wave of very earnest but underfunded efforts focusing on the psychological, sociological and even spiritual elements of coping with CC: https://climatepsychologyalliance.org/

To all of this we can add the global wildcard of stratospheric geoengineering: modifying the irradiance of the atmosphere to temporarily cool the earth without lowering emissions. At present this topic is essentially taboo in multilateral circles but growing exponentially in importance and likelihood. Without going into the details, geoengineering gone wrong is also an XR.

I think that we can collectively and feasibly do better than all of this.  I note two key gaps:

A. Serious, in-depth, public-domain work on the climatic, environmental, social and political implications of stratospheric geoengineering.

B. Building a 2nd and 3rd line of defence, at low cost and without giving up on the primary fight against CC and other critical issues. Clearly, 99% of CC and related efforts and funding should remain focused on supporting Plan A. This includes focused work on psychological and sociological issues. However, I argue that up to 1% should be incrementally refocused on one or more Plan Bs: the two-tiered defence of a) civilization and b) the human species and other key species and genetic material, in a partial or full collapse scenario. Consider it insurance for the human race and all that we value.

These two topics, geoengineering and Plan Bs, are the sort of intellectually anchored, high-risk, high-novelty, high-gain items that the EA Forum was designed to identify and support. They are niches made for the EA community.

Let me stop here and await feedback on both the original article and this follow-up. If you wish to go further and faster on points A and B above, please do contact me directly.

Andrew Morton

Congratulations on a very interesting piece of work, and on the courage to set out ideas on a topic that by its speculative nature will draw significant critique.

It is very positive that you decided on a definition for "civilizational collapse", as this topic is broadly and loosely discussed without common terminology and meaning.

A suggested further/side topic for work on civilizational collapse and its consequences is more detailed work on the hothouse earth scenario (runaway climate change leading to 6C+ warming plus ocean chemistry changes).

Compared to most of the scenarios discussed in this piece, the evolution of hothouse earth is a 30-200+ year process rather than an event. In this context, ideas such as usable food stocks, living livestock, healthy seas etc. are no longer valid.

In addition, the current models for warming by 2100 are in the order of 2.7-3.5C, without feedback effects. There is a significant body of debate suggesting that these levels are more than sufficient to trigger civilizational collapse, but only after cumulative emissions so high as to ensure that the warming and its impacts continue, potentially also escalating to the hothouse scenario.

In any event, I am interested in collaborating on this topic, if that is of interest to you.

Andrew Morton