
This post is a brief summary of a longer forum post I wrote on systemic cascading risks & their relevance to the long-term future (an EA criticism competition submission). 

-------------------------------

I make three strong claims:

(1) The cascading sociopolitical & economic effects of climate change, pandemics, and conflicts are undervalued in the mainstream longtermist community. These systemic cascading risks[1] can be extremely important to the long-term future through shaping the development of powerful technologies in the next 10-30 years.

(2) Institutional resilience is the generalization of the solution to systemic cascading risks. A resilient food, water, energy, and infrastructure nexus is key to ensuring system stability around the necessities of life during a crisis, helping to tractably hedge against all systemic cascading risks at once.

(3) The systemic cascading lens fills the gap between current events & longtermism, solving a key question in EA epistemics.

0. What is a systemic cascading risk?

Envision our societal structures as a graph of nodes and links: a given systemic cascading risk shocks a subset of societal "nodes" and causes n-th order effects that cascade across systems and magnify in volatile, harmful ways – exploiting underlying systemic flaws and interdependencies.
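
To make the intuition concrete, here is a minimal sketch of the graph framing above. It is my own illustration rather than a model from the longer post, and the node names, edges, and thresholds are all hypothetical assumptions:

```python
from collections import deque

# Hypothetical interdependency graph (illustrative only): an edge
# A -> B means B depends on A, so A's failure stresses B.
LINKS = {
    "energy":    {"water", "logistics"},
    "imports":   {"food"},
    "water":     {"food"},
    "logistics": {"food", "housing"},
    "food":      {"political_stability"},
    "housing":   {"political_stability"},
    "political_stability": set(),
}

def cascade(links, shocked, threshold=0.5):
    """Breadth-first failure propagation: a node fails once the share
    of its suppliers that have already failed reaches `threshold`.
    A lower threshold models a more fragile system."""
    suppliers = {node: set() for node in links}
    for src, dependents in links.items():
        for dep in dependents:
            suppliers[dep].add(src)

    failed, frontier = set(shocked), deque(shocked)
    while frontier:
        node = frontier.popleft()
        for dependent in links[node]:
            deps = suppliers[dependent]
            if (dependent not in failed and deps
                    and len(deps & failed) / len(deps) >= threshold):
                failed.add(dependent)
                frontier.append(dependent)
    return failed

print(cascade(LINKS, {"energy"}, threshold=0.5))  # fragile: six of seven nodes fail
print(cascade(LINKS, {"energy"}, threshold=0.9))  # resilient: food's import redundancy halts the cascade
```

The point is not the particular numbers but the shape of the dynamic: the same shock is contained or catastrophic depending on how fragile the underlying system is and how redundant its supply links are.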

I subsequently discuss COVID, the Russia-Ukraine war, and climate change as examples of pandemic, conflict, and environmental systemic cascading risks respectively.

Pandemic and Conflict Risk: COVID & Russia-Ukraine War

COVID-19, alongside the Russian invasion of Ukraine, served as a catalyst for food inflation and undermined perceived institutional legitimacy:

High food inflation levels are expected to last until 2024, further testing fragile states reliant on food imports, triggering social distress[2], and prompting protectionist export bans[3]. Historically, the absence of adequate food, water, and energy drives political instability – e.g. the 1977 & 1984 Egyptian and Moroccan bread riots, the 1989 Jordanian protests, and the 2011 Arab Spring.

Environmental Risk: Climate Change

In the next 30 years, anthropogenic climate change is projected to produce ~216 million internal climate migrants due to heat stress, desertification, and land loss[4]; displace ~150 million people through sea level rise; leave ~5 billion people living in moderately water-stressed areas; and create risks of multi-breadbasket failure, disruption to food supply chains, and related inflationary and poverty effects.

These stresses on our societal systems are likely to contribute to[5] political instability, social unrest, and democratic backsliding[6].

1. How are they relevant to the long-term future?

Much of EA focuses on tail-end risks – e.g. whether a conflict could cause nuclear war, or whether climate change poses a direct existential risk[7] – rather than on whether an event could cascade across systems and make them more fragile and susceptible to other compounding risks.

However, it makes sense to try to stabilize the political conditions that technologies mature in. The next 10-30 years are a path-dependent precipice – both in terms of political instability & long-term technological development. The potential interaction between these two factors is dangerous.

Value Lock-In Relevance

By encoding certain values into powerful technologies, one encodes the sociotechnical nature of a very particular time and place.[8]

Path dependency suggests the society we become post-crisis may lack certain values. Political crisis and fear tend to result in anti-democratic, authoritarian, and violent social values; abundance tends to beget altruism and peace.[9] If AGI values development occurs during a volatile, "traumatizing" time of climate- and crisis-driven scarcity & tension, there is a strong possibility of an AGI that locks in values misaligned with humanity in general (e.g. authoritarianism); these may be values we are permanently stuck with.

Existential Risk Relevance

Political factors can significantly impact existential risk calculations. Though the likelihood of any specific scenario is tenuous, speculative, and unknown, we may generally see military AI capabilities research driven forward by increased conflict and arms race dynamics[10], a harder-to-implement AI governance landscape due to international tensions[11], and/or a multiplying effect on nuclear weapons and bioweapons x-risk.

I contend that allowing political and economic instability to affect existential technologies is a dangerous game to play; systemic cascading risks threaten our ability to develop new technologies safely, competently, and cooperatively.

2. How do we solve systemic cascading risks?

Institutional resilience. The 21st century is revealing how uniquely interconnected and vulnerable our societal systems are; all systemic cascading risks can be tractably mitigated in tandem by reducing system fragility – e.g. by tracking and securing the commodities necessary to live.

Food, water, energy, and infrastructure (where housing falls under infrastructure) form the nexus of what societies require for survival, giving us a comprehensive framework to target systemic resiliency interventions toward. Political stability rests on securing this nexus.

Tractable interventions may include resilient & emergency food investment, drought monitoring & resilience, climate vulnerability analyses on supply chains, scaling substitutes for vital food and energy sources to build redundancy, reforming land use & regulations, developing fast & cost-effective refugee shelters, and developing a flexible crisis response team[12]. Modeling & scenario analysis around key supply chain interdependencies can also inform intervention efforts (helping to target interventions towards maximally effective areas) and support risk incentives (accurately projecting second-order consequences can incentivize governments and risk-sensitive organizations toward a coordinated systemic reform/response).
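
As a hedged sketch of what such modeling might look like, the snippet below reuses the illustrative `LINKS` graph and `cascade()` function from the earlier sketch (everything here remains a made-up assumption, not empirical supply chain data) and ranks nodes by how much "hardening" each one would reduce the expected cascade size under random shocks:

```python
import random

def expected_cascade_size(links, hardened=frozenset(), trials=2000, seed=0):
    """Average number of failed nodes over random single-node shocks.
    A hardened node never fails and is never shocked, but still counts
    as a healthy supplier to its dependents: a crude stand-in for a
    resilience intervention such as stockpiles or redundant supply."""
    # Remove failure-flow edges into hardened nodes so they never fail.
    pruned = {node: deps - hardened for node, deps in links.items()}
    rng = random.Random(seed)
    candidates = sorted(n for n in links if n not in hardened)
    total = 0
    for _ in range(trials):
        shocked = {rng.choice(candidates)}
        total += len(cascade(pruned, shocked, threshold=0.5))
    return total / trials

# Rank candidate interventions by averted failures per shock.
baseline = expected_cascade_size(LINKS)
for node in sorted(LINKS):
    saved = baseline - expected_cascade_size(LINKS, hardened=frozenset({node}))
    print(f"hardening {node}: ~{saved:.2f} fewer failures per shock")
```

A real analysis would use empirical interdependency data and richer failure models; the value of even a toy version is that it makes interdependencies explicit and surfaces which nodes are systemically load-bearing.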

Neglectedness: Climate adaptation[13] and drought monitoring interventions[14] are also relatively neglected by climate capital in the status quo, including by EA climate efforts[15]. Although supply chain resiliency is not neglected broadly, current failure modes indicate there are likely tractable and neglected sub-areas where contingent efforts can produce great value.[16] 

Brennan (2016) comments in their blog post Missing Cause Areas that:

There is a significant gap in effective altruism for structural change in between “buy bed nets” and “literally build God”. And while development in Africa is a fiendishly difficult topic, so are wild-animal suffering and preventing existential risk, and effective altruists seem to have mostly approached the latter with an attitude of “challenge accepted”.

Because of how EA meta-cause-areas developed, there are currently missing layers of nuance between the literal end of the world and current global health & poverty work.

The path dependency, cascading systemic risk, and values lock-in frameworks fill that gap, capturing the nuances and subtleties of how societal values and current events can shift technology development, and contending that institutional and cultural changes on a 5-30 year timespan are of great importance.

At its core, the missing link exists because systemic thinking & complexity science are largely overlooked and unexplored in the EA community. The epistemics the EA community embraces – evidence-based logic and empiricism – lean heavily on linear, quantifiable, direct effects. Any system of thought that relies on cascading n-th order effects may therefore be largely disregarded and not taken seriously in the community. Yet as qualitative experience, development theories, and academic studies show us, purely linear causation is highly unlikely, and overlooking higher-order effects has led to a multitude of failed forecasts and policies. The systemic cascading risk framework helps connect complexity effects with longtermist cause area ranking & provides resolute, tractable solutions to such problems.

  1. ^

    To clarify, I do not intend to discuss systemic cascading GCRs, but rather the broader category of risks of which systemic cascading GCRs are a tail-end example.

  2. ^

    Examples include suspension of rule of law in Tunisia, violent Spanish protests, and South African "Zuma riots" (the worst violence in the country since the end of apartheid). More examples in my longer forum post.

  3. ^

    Examples include an Indonesian palm oil export ban; Malaysian chicken export ban; Indian wheat export ban; Argentinian export cap on corn and wheat. The Sri Lankan debt default, the worst economic crisis since the country's founding, was followed by violent protests and a Presidential ousting.

  4. ^

    Related sub-factors include water scarcity, lower crop productivity, sea level rise & storm surge, and extreme weather events.

  5. ^

    If you're interested in more evidence, feel free to check out this section of my longer forum post.

  6. ^

    Freedom House’s 2021 Democracy Under Siege report seems particularly relevant here.

  7. ^

    Only recently has there been a shift in thinking about climate change within the EA community – from an unlikely, unimportant tail-end direct risk to a possible existential risk multiplier.

  8. ^

    Something similar happened with historical technologies encoded with the values of their time – e.g. racist architectural exclusion, car-centric cul-de-sacs, and interstate highway systems in the U.S.

    Thus, who and why someone creates technology – and their core values – matter.

    This was inspired by William MacAskill's What We Owe the Future.

  9. ^

    Due to an international lack of resiliency and cooperation, I'd wager the overall set of social values practically available to society after a climate catastrophe (for example) is likely, on average, significantly worse and less likely to provide large utility to a large group of people.

  10. ^

    Powerful countries' militaries – e.g. U.S. DoD – are already preparing counterinsurgency efforts in response to climate terrorism. Armed drone development and applied AI in military intel & decision-making will likely be favored due to cost and effectiveness.

  11. ^

    Sociopolitical tension, the election of politically extreme governments, and the violation of international norms can pose a significant barrier to international cooperation in AGI regulation. Notably, any long-term solution involving AGI governance would likely involve the U.S. and China.

  12. ^

    See Kulveit and Leech's forum post on emergency response teams and their proposal, ALERT.

  13. ^

    Climate adaptation makes up only ~5% of all climate finance, including both public and private capital flows.

  14. ^

    54% of WMO members had absent or inadequate drought warning systems (as of 2021).

  15. ^

    EA paradigms for addressing climate change usually fall under GHG emissions reduction and not resilience, including the 80,000 Hours page and Founders Pledge's Climate Change Fund.

    To the extent EA resilience work exists, it tends to focus on global catastrophic risk (e.g. nuclear war) and not systemic cascading risk – e.g. ALLFED and Open Phil's grant (May 2020) to Penn State for Research on Emergency Food Resilience.

  16. ^

    For example, the World Bank's Groundswell report finds that climate adaptation, when developed alongside other prevention efforts, can reduce the scale of climate migration by up to 80% – potentially greatly increasing global stability.

-------------------------------

Comments

I think this is a great post and I’m glad you took the time to summarize your longer post!

In my experience, the longtermist/ x-risk community has an implicit attitude of “we can do it better.” “We’re the only ones really thinking about this and we’ll forge our own institutions and interventions.” I respect this attitude a great deal, but I think it causes us to underestimate how powerful the political and economic currents around us are (and how reliant we are on their stability).

It just doesn’t seem that unlikely to me that we come up with some hard-won biosecurity policy or AI governance intervention, and then geopolitical turmoil negates all the intervention’s impact. Technical interventions are a bit more robust, but I’d claim a solid subset of those also require a type of coordination and trust that systemic cascading risks threaten.

I love your thoughts on this.

Need to do more thinking on whether this point is correct, but a lot of what you're saying about forging our own institutions reminds me of Abraham Rowe's forum post on EA critiques:

EA is neglecting trying to influence non-EA organizations, and this is becoming more detrimental to impact over time.

I’m assuming that EA is generally not missing huge opportunities for impact. As time goes on, theoretically many grants / decisions in the EA space ought to be becoming more effective, and closer to what the peak level of impact possible might be.

Despite this, it seems like relatively little effort is put into changing the minds of non-EA funders, and pushing them toward EA donation opportunities, and a lot more effort is put into shaping the prioritization work of a small number of EA thinkers.

It seems like the TL;DR of your post is that short-term risks are important because they impact the way long-term risks are solved.

Although people in EA are more concerned about AI and bioextinction (perhaps disproportionately so), I feel like your argument can extend to any intervention — any progress one makes on, for instance, disease prevention/malaria nets impacts the same outcome of economic wellbeing & thus transition + resilience against climate change.

Thanks a ton for your critique!

your argument can extend to any intervention — any progress one makes on, for instance, disease prevention/malaria nets impacts the same outcome of economic wellbeing & thus transition + resilience against climate change.

I think a lot of these arguments remind me of the narrow vs. broad intervention framework, where narrow interventions are targeted at mitigating a specific type of risk, while broad interventions include generally positive interventions like economic wellbeing, malaria nets, etc. that have ripple effects.

Your point would be that the systemic cascading lens enables us to justify any broad intervention through its nth order impacts.

But my response would be that I'm not necessarily advocating for broad interventions, especially ones that might be perceived as taking time, having unpredictable effects, and often working with very general concepts like "peace" or "education." While I still use n-th order effects to articulate my argument (and to express the importance of economic & political systems in long-term risk), I'm arguing for a very narrowly focused intervention – one meant to mitigate very specific political risks by securing stable supplies of the commodities necessary to live during times of general political crisis, elucidated through the systemic cascading risk framework.

I'd add that systemic cascading risks aren't just defined by ripple effects that travel through systems (then everything would definitionally be a systemic cascading risk or benefit), but rather by ripples that increase in magnitude due to system vulnerabilities, which confines the definition to a narrow subset of risks.

Although my critique at large is that EA has failed to connect complexity with longtermism, I'm arguing that the systemic cascading lens fills that gap – enabling specific, tractable, and targeted interventions.

I like the claims and the epistemic bridge, great post.

The "global wheat and barley export reduction" AFAIK was closer to 4% and inflated because of some statistical misrepresentations in the media. Price hikes are accurate though and that only adds to your argument. 4% reduction is big in a system that needs robustness of 97%+.[1]

On tractability, there's more room for criticism: it's hard to see exactly what I could do except build a drought monitoring system. 5% of global climate investments is still much more than all EA funding combined. Other potential solutions seem to be about assisting governments (which governments do pretty well) and creating new infrastructure in underserved areas. However, if we intervene at the wrong spot, that long-tail catastrophe will not be alleviated by our solution. That said, there are more tractable and neglected ideas in this vein, e.g. a quick brainstorm:

  • Increasing global coordination and "friendship", i.e. a political mediation cause area
  • Reducing centralization of core infrastructural planning, e.g. ensuring national independence from the EU (see Switzerland's intranational governance design)
  • Creating more institutions like Our World in Data that can draw inferences, monitor, and assist in situations like Covid-19, climate change, drought prediction, etc. (which they are doing an excellent job of with only 25 people)
  • Creating complex systems models of critical infrastructure in the world. The OECD, UN, and WHO already do a bit of this, but they're not very good. Creating more and better models might assist in predicting long-tail risks.

And the reason for the focus on linear effects is that nonlinear dynamical systems are rife with long-tail, low-probability, catastrophic events which are hard to predict, i.e. n-th order effects end up being hard to act on compared to linear effects, hence GiveWell et al. I still think it's highly relevant to include in our work, but it's definitely harder to act on.

I like the epistemic bridge - I agree that longtermism is too segregated from the existing ML networks and not embedded in society for reasons that MichelJusten also mentions.

  1. ^

    It might be different from 4% but it is lower than a third of global exports, depending on what "disrupted" means.