Writings mostly about systemic cascading risks.
This comment seems to violate EA Forum norms, particularly by assuming very bad faith on the part of the original poster (e.g. "these claims smell especially untrustworthy" and "I don't think these arguments are transparent"). The comments made here certainly offer very creative interpretations of the original post.
I believe you're aware that signatories such as Anders Sandberg and SJ Beard are not advocating for "folding EA into Extinction Rebellion", which is an extremely outlandish claim and accusation.
Many of the comments give untrue interpretations of the original statement, which substantively argues that the very young academic field of existential risk has a lot to learn from other academic fields, such as the disaster risk literature or science and technology studies. I believe this is a reasonable perspective, hence my agreement with the original post.
And it's absolutely possible to have a plurality of ideas from different academic fields while drawing a line for "homophobes, Trump supporters, and people who want China to invade Taiwan".
Thanks a ton for your critique!
Your argument could be extended to any intervention: any progress one makes on, for instance, disease prevention or malaria nets affects the same outcome of economic wellbeing, and thus both the transition and resilience against climate change.
A lot of these arguments remind me of the narrow vs. broad intervention framework, where narrow interventions are targeted at mitigating a specific type of risk, while broad interventions are generally positive interventions, like improving economic wellbeing or distributing malaria nets, that have ripple effects.
Your point would be that the systemic cascading lens enables us to justify any broad intervention through its nth order impacts.
But my response would be that I'm not necessarily advocating for broad interventions, especially ones that might be perceived as slow-acting, having unpredictable effects, and often working with very general concepts like "peace" or "education." While I still use nth-order effects to articulate my argument (and to express the importance of economic and political systems in long-term risk), I'm arguing for a very narrowly focused intervention: one meant to mitigate very specific political risks by securing stable supplies of life-critical commodities during times of general political crisis, as elucidated through the systemic cascading risk framework.
I'd further argue that systemic cascading risks aren't defined merely by ripple effects that pass through systems (otherwise everything would definitionally be a systemic cascading risk or benefit), but rather by ripples that increase in magnitude due to system vulnerabilities, which confines the definition to a narrow subset of risks.
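To make that distinction concrete, here's a minimal toy sketch (hypothetical numbers, not from the post): a shock passes through a chain of systems, and each system's "vulnerability" factor either amplifies (> 1) or dampens (< 1) the ripple. Under the definition above, only the amplifying case counts as a systemic cascading risk.

```python
def propagate(shock, vulnerabilities):
    """Return the shock's magnitude after passing through each system.

    Each vulnerability factor multiplies the incoming shock:
    factors > 1 amplify the ripple, factors < 1 dampen it.
    """
    magnitudes = [shock]
    for v in vulnerabilities:
        shock *= v
        magnitudes.append(shock)
    return magnitudes

# A resilient chain dampens the ripple over time...
resilient = propagate(1.0, [0.8, 0.9, 0.7])
# ...while a fragile chain amplifies it into a cascading risk.
fragile = propagate(1.0, [1.5, 1.3, 1.6])
```

The point of the sketch is just that "ripple effects exist" is not the criterion; whether the system's vulnerabilities grow or shrink the ripple is.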
Although my critique at large is that EA has failed to connect complexity with longtermism, I'm arguing that the systemic cascading lens fills that gap, enabling specific, tractable, and targeted interventions.
Thanks a ton for your kind response (and for being the guy that points something out). :)
"Counterfactual" & "replaceability" work too and essentially mean the same thing, so I'm really choosing which beautiful fruit I prefer in this instance (it doesn't really matter).
I slightly prefer the word contingent because it feels less hypothetical and more like you're pulling a lever for impact in the future, which reflects the spirit I want to create in community building. It also seems to reflect uncertainty better: e.g. the ability to shift the path dependence of institutions, the ability to shape long-term trends. Contingency captures how interventions affect the full probability spectrum and time-span, rather than just envisioning a hypothetical alternate-history world with and without an intervention in x years. Thus, despite hearing the other phrases, it was the first word that clicked for me, if that makes sense.
Terribly sorry for the late reply! I didn't realize I missed replying to this comment.
I appreciate your kind words, and I think your thoughts are very eloquent and ultimately tackle a core epistemic challenge:
to my knowledge we (EA, but also humanity) don't have many formal tools to deal with complexity/feedback loops/emergence, especially not at a global scale with so many different types of flows ... some time could be spent to scope what we can reasonably incorporate into a methodology that could deal with that complexity.
I recently wrote a new forum post on a framework/phrase I used tying together concepts from complexity science & EA, arguing that it can be used to provide tractable resilience-based solutions to complexity problems.
I love your thoughts on this.
Need to do more thinking on whether this point is correct, but a lot of what you're saying about forging our own institutions reminds me of Abraham Rowe's forum post on EA critiques:
EA is neglecting trying to influence non-EA organizations, and this is becoming more detrimental to impact over time.
I’m assuming that EA is generally not missing huge opportunities for impact. As time goes on, theoretically many grants / decisions in the EA space ought to be becoming more effective, and closer to what the peak level of impact possible might be.
Despite this, it seems like relatively little effort is put into changing the minds of non-EA funders, and pushing them toward EA donation opportunities, and a lot more effort is put into shaping the prioritization work of a small number of EA thinkers.
Constantly expanding list of mistakes I made / things I would change in this post (am not editing at the moment because this is an EA criticism contest submission):
Toby Ord wrote similarly that he preferred narrow over broad interventions because they can be targeted and thus most immediately effective without relying on too many causal steps.
I misinterpreted what Toby Ord was saying in The Precipice (page 268). He specifically claimed he preferred narrow/targeted over broad interventions because they can be targeted toward technological risks directly and thus can be expected to accomplish much more, compared to previous centuries. (He also made a neglectedness-based argument for targeted interventions.) I believe it was other people or other things I read (likely the source of my confusion) that made claims about causal steps using the targeted vs. broad framework.
I'm also not necessarily arguing for broad interventions. As commonly used, the narrow vs. broad framework doesn't fully capture my argument for the importance of systemic cascading risk, for multiple reasons:
For all those reasons, I'd probably remove this quote.
Refugees: ~216 million climate refugees by 2050 (World Bank Groundswell Report) caused by droughts and desertification, sea-level rise, coastal flooding, heat stress, land loss, and disruptions to natural rainfall patterns
I didn't realize the phrase "climate refugees" implied involuntary cross-border migration and mistook it for a blanket term for climate migration. Thanks to John Halstead for pointing this one out; through this quote, I unintentionally misrepresented the weight of the evidence.
If I were to edit and rephrase it, it'd look something like: "~216 million internally displaced climate migrants by 2050 (World Bank Groundswell Report), which can give a rough order-of-magnitude estimate for total cross-border climate migrants and refugees (figures which are much harder to quantify)".
I disagree with the following:
But I doubt you can make a case that’s robustly compelling and is widely agreed upon, enough to prevent the dynamics I worry about above.
Systemic cascading effects and path dependency might be very coherent consequentialist frameworks & catchphrases to resolve a lot of your epistemic concerns (and this is something I want to explore further).
Naive consequentialism might incentivize you to lie to "do whatever it takes to do good", but the impacts of lying can cascade and affect the bedrock institutional culture and systems of a movement. On aggregate, these cascading (second-order) effects will make it more difficult for people to trust each other and work together in honest ways, making the moral calculus not worth it.
Furthermore, this might have a path-dependent effect, analogous to a significant/persistent/contingent effect, where choosing this path encodes certain values in the institution and makes it harder for other community values to arise in the future.
This similarly generalizes to most "overoptimization becomes illogical" problems. Naive consequentialism and low-integrity epistemics rarely make sense in the long run anyway, so it's just a matter of dispelling simplified, naive models of reality and coherently phrasing the importance of epistemics, diversity, and plurality through a consequentialist lens.
Still relatively new to the community, so I might have the wrong view on this, but I'm always surprised by how openly EAs are willing to discuss flaws in the community and how concerned they are with maintaining solid epistemics.
E.g. I recently posted a submission to the EA criticism contest, and it's difficult for me to imagine any other subgroup pouring $100k into a contest that seriously considers and rewards internal and external criticism of its most fundamental values and community.
I agree with the following statement:
We need the type of system you're talking about, but we also need resiliency built into the system now.
My low-confidence rationale for including a section on modeling, scenario analysis, & its helpfulness to building resiliency is twofold:
1. Targeting & informing on-the-ground efforts: Overlaying accurate climate agriculture projections on top of food trading systems can help us determine which trade flows will be most relied on in the future and target interventions where they would be most effective and neglected, e.g. selecting between various agricultural interventions in different regions, lobbying for select policies or local food stocks, and tailoring food-resilience research/engineering efforts toward the countries and situations projected to need them most.
2. Influencing risk-sensitive actors: Having accurate trade-flow models can also help determine and project dangerous second-order economic consequences, creating more accurate risk analyses and thus further incentivizing governments and risk-sensitive organizations toward a coordinated systemic reform/response.
Open to having this opinion changed.