Computer Science & Economics @ UPenn M&T Program. Co-president of Penn EA. Writings mostly about systemic cascading risks.
Thanks a ton for your critique!
Your argument can extend to any intervention — any progress one makes, for instance, on disease prevention/malaria nets impacts the same outcome of economic wellbeing & thus transition + resilience against climate change.
I think a lot of these arguments remind me of the narrow vs broad intervention framework, where narrow interventions are targeted interventions aimed at mitigating a specific type of risk, while broad interventions include generally positive interventions like economic wellbeing, malaria nets, etc. that have ripple effects.
Your point would be that the systemic cascading lens enables us to justify any broad intervention through its nth order impacts.
But my response would be that I'm not necessarily advocating for broad interventions, especially ones that might be perceived as taking time, having unpredictable effects, and often working with very general concepts like "peace" or "education." While I still use nth-order effects to articulate my argument (and to express the importance of economic & political systems in long-term risk), I'm arguing for a very narrowly focused intervention: one meant to mitigate very specific political risks by securing stable supplies of the commodities necessary to live through times of general political crisis, as elucidated by the systemic cascading risk framework.
I'd add that systemic cascading risks aren't defined merely by ripple effects that travel through systems (otherwise, everything would definitionally be a systemic cascading risk or benefit), but rather by ripples that increase in magnitude due to system vulnerabilities, which helps confine the definition to a narrow subset of risks.
Although my critique at large is that EA has failed to connect complexity with longtermism, I'm arguing that the systemic cascading lens fills that gap – enabling specific, tractable, and targeted interventions.
Thanks a ton for your kind response (and for being the guy that points something out). :)
"Counterfactual" & "replaceability" work too and essentially mean the same thing, so I'm really choosing which beautiful fruit I prefer in this instance (it doesn't really matter).
I slightly prefer the word contingent because it feels less hypothetical and more like you're pulling a lever for impact in the future, which reflects the spirit I want to create in community building. It also seems to reflect uncertainty better: e.g. the ability to shift the path dependence of institutions, the ability to shape long-term trends. Contingency captures how interventions affect the full probability spectrum and time-span, rather than just envisioning a hypothetical alternate-history world with and without an intervention in x years. Thus, despite hearing the other phrases, it was the first word that clicked for me, if that makes sense.
Terribly sorry for the late reply! I didn't realize I missed replying to this comment.
I appreciate your kind words, and I think your thoughts are very eloquent and ultimately tackle a core epistemic challenge:
to my knowledge we (EA, but also humanity) don't have many formal tools to deal with complexity/feedback loops/emergence, especially not at a global scale with so many different types of flows ... some time could be spent to scope what we can reasonably incorporate into a methodology that could deal with that complexity.
I recently wrote a new forum post on a framework/phrase I used tying together concepts from complexity science & EA, arguing that it can be used to provide tractable resilience-based solutions to complexity problems.
At various points in history, some dominant class - say capitalists, men, or white Europeans - have developed a set of concepts for describing and governing social reality which serve their own interests at the expense of overall welfare. As such these concepts come to embody a particular set of values. There are multiple ways this can happen - it could be a deliberate, pernicious act by members of the dominant class; it could be the result of unconscious biases of a group of researchers; or it could be the result of a systematic selection pressure, in which ideas that favour the dominant class are more likely to gain popularity and funding and thus have a wider influence. In any case, these concepts can come to form the foundation for many kinds of institutional formation, such as the constitution of a state, popular theories of economics or political economy, or the frameworks which underlie a technical discipline and resultant technologies. Once the institutional formation becomes a regular, common sense part of society, the values which informed its foundational concepts become locked-in. It takes a substantive (often revolutionary) moment to unlock these values and bring about a different status quo.
I liked the above quote especially, as well as the ending about how historical analysis can help identify suboptimal values and assumptions embedded within EA itself.
Oftentimes, technology development, research & development, and mathematical frameworks (e.g. game theory, microeconomics) are seen as independent of ideology. You make a convincing point that the concept of a value/ideology lock-in within technology has deep historical precedent that must be studied.
Thank you so much for writing this. It was very comprehensive and highlighted how the intersection of social values and technology may be overlooked in EA.
I especially liked how the "societal friction, governance capacity, and democracy" section of the forum post ties together strengthening democracy, inter-group dynamics, disenfranchised groups, and long-term technological development risk through the path dependence framework; it seems like a very relevant & eloquent explanation for government competence that we see play out even in current events.
A common argument is that on the margin, short and medium term AI issues are likely not neglected (as opposed to long-term issues) so one would not be able to make a big impact. I'd especially be curious about targeted, tractable interventions you believe may be worth looking into, where an additional EA on the margin would make a contingent impact or significantly leverage existing resources.
I love your thoughts on this.
Need to do more thinking on whether this point is correct, but a lot of what you're saying about forging our own institutions reminds me of Abraham Rowe's forum post on EA critiques:
EA is neglecting trying to influence non-EA organizations, and this is becoming more detrimental to impact over time.
I’m assuming that EA is generally not missing huge opportunities for impact. As time goes on, theoretically many grants / decisions in the EA space ought to be becoming more effective, and closer to what the peak level of impact possible might be.
Despite this, it seems like relatively little effort is put into changing the minds of non-EA funders, and pushing them toward EA donation opportunities, and a lot more effort is put into shaping the prioritization work of a small number of EA thinkers.
Constantly expanding list of mistakes I made / things I would change in this post (am not editing at the moment because this is an EA criticism contest submission):
Toby Ord wrote similarly that he preferred narrow over broad interventions because they can be targeted and thus most immediately effective without relying on too many causal steps.
I misinterpreted what Toby Ord was saying in The Precipice (page 268). He specifically claimed he preferred narrow/targeted over broad interventions because they can be targeted toward technological risks directly & thus can be expected to accomplish much more, compared to previous centuries. (He also made a neglectedness-based argument for targeted interventions.) I believe it was other people or other things I read (likely where the confusion comes from) that made claims about causal steps using the targeted vs broad framework.
I'm also not arguing for broad interventions, necessarily. As commonly used, the narrow vs broad framework doesn't fully capture my argument for the importance of systemic cascading risk for multiple reasons:
For all those reasons, I'd probably remove this quote.
Refugees: ~216 million climate refugees by 2050 (World Bank Groundswell Report) caused by droughts and desertification, sea-level rise, coastal flooding, heat stress, land loss, and disruptions to natural rainfall patterns
I didn't realize the phrase "climate refugees" implied involuntary cross-border migration and mistook it for a blanket term for climate migration. Thanks to John Halstead for pointing this one out; through this quote, I unintentionally misrepresented the weight of the evidence.
If I were to edit & rephrase it, it'd look something like: "~216 million internally displaced climate migrants by 2050 (World Bank Groundswell Report), which can give a rough order of magnitude estimate for total cross-border climate migrants and refugees (figures which are much harder to quantify)".
I disagree with the following:
But I doubt you can make a case that’s robustly compelling and is widely agreed upon, enough to prevent the dynamics I worry about above.
Systemic cascading effects and path dependency might be very coherent consequentialist frameworks & catchphrases to resolve a lot of your epistemic concerns (and this is something I want to explore further).
Naive consequentialism might incentivize you to lie to "do whatever it takes to do good", but the impacts of lying can cascade and affect the bedrock institutional culture and systems of a movement. On aggregate, these cascading (second-order) effects will make it more difficult for people to trust each other and work together in honest ways, making the moral calculus not worth it.
Furthermore, this might have a path-dependent effect, analogous to a significant/persistent/contingent effect, where choosing this path encodes certain values in the institution and makes it harder for other community values to arise in the future.
This similarly generalizes to most "overoptimization becomes illogical" problems. Naive consequentialism & low-integrity epistemics rarely make sense in the long run anyway, so it's just a matter of dispelling simplified, naive models of reality and coherently phrasing the importance of epistemics, diversity, and plurality through a consequentialist lens.
Still relatively new to the community, so I might have the wrong view on this - but I'm always pleasantly surprised by how openly EAs are willing to discuss flaws in the community & how concerned they are about maintaining solid epistemics within it.
E.g. I recently posted a submission to the EA criticism contest - and it's difficult for me to imagine any other subgroup that pours $100k into a contest to seriously consider and reward internal & external criticism of its most fundamental values and community.
I agree with the following statement:
We need the type of system you're talking about, but we also need resiliency built into the system now.
My low-confidence rationale for including a section on modeling, scenario analysis, & its helpfulness to building resiliency is twofold:
1. Targeting & informing on-the-ground efforts: Overlaying accurate climate agriculture projections on top of food trading systems can help us determine which trade flows will be most relied on in the future and target interventions where they would be most effective and neglected - e.g. selecting between various agriculture interventions in different regions, lobbying for select policies or local food stocks, and tailoring food resilience research/engineering efforts toward countries and situations that are projected to need them most.
2. Influencing risk-sensitive actors: Having accurate trade flow models can also help determine & project dangerous economic second-order consequences, creating more accurate risk analyses and thus further incentivizing governments and risk-sensitive organizations toward a coordinated systemic reform/response.
Open to having this opinion changed.
Quick thoughts: People might be a lot more sympathetic to migrants (or refugees) who are of similar cultural backgrounds to them, prompting less social tension and political extremism.
As a notable example, the political effects of Arab vs Ukrainian refugees on Europe are markedly different.