Thanks a ton for your critique!
Your argument can extend to almost any intervention — any progress one makes on disease prevention/malaria nets, for instance, impacts the same outcome of economic wellbeing & thus the transition + resilience against climate change.
A lot of these arguments remind me of the narrow vs broad intervention framework, where narrow interventions are targeted at mitigating a specific type of risk while broad interventions include generally positive interventions like economic wellbeing, malaria nets, etc. that have ...
Thanks a ton for your kind response (and for being the guy that points something out). :)
"Counterfactual" & "replaceability" work too and essentially mean the same thing, so I'm really choosing which beautiful fruit I prefer in this instance (it doesn't really matter).
I slightly prefer the word contingent because it feels less hypothetical and more like you're pulling a lever for impact in the future, which reflects the spirit I want to create in community building. It also seems to reflect uncertainty better: e.g. the ability to shift the path dependence...
Terribly sorry for the late reply! I didn't realize I missed replying to this comment.
I appreciate your kind words, and I think your thoughts are very eloquent and ultimately tackle a core epistemic challenge:
to my knowledge we (EA, but also humanity) don't have many formal tools to deal with complexity/feedback loops/emergence, especially not at a global scale with so many different types of flows ... some time could be spent to scope what we can reasonably incorporate into a methodology that could deal with that complexity.
...At various points in history, some dominant class - say capitalists, men, or white Europeans - have developed a set of concepts for describing and governing social reality which serve their own interests at the expense of overall welfare. As such these concepts come to embody a particular set of values. There are multiple ways this can happen - it could be a deliberate, pernicious act by members of the dominant class; it could be the result of unconscious biases of a group of researchers; or it could be the result of a systematic selection pressure, in whi
Thank you so much for writing this. It was very comprehensive and highlighted how the intersection of social values and technology may be overlooked in EA.
I especially liked how the "societal friction, governance capacity, and democracy" section of the forum post ties together strengthening democracy, inter-group dynamics, disenfranchised groups, and long-term technological development risk through the path dependence framework; it seems like a very relevant & eloquent explanation for government competence that we see play out even in current eve...
I love your thoughts on this.
Need to do more thinking on whether this point is correct, but a lot of what you're saying about forging our own institutions reminds me of Abraham Rowe's forum post on EA critiques:
...EA is neglecting trying to influence non-EA organizations, and this is becoming more detrimental to impact over time.
I’m assuming that EA is generally not missing huge opportunities for impact. As time goes on, theoretically many grants / decisions in the EA space ought to be becoming more effective, and closer to what the peak level of impact possi
Constantly expanding list of mistakes I made / things I would change in this post (am not editing at the moment because this is an EA criticism contest submission):
1)
Toby Ord wrote similarly that he preferred narrow over broad interventions because they can be targeted and thus most immediately effective without relying on too many causal steps.
I misinterpreted what Toby Ord was saying in The Precipice (page 268). He specifically claimed he preferred narrow/targeted over broad interventions because they can be targeted toward technological risks direc...
I disagree with the following:
But I doubt you can make a case that’s robustly compelling and is widely agreed upon, enough to prevent the dynamics I worry about above.
Systemic cascading effects and path dependency might be very coherent consequentialist frameworks & catchphrases to resolve a lot of your epistemic concerns (and this is something I want to explore further).
Naive consequentialism might incentivize you to lie to "do whatever it takes to do good", but the impacts of lying can cascad...
I agree with the following statement:
We need the type of system you're talking about, but we also need resiliency built into the system now.
My low-confidence rationale for including a section on modeling, scenario analysis, & its helpfulness to building resiliency is twofold:
1. Targeting & informing on-the-ground efforts: Overlaying accurate climate agriculture projections on top of food trading systems can help us determine which trade flows will be most relied on in the future and target interventions where they would be most effective and neglec...
Quick thoughts: People might be a lot more sympathetic to migrants (or refugees) who are of similar cultural backgrounds to them, prompting less social tension and political extremism.
As a notable example, the political effects of Arab vs Ukrainian refugees on Europe are markedly different.
I didn't realize the phrase "climate refugees" implied involuntary cross-border migration and mistook it for a blanket term for climate migration. Thanks for the catch!
For the sake of fairness for the EA criticism contest, I won't edit the mistake now but maybe after the competition winners have been announced. If I were to edit & rephrase it, it'd look something like:
...~216 million internally displaced climate migrants by 2050 (World Bank Groundswell Report), which can give a rough order of magnitude estimate for total cross-border climate migrants and
Hey Thomas! Love the feedback & follow-up from the conversation. Thanks for taking so much time to think this over -- this is really well-researched. :)
In response to your arguments:
1 -> 2 is generally well established by climate literature. I think the quote you provided gives me good reasons for why climate war may not be perfectly rational; however, humans don't act in a perfectly rational way.
There are clear historical correlations that exist between rainfall patterns and civil tensions, expert opinions on climate causing violent con...
Thank you so much for this well-written article. I especially love the calculations on cost-effectiveness & the comparison of newborn deaths versus other EA cause areas – your proposal clearly makes sense as an alternate GiveWell cause area from a DALYs perspective.
As a student during the pandemic, I’m quite skeptical of online education – but on the other hand, the unit economics are too good for me to ignore. It only takes one decent, quality course to scale and one can have an outsized return on investment.
Therefore, I’d love to know: how do you...
This is very fair criticism and I agree.
For some reason, when writing order of magnitude, I was thinking about existential risks that may have a 0.1% or 1% chance of happening being multiplied into the 1-10% range (e.g. nuclear war). However, I wasn't considering many of the existential risks I was actually talking about (like biosafety, AI safety, etc) - it'd be ridiculous for AI safety risk to be multiplied from 10% to 100%.
I think the estimate of a great power war increasing the total existential risk by 10% is much more fair than my estimate; bec...
This point has helped me understand the original post more.
I feel that too often, EAs take current EA frameworks and ways of thinking for granted instead of questioning those frameworks and actively trying to identify flaws and built-in assumptions. Thinking through and questioning those perspectives is a good exercise in general, but also extremely helpful for contributing to the motivating worldview of the community.
Still don't believe that this necessarily means EAs "tend toward the religious" - there are probably several layers of nuance that are...
Hey! I liked certain parts of this post and not others. I appreciate the thoughtfulness with which you critique EA through this post.
On your first point about the AI messiah:
I think the key distinction is that there are many reasons to believe this argument about the dangers of an AGI is correct, though. Even if many claims with a similar form are wrong, that doesn't exclude this specific claim from being right.
"Climate scientists keep telling us about how climate change is going to be so disastrous and we need to be prepared. ...
Maintaining that healthy level of debate, disagreement, and skepticism is critical, but harder to do when an idea becomes more popular. I believe most of the early "converts" to AI Safety have carefully weighed the arguments and made a decision based on analysis of the evidence. But as AI Safety becomes a larger portion of EA, the idea will begin to spread for other, more "religious" reasons (e.g., social conformity, $'s, institutionalized recruiting/evangelization, leadership authority).
As an example, I'd put the belief in prediction markets as an E...
Thanks a ton for your comment! I'm planning to write a follow-up EA forum post on cascading and interlinking effects - and I agree with you in that I think a lot of times, EA frameworks only take into account first-order impacts while assuming linearity between cause areas.
Thanks a ton Darren! I'd love to connect with you — and I found the ideas you linked to interesting. Thanks for introducing me to these ideas.
I completely agree with you — I think I ended up focusing on climate change specifically because it is the most clear, well-studied manifestation of "Earth Systems Health" gone wrong and potentially causing existential risk. However, emphasizing a broader need to preserve the stability of Earth's systems is extremely valuable — and encompasses climate change.
Reducing greenhouse gas emissions may be the most imp...
Hey Johannes! I really appreciate the feedback, and I love the work you guys are doing through Founders Pledge. I appreciate that you also believe sociopolitical existential risk factors are an important element worth consideration.
I wish there were a lot more quantitative evidence on sociopolitical climate risk — I had to lean on a lot of qualitative expert sociopolitical analyses for this forum post. I acknowledge a lot of the scenarios I talk about here lean on the pessimistic side. In scenarios where there is high(er) governmental competence and societ...
Acknowledgements to Esban Kran, Stian Grønlund, Liam Alexander, Pablo Rosado, Sebastian Engen, and many others for providing feedback and connecting me with helpful resources while I was writing this forum post. :-)
Interested in the forthcoming successor to EA Hub - to what extent do EA organizations require software engineers to build these networking platforms? I (and probably many other college student EAs over the summer) would be really interested in working on a software engineering project to create a Swapcard-and-EA-hub-but-better.
It'd be cool to gather a team of part-time or interning CS/SWE college students and invest in them, given how much effort and money goes into EA conference events but how difficult and time-consuming post-conference follow-ups are.
I really, really like this approach! I like how this exercise doesn’t box in your thinking - rather, it is a very simple and plain “What do you want to do, now how do you get there?" reflection. It leaves a lot of room for imagination, creativity, and interpretation that will differ based on how you imagine solving your specific cause area.
This comment seems to violate EA Forum norms, particularly by assuming very bad faith from the original poster (e.g. "these claims smell especially untrustworthy" and "I don't think these arguments are transparent."). The comments certainly offer very creative interpretations of the original post.
I believe you're aware that signatories such as Anders Sandberg and SJ Beard are not advocating for "folding EA into Extinction Rebellion" -- an extremely outlandish claim and accusation.
Many of the comments made give untrue interpretations of th...