The central question in the current debate is whether marginal efforts should prioritize reducing existential risk or improving the quality of futures conditional on survival. Both are important, and both are neglected, though the latter admittedly more so, at least within EA. But this post examines the tractability of shaping the long-term future if humanity survives, and our uncertainty about whether we can do so effectively.
I want to briefly argue that, given the complexity of long-term trajectories, the lack of empirical evidence, and the difficulty of identifying robust interventions, efforts to improve future value are significantly less tractable than efforts to reduce existential risk.
We have strong reasons to think we know the likely sources of existential risk - as @Sean_o_h's new paper lays out very clearly. The most plausible risks are well known, and we have at least some paths towards mitigating them, if only in the form of not causing them. If we instead condition on humanity's survival, we are dealing with an open-ended set of possible futures that is neither well characterized nor well explored. Exploring those futures is itself not particularly tractable, given their branching nature and the complexity of the systems being predicted. And the problem is not just one of characterizing futures: the tractability of interventions decreases as a system's complexity increases, especially over multi-century timescales. The complexity of socio-technological and moral evolution makes it infeasible, in my view, to shape long-term outcomes with even moderate confidence. It seems likely that most interventions would have opposite signs across many plausible futures, and we are unlikely to know either the relative probabilities or the magnitudes of the impacts.
And despite @William_MacAskill's book on the topic, we have very limited evidence about what works to guide the future - one of the few criticisms of the entire premise of longtermism that I think should be generally convincing. The exception, of course, is avoiding extinction.
And compared to existential risk, where specific interventions may have clear leverage points such as biosecurity or AI safety, increasing the quality of long-term futures is a vast and nebulous goal. There is no single control knob for "future value," which makes interventions more speculative. Identifying interventions today that will robustly steer the future in a particular direction is therefore difficult for two reasons: as noted, we lack strong historical precedent for guiding complex civilizations over thousands of years, and the existence of unpredictable attractor states (e.g., technological singularities, value shifts) makes long-term interventions unreliable. Work to change this seems plausibly valuable, but also more interesting than important, as I previously argued.
Do you think "Most of these required community organising and protest as at least part of the process to achieve the concrete change" is that strong a statement? There is a pretty strong correlation between protest/organising and these changes. Elite consensus is clearly very important, but I think the voice of the masses can move elites toward consensus, so there's some chicken-and-egg there. Also, to mention a few cases where I don't think elite consensus was strong at the time of change, and where elites' hands were perhaps forced:
- Access to free HIV treatment (This I'm pretty sure of)
- Civil rights movement
- Women's suffrage
I do find this a tricky issue on which to keep a scout mindset here on the forum, as I find EAs in general are unusually against protest and organising compared to other communities I am part of. My feeling is that this is largely because many EAs are drawn more to research, debate, and policy than to social roles like organising and protest.
What makes you think it is overstated? I think it's a tricky counterfactual question with a lot of room for conjecture.