A motivating scenario: imagine you are trying to provide examples that would help convince a skeptical friend that it is in fact possible to positively change the long-run future by actively seeking and pursuing opportunities to reduce existential risk.
Examples of things that come close but miss the mark:
- There are probably decent historical examples where people reduced existential risk, but where those people didn't really have longtermist-EA-type motivations (maybe more "generally wanting to do good" plus "in the right place at the right time")
- There are probably meta-level things that longtermist EA community members can take credit for (e.g. "get lots of people to think seriously about reducing x risk"), but these aren't very object-level or concrete
Perhaps not, but if a movement is happy to use estimates like "our x-risk is 17% this century" to justify working on existential risks and to call this the most important thing you can do with your life, yet cannot measure how its work actually decreases that 17% figure, it should at the very least reconsider whether its approach is achieving its stated goals.
I think this is misleading, because:
- Longtermism has been part of EA since close to its very beginning, and many senior leaders in EA are longtermists.
- It's true that AI safety as an area is newer than global health, but given that EA GHD isn't taking credit for things that happened before EA existed, like eradicating smallpox, I don't know if this is actually the "main reason".
You might buy into the longtermism argument at a general level ("future lives matter", "the future is large", "we can affect the future"), but update on some of the details, such that you now think planning for and affecting the far future is much more intractable or premature than you previously thought. Otherwise, are you saying there's nothing that could happen that would change your mind about whether longtermism is a good use of EA resources?