A motivating scenario: imagine you are trying to provide examples that would convince a skeptical friend that it is in fact possible to positively change the long-run future by actively seeking out and pursuing opportunities to reduce existential risk.
Examples of things that come close but miss the mark:
- There are probably decent historical examples where people reduced existential risk, but those people didn't really have longtermist-EA-type motivations (more likely "generally wanting to do good" plus being in the right place at the right time)
- There are probably meta-level things the longtermist EA community can take credit for (e.g. getting lots of people to think seriously about reducing x-risk), but these aren't very object-level or concrete
I think it's also easy to make the case that longtermist efforts have increased existential risk from artificial intelligence, since money and talent from longtermist sources helped grow some of the biggest hype machines in AI (DeepMind, OpenAI).
It's possible that EA has shaved a couple of counterfactual years off the timeline to catastrophic AGI, compared to a world where the community wasn't working on it.