A motivating scenario could be: imagine you are trying to provide examples to help convince a skeptical friend that it is in fact possible to positively change the long-run future by actively seeking and pursuing opportunities to reduce existential risk.
Examples of things that come close but miss the mark:
- There are probably decent historical examples where people reduced existential risk, but those people didn't really have longtermist-EA-type motivations (maybe more "generally wanting to do good" plus "being in the right place at the right time")
- There are probably meta-level things that longtermist EA community members can take credit for (e.g. "getting lots of people to think seriously about reducing x-risk"), but these aren't very object-level or concrete
As I say, I don't think one can "measure" the probability of existential risk. I think one can estimate it through considered judgment of the relevant arguments, but I'm not inclined to do so, and I don't think anyone else should be so inclined either. Any such probability would be somewhat arbitrary and open to reasonable disagreement. What I am willing to do is say things like "existential risk is non-negligible" and "we can meaningfully reduce it". These claims are easier to defend, and they're all we really need to justify working on reducing existential risk.
No idea. Even if the answer is "a lot" and we haven't made much progress, this doesn't lead me away from longtermism. Mainly because the stakes are so high, and I think we're still relatively new to all this, so I expect us to get more effective over time, especially as we actually get people into influential policy roles.
This may be because I'm slightly hungover, but you're going to have to ELI5 your point here!