A motivating scenario: imagine you are trying to provide examples to convince a skeptical friend that it is in fact possible to positively change the long-run future by actively seeking and pursuing opportunities to reduce existential risk.
Examples of things that come close but miss the mark:
- There are probably decent historical examples where people reduced existential risk, but where those people didn't really have longtermist-EA-type motivations (more "generally wanting to do good" plus "being in the right place at the right time")
- There are probably meta-level things that longtermist EA community members can take credit for (e.g. "getting lots of people to think seriously about reducing x-risk"), but these aren't very object-level or concrete
I disagree - what do you think the likelihood of a civilization-ending event from engineered pandemics is, and what do you base this forecast on?
What % of longtermist $ and FTEs do you think are being spent on trying to influence policy versus on technical or technological solutions? (I would consider many of the latter concrete + legible)
That was me trying to steelman your justification for the lack of concrete/legible wins ("longtermism is new") by thinking of clearer ways in which longtermism differs from neartermist causes, and that requires looking outside the EA space.