A motivating scenario: imagine you are trying to convince a skeptical friend that it is in fact possible to positively change the long-run future by actively seeking and pursuing opportunities to reduce existential risk.
Examples of things that come close but miss the mark:
- There are probably decent historical examples where people reduced existential risk, but where those people didn't really have longtermist-EA-type motivations (more likely "generally wanting to do good" combined with being in the right place at the right time)
- There are probably meta-level achievements that longtermist EA community members can take credit for (e.g. "getting lots of people to think seriously about reducing x-risk"), but these aren't very object-level or concrete
Did Superintelligence have a dramatic effect on people like Elon Musk? I can imagine Elon getting involved in AI even without it, and that involvement might have been even more harmful (e.g. starting an AGI lab with zero safety concerns).
Here's one notable quote about Elon (source), who started college over 20 years before Superintelligence:
Overall, causality is multifactorial and hard to analyze, so concepts like "causally downstream" can be misleading.
(Nonetheless, I do think it's plausible that publishing Superintelligence was a bad idea, at least in 2014.)