A motivating scenario: imagine you are trying to come up with examples to help convince a skeptical friend that it is in fact possible to positively change the long-run future by actively seeking out and pursuing opportunities to reduce existential risk.
Examples of things that come close but miss the mark:
- There are probably decent historical examples where people reduced existential risk, but those people didn't really have longtermist-EA-type motivations (maybe more "generally wanting to do good" plus being in the right place at the right time)
- There are probably meta-level things that longtermist EA community members can take credit for (e.g. "get lots of people to think seriously about reducing x risk"), but these aren't very object-level or concrete
Is most of the AI capabilities work here causally downstream of Superintelligence, even if Superintelligence itself may have been (heavily?) influenced by Yudkowsky? Both Musk and Altman recommended Superintelligence, although Altman has also directly said that Yudkowsky has accelerated timelines the most:
https://twitter.com/elonmusk/status/495759307346952192?lang=en
https://blog.samaltman.com/machine-intelligence-part-1
https://twitter.com/sama/status/1621621724507938816
If things had stayed within the LW/Rat/EA community, that might have been best. If Yudkowsky hadn't written about AI, there might not be much of an AI safety community at all now (it might just be MIRI quietly working away at the problem, and most of MIRI seems to have given up now), and doom would be more likely, just later. Someone had to write about AI safety publicly to build the community, but writing and promoting a popular book on the topic is much riskier, because it brings the problem to the attention of uncareful people, including entrepreneurial types.
I guess they could have tried to keep the public writing limited to academia, but the AI community has been pretty dismissive of AI safety, so it might have been too hard to build the community that way.