Disclaimer: This is a short post.
I was recently asked this on Twitter and realised I didn't have good answers.
In x-risk I'll include AI risk, biorisk, and nuclear risk. I'll count both technical and governance work, the other forms of work that indirectly support these, and work that reduces s-risk.
Answers I can think of are increased field-building and field-building capacity, (some) increased conceptual clarity of research, and increased capital available. None of these are very legible; they feel more like precursors to real work than real work itself. I can't point to actual percentage-point reductions in x-risk today that EA is responsible for. I can't point to projects that exist in the real world which a layperson could look at for ten seconds and recognise as significant and useful. I can do that for work in global health, for instance, and many non-EA movements also have achievements they can point to.
(This is not to discount the value of such work, or to claim EA is currently acting suboptimally. It is possible that the optimal thing to do is to spend years on illegible work before obtaining legible results.)
Am I correct in my view of current achievements, or is there something I'm missing? I would also love to be linked to other resources.
I wouldn't sell yourself short. IMO, any nuclear exchange would dramatically increase the probability of a global nuclear war, even if that probability is still small by non-x-risk standards.