Disclaimer: This is a short post.
I was recently asked this on Twitter and realised I didn't have good answers.
In x-risk I'll include AI risk, biorisk, and nuclear risk. I'll count both technical and governance work, all the other work that indirectly helps produce these, and anything else that helps reduce x-risk. I'll also include work that reduces s-risk.
Answers I can think of are increased field-building and field-building capacity, (some) increased conceptual clarity of research, and increased capital available. None of these are very legible; there is a sense in which they feel like precursors to real work rather than real work itself. I can't point to actual percentage-point reductions in x-risk today that EA is responsible for. I can't point to projects in the real world that a layman could look at for ten seconds and recognise as significant and useful results. I can do that for work in global health, for instance, and many non-EA movements also have achievements they can point to.
(This is not to discount the value of such work, or to claim EA is currently acting suboptimally. It is possible that the optimal thing to do is to spend years on illegible work before obtaining legible results.)
Am I correct in my view of current achievements, or is there something I'm missing? I would also love to be linked to other resources.

I might've slightly decreased nuclear risk. I worked on an Air Force contract where I trained neural networks to distinguish between earthquakes and clandestine nuclear tests given readings from seismometers.
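For a flavour of what this kind of work looks like, here is a minimal sketch of an earthquake-vs-explosion waveform classifier. To be clear, this is purely illustrative: the actual contract code is not public, and the data, architecture, and hyperparameters below are all invented, with random tensors standing in for real seismograms.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for seismometer data: 1-channel waveform windows
# labelled 0 = earthquake, 1 = explosion. Real seismograms replaced by noise.
N, T = 64, 2048                      # batch size, samples per window
waveforms = torch.randn(N, 1, T)     # placeholder seismograms
labels = torch.randint(0, 2, (N,))   # placeholder event labels

# A small 1-D CNN. The intuition: explosions and earthquakes differ in the
# relative energy of their P and S waves, a pattern convolutions can pick up.
model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=15, stride=2), nn.ReLU(),
    nn.Conv1d(16, 32, kernel_size=15, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(32, 2),                # logits: earthquake vs explosion
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):               # toy training loop
    optimizer.zero_grad()
    loss = loss_fn(model(waveforms), labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```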
The point of this contract was to aid in the detection (by the Air Force and the UN) of secret nuclear weapon development by signatories to the Comprehensive Nuclear-Test-Ban Treaty and the Nuclear Non-Proliferation Treaty. (So basically, Iran.) The existence of such monitoring was intended to discourage "rogue nations" (Iran) from developing nukes.
That said, I don't think an Iran-Israel exchange would constitute an existential risk, unless it then triggered a global nuclear war. It's also not clear that my contribution to the contract actually increased the strength of the deterrent to Iran. However, if (a descendant of) my model ends up being used by NATO, perhaps I helped by decreasing the chance of a false positive.
Disclaimer: This was before I had ever heard of EA. Still, I've always been somewhat EA-minded, so maybe you can attribute this to proto-EA reasoning. When I was working on the project, I remember telling myself that even a very small reduction in the odds of a nuclear war happening meant a lot for the future of mankind.
Thank you for your work!