There has been a relationship, and active discussions, between people in the relevant parts of the UN and researchers at Xrisk orgs (FHI, CSER, FLI and others), including myself and (as noted above) Toby Ord, for 5+ years. I believe the SG has taken an active interest. I'm not sure what is appropriate for me to say on a public forum, but I'd be happy to discuss offline.
I think there are a few other considerations that may point in the direction of slightly higher salaries (or at least, avoiding very low salaries). EA skews young as a movement, but this is changing as people grow up 'with it' or as older people join. I think this is good. It's important to avoid making it more difficult for people to join or remain involved who have other financial obligations that come in a little later in life, e.g.:
- child-rearing
- supporting elderly parents

Relatedly, lower salaries can be easier to accept for longer for people who come from wealthier backgrounds and have better-off social support networks or expectations of inheritance etc. (it can feel very risky if one is only in a position to save minimally, and cannot otherwise build up rainy-day funds for unexpected financial needs).
Very cool, thanks! Relatedly, you might be interested in this literature-scanning approach for Xrisk/GCR literature. It doesn't provide cool graphs like this, but it scans newly released literature for papers with potential relevance to GCR, using an ML 'recommendation engine' trained on assessments of papers by various researchers in the field. You can sign up for a monthly digest of papers.
https://www.sciencedirect.com/science/article/pii/S0016328719303702?via%3Dihub
https://www.x-risk.net/methods/
FYI, more info about the scenario game is available here:
https://intelligencerising.org/
Writeup of the game from participants:
https://www.lesswrong.com/posts/ywKhqjgdKuHoQohYL/takeaways-from-the-intelligence-rising-rpg
[Edit: just found their donation page, and they received considerable OpenPhil support, so not a candidate based on the OP's criteria.]

One Day Sooner seem like a candidate. I don't know if they've received EA support, although at least one EA (David Manheim) works with them. I think they've done good work in bringing attention and legitimacy to the idea of human challenge trials in a pandemic situation, which seems plausibly like a very important thing for future pandemics.
https://www.1daysooner.org/
With regard to the people mentioned, neither are forum regulars, and my understanding is that neither have plans for continued collaborations with Phil.
I appreciate that these kinds of moderation decisions can be difficult, but I also don't agree with the warning to Halstead. And if it is to be given, then I am uncomfortable that Halstead has been singled out: it would seem consistent to apply the same warning to me, as I supported Halstead's claims and added my own, both without providing evidence.
+1; BERI have been a brilliant support. Strongly recommend applying!
I don't know how to embed snapshots, but anyone who wishes is welcome to type "phil torres" into LinkedIn, or to email me for the snapshots I've just taken; it brings up "Researcher at Centre for the Study of Existential Risk, University of Cambridge". As I say, it's unclear whether this is deliberate, and it may well be an oversight, but it has contributed to the mistaken external impression that Phil Torres is or was research staff at CSER.