All of Shri_Samson's Comments + Replies

Hauke Hillebrandt · 1y
Cheers- edited the original :)
MichaelA's Shortform

This is probably too broad but here's Open Philanthropy's list of case studies on the History of Philanthropy which includes ones they have commissioned, though most are not done by EAs with the exception of Some Case Studies in Early Field Growth by Luke Muehlhauser.

Edit: fixed links

Yeah, I think those are relevant, thanks for mentioning them! It looks like the links lead back to your comment for some reason (I think I've done similar in the past). So, for other readers, here are the links I think you mean: 1 [], 2 []. (Also, FWIW, if an analysis is by a non-EA but commissioned by an EA, I'd say it essentially counts as an "EA analysis" for my purposes. This is because I expect that such work's "precise focuses or methodologies may be more relevant to other EAs than would be the case with [most] non-EA analyses".)
Candidate Scoring System, Fifth Release

I realize this is a bit late, but all of the OneDrive links say the item "might not exist or is no longer available" and then ask me to sign in to a Microsoft account.

Sorry for my own lateness. I have removed all old versions. The most recent PDF report can be found here (I am keeping this as a permanent link): !At2KcPiXB5rkyABaEsATaMrRDxwj It contains permanent links to the Excel model, public draft, etc. Now when I make a new version, I save over the previous version while keeping the same link.
I'm Buck Shlegeris, I do research and outreach at MIRI, AMA

It's happened a few times at our local meetup (South Bay EA) that someone new shows up and says something like "okay, I'm a fairly good ML student who wants to decide on a research direction for AI safety." In the past we've given fairly generic advice like "listen to this 80k podcast on AI safety" or "apply to AIRCS". One of our attendees went on to join OpenAI's safety team after this advice, and gave us some attribution for it. While this probably makes folks a little better off, it feels like we could do better for them.

If you had to give someone more concrete, object-level advice on how to get started in AI safety, what would you tell them?

I’m a fairly good ML student who wants to decide on a research direction for AI Safety.

I'm not actually sure whether I think it's a good idea for ML students to try to work on AI safety. I am pretty skeptical of most of the research done by pretty good ML students who try to make their research relevant to AI safety; it usually feels to me like their work ends up not contributing to one of the core difficulties, and I think that they might have been better off if they'd instead spent their effort trying to become really good at ML in...

Wonderful post by Holly, thank you for sharing. To answer Aaron's OP question: to me it just feels good in the same way that making good decisions in a game, or winning a game, feels good, except in a deeper, more rewarding sense (with games, the good feeling can quickly fade when I realize that winning has trivial real-world value), because I think that doing EA is essentially the life game that actually matters according to our values. It feels like I'm doing the right thing.

Note that I get my warm fuzzies from striving to do good in an EA sense. To the extent that I realize that an act of helping someone is not optimal for me to do in an EA sense, I feel less good about doing it.
A contact person for the EA community

Thank you, Julia, for making the EA movement feel like an actual community by and for human beings.