The Coursera link is broken; I suspect you mean this course: Writing in the Sciences | Coursera
This is probably too broad, but here's Open Philanthropy's list of case studies on the History of Philanthropy, which includes ones they have commissioned. Most are not by EAs, with the exception of Some Case Studies in Early Field Growth by Luke Muehlhauser.
Edit: fixed links
I realize this is a bit late, but all of the OneDrive links say the item "might not exist or is no longer available", then ask me to sign in to a Microsoft account.
It's happened a few times at our local meetup (South Bay EA) that someone new says something like, "Okay, I'm a fairly good ML student who wants to decide on a research direction for AI safety." In the past we've given fairly generic advice like "listen to this 80k podcast on AI safety" or "apply to AIRCS". One of our attendees went on to join OpenAI's safety team after this advice and gave us some attribution for it. While this probably leaves folks a little better off, it feels like we could do better for them.
If you had to give someone more concrete, object-level advice on how to get started in AI safety, what would you tell them?
I’m a fairly good ML student who wants to decide on a research direction for AI Safety.
I'm not actually sure whether I think it's a good idea for ML students to try to work on AI safety. I am pretty skeptical of most of the research done by pretty good ML students who try to make their research relevant to AI safety: it usually feels to me like their work ends up not contributing to one of the core difficulties, and I think they might have been better off if they'd instead spent their effort trying to become really good at ML in...
A similar idea is expressed in Holly Elmore's blog post: We are in triage every second of every day.
Thank you, Julia, for making the EA movement feel like an actual community by and for human beings.