Hi Ryan - in terms of the Fellowship, I have a lot of thoughts about what we're trying to do, which feel better suited to "musing, with uncertainty" than "writing an internet comment", so let me know if you want to call/chat about it some time? But the short answer is I think the key pieces to keep in mind are to view the fellowship as 1) a community, not just individual scholarships handed out, and as such also 2) a multi-year project, built slowly.
We didn't look into these specifically. We'd welcome additional research to investigate what their programs are and whether there's room for more funding!
Thanks for this Peter. We've done some work on "6.) Are there places EAs should donate that focus on coronavirus response that are particularly promising to donate to, relative to existing charities EAs like?"
Thanks for the post! A donation group I'm in just published a similar analysis of our own giving: https://forum.effectivealtruism.org/posts/opdMXibKjkoL69s96/prioritizing-covid-19-interventions-and-individual-donations
We think Johns Hopkins CHS looks less good right now than this post suggests, and DMI is one of our recommendations.
Look for lessons from the Open Philanthropy AI Fellowship program. [...] There are likely aspects of its operations, including how it sources and selects candidates, that could help other organizations become more diverse.
Daniel Dewey and I run this program! Please reach out if you’re hiring for an EA org or running a fellowship program and want to bounce ideas off us. We would be delighted to chat.
I also want to clarify that the Open Phil AI Fellowship is a scholarship program for PhD students, so the students are not employees or staff.
Catherine here; I work for Open Phil on the technical AI program area. I'm not going to comment fully on our entire case for the Open Phil AI Fellowship program, but I just want to address some things that seem wrong to me here:
“early-career AI safety researchers”
The Open Phil AI PhD Fellows are mostly not early-career "AI safety" researchers (see the fellowship description here).
The pool of AI safety-oriented PhD students across the world is a stronger cohort in total than any of these particular groups (because it includes them), and not much weaker on average.
I don’t think this would be true, even if the “it includes them” claim were true. I think you need much more evidence to justify a claim that “a larger set containing X is not much weaker on average than the set X itself”.
there are more students from top schools moving into AI safety than econ, philosophy, and GCBRs
? I think you’re claiming there are more grad-school-bound undergrads-from-top-schools, total, aspiring to be “AI safety researchers” than to be economists? This seems definitely false to me. Am I misunderstanding?
How to tell when it's time to leave the private sector for non-profits?
Look at their job postings. Do you even plausibly fit the job postings? Do you want the job? If so, apply.
Me, I think? I recall lamenting about how the "game of telephone" implied by memetic dynamics reduces any nuanced message to about 4 or 5 words.
(In general & broadly: you're welcome to name me as an "inspired by" conversation partner without asking. If you're interested in paraphrasing my views, you can check the paraphrase with me.)
Thanks for this - the concept of "network bandwidth" is a helpful way to think about bottlenecks similar to "mentorship bandwidth", but which also include limited access to e.g. personal face-to-face conversations with people enmeshed in the oral tradition & community that make up any direct work area.