Over my past year as a grantmaker in the global health and wellbeing (GHW) meta space at Open Philanthropy, I've identified some exciting ideas that could fill existing gaps. While these initiatives have significant potential, they require more active development and support to move forward.
The ideas I think could have the highest impact are:
1. Government placements/secondments in key GHW areas (e.g. international development), and
2. Expanded (ultra) high-net-worth ([U]HNW) advising.
Each of these ideas needs a very specific type of leadership and/or structure. More accessible options I’m excited about — particularly for students or recent graduates — could involve virtual GHW courses or action-focused student groups.
I can’t commit to supporting any particular project based on these ideas ahead of time, because the likelihood of success would heavily depend on details (including the people leading the project). Still, I thought it would be helpful to articulate a few of the ideas I’ve been considering.
I’d love to hear your thoughts, both on these ideas and any other gaps you see in the space!
Introduction
I’m Mel, a Senior Program Associate at Open Philanthropy, where I lead grantmaking for the Effective Giving and Careers program[1] (you can read more about the program and our current strategy here).
Throughout my time in this role, I’ve encountered great ideas, but have also noticed gaps in the space. This post shares a list of projects I’d like to see pursued, and would potentially want to support. These ideas are drawn from existing efforts in other areas (e.g., projects supported by our GCRCB team), suggestions from conversations and materials I’ve engaged with, and my general intuition. They aren’t meant to be a definitive roadmap, but rather a starting point for discussion.
At the moment, I don’t have capacity to more actively explore these ideas and find the right founders for related projects. That may change, but for now, I’m interested in