Over the past year as a grantmaker in the global health and wellbeing (GHW) meta space at Open Philanthropy, I've identified some exciting ideas that could fill existing gaps. While these initiatives have significant potential, they require more active development and support to move forward.
The ideas I think could have the highest impact are:
1. Government placements/secondments in key GHW areas (e.g. international development), and
2. Expanded (ultra) high-net-worth ([U]HNW) advising
Each of these ideas needs a very specific type of leadership and/or structure. More accessible options I’m excited about — particularly for students or recent graduates — could involve virtual GHW courses or action-focused student groups.
I can’t commit to supporting any particular project based on these ideas ahead of time, because the likelihood of success would heavily depend on details (including the people leading the project). Still, I thought it would be helpful to articulate a few of the ideas I’ve been considering.
I’d love to hear your thoughts, both on these ideas and any other gaps you see in the space!
Introduction
I’m Mel, a Senior Program Associate at Open Philanthropy, where I lead grantmaking for the Effective Giving and Careers program[1] (you can read more about the program and our current strategy here).
Throughout my time in this role, I’ve encountered great ideas, but have also noticed gaps in the space. This post shares a list of projects I’d like to see pursued, and would potentially want to support. These ideas are drawn from existing efforts in other areas (e.g., projects supported by our GCRCB team), suggestions from conversations and materials I’ve engaged with, and my general intuition. They aren’t meant to be a definitive roadmap, but rather a starting point for discussion.
At the moment, I don’t have capacity to more actively explore these ideas and find the right founders for related projects. That may change, but for now, I’m interested in
It seems unclear to me that one model emitting more CO2 than one car necessarily implies that AI is likely to have an outsized impact on climate change. I think there are some missing calculations here: the number of models being trained, the number of cars in use, how much additional marginal CO2 AI is creating that isn't already accounted for by other sectors, and how much marginal impact on climate change we should expect from that additional CO2. With those in hand, we could assess how much additional short-term climate risk AI actually poses.
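To make the kind of calculation I mean concrete, here's a rough back-of-envelope sketch. All of the numbers below are placeholder assumptions I'm plugging in for illustration, not figures from the post or the paper:

```python
# Back-of-envelope comparison of fleet-level emissions (illustrative assumptions only).

co2_per_large_model_tons = 300         # assumed one-off cost of training a large model
large_models_trained_per_year = 1_000  # assumed number of such training runs per year
co2_per_car_tons_per_year = 4.6        # rough annual figure for a typical passenger car
cars_in_use = 1.4e9                    # rough size of the global car fleet (assumed)
global_co2_tons_per_year = 37e9        # rough global annual CO2 emissions (assumed)

ai_training_co2 = co2_per_large_model_tons * large_models_trained_per_year
car_co2 = co2_per_car_tons_per_year * cars_in_use

print(f"AI training: {ai_training_co2:,.0f} t/yr "
      f"({ai_training_co2 / global_co2_tons_per_year:.6%} of global emissions)")
print(f"Cars:        {car_co2:,.0f} t/yr "
      f"({car_co2 / global_co2_tons_per_year:.1%} of global emissions)")
```

Whatever the real inputs turn out to be, the point is that the comparison has to be made at the fleet level, not one model versus one car.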
It looks like some people downvoted you, and my guess is that it may have to do with the title of the post. It's a strong claim, but also not as informative as it could be; it doesn't mention anything about climate change or GHGs, for instance.
Similarly, one could be concerned that the rapid economic growth that AI is expected to bring about could cause a lot of GHG emissions, unless somehow we (or they) figure out how to use clean energy instead.
I think you may have forgotten to add a hyperlink?
Yes, my apologies! I've added the necessary corrections.
Thanks.
The highest estimate they find is for Neural Architecture Search, which they estimated as emitting 313 tons of CO2 after training for over 30 years of hardware time. This suggests to me that they're using an inappropriate hardware choice! Additionally, the work they reference - here - does not seem to be the sort of work you'd expect to see widely used. Cars emit a lot of CO2 because everyone has one; most people have no need to search for new transformer architectures. The answers from one search could presumably be used for many applications.
Most of the models they train produce dramatically lower estimates.
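And since a single architecture search can serve many downstream models, the one-off cost gets amortized. A quick sketch, where the number of downstream users and the per-car lifetime figure are both assumptions I'm making for illustration:

```python
# Amortizing the one-off NAS cost over downstream reuse (assumed numbers).
nas_co2_tons = 313           # the paper's highest (one-off) estimate
downstream_models = 10_000   # assumed number of models reusing the discovered architecture
car_lifetime_co2_tons = 57   # assumed lifetime emissions of one car, fuel included

per_model_share_tons = nas_co2_tons / downstream_models
print(f"Per-model share of the search: {per_model_share_tons * 1000:.1f} kg CO2, "
      f"or {per_model_share_tons / car_lifetime_co2_tons:.4%} of one car's lifetime emissions")
```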
I also don't really understand how their estimates for renewable generation for the cloud companies are so low. Amazon say they were 50% renewable in 2018, but the paper only gives them 18% credit, and Google say they are CO2 neutral now. It makes sense that they should look quite efficient, given that cloud datacenters are often located near geothermal or similar power sources. This 18% is based on a Greenpeace report which I do not really trust.
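For a sense of how much the renewable-credit assumption matters, here's a toy calculation. The grid intensity and energy figures are assumptions, and it simplistically treats renewable (or offset) electricity as zero-carbon:

```python
# How the assumed renewable share moves an emissions estimate (simplified model).
grid_intensity_kg_per_kwh = 0.45   # assumed fossil-grid carbon intensity
energy_used_kwh = 100_000          # assumed electricity for a training run

# 18% is the paper's credit, 50% is Amazon's 2018 claim,
# 100% treats Google's carbon-neutral claim as zero net emissions.
for renewable_share in (0.18, 0.50, 1.00):
    co2_kg = energy_used_kwh * grid_intensity_kg_per_kwh * (1 - renewable_share)
    print(f"{renewable_share:.0%} renewable -> {co2_kg:,.0f} kg CO2")
```

On this toy model, moving from the Greenpeace figure to Amazon's own figure cuts the estimate by roughly 40%, and taking Google's claim at face value zeroes it out.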
Finally, I found this unintentionally very funny:
This whole paragraph is totally different to the rest of the paper. It appears in the conclusion section, but isn't really a conclusion from anything in the main body - it appears the authors simply wanted to share some left-wing opinions at the end. But this 'conclusion' is exactly backwards - if training models is bad for the environment, it is good to prevent too many people from doing it! And if cloud computing is more environmentally friendly than buying your own GPU, it is good that people are forced into using it!
Overall, this paper was not very convincing that training models will be a significant driver of climate change. And there are compelling reasons to be less worried about climate change than about AGI. So I don't think this made a strong case that the main AI risk concern is the secondary effect on climate change.