Like many organizations, Open Philanthropy has had multiple founding moments. Depending on how you count, we will be either seven, ten, or thirteen years old this year. Regardless of when you start the clock, it’s possible that we’ve changed more in the last two years than over our full prior history. We’ve more than doubled the size of our team (to ~110), nearly doubled our annual giving (to >$750M), and added five new program areas.
As our track record and volume of giving have grown, we are seeing more of our impact in the world. Across our focus areas, our funding played a (sometimes modest) role in some of 2023’s most important developments:
- We were among the supporters of the clinical trials that led to the World Health Organization (WHO) officially recommending the R21 malaria vaccine. This is the second malaria vaccine recommended by WHO, which expects it to enable “sufficient vaccine supply to benefit all children living in areas where malaria is a public health risk.” Although the late-stage clinical trial funding was Open Philanthropy’s first involvement with R21 research, that isn’t the case for our new global health R&D program officer, Katharine Collins, who invented R21 as a grad student.
- Our early commitment to AI safety has contributed to increased awareness of the associated risks and to early steps to reduce them. The Center for AI Safety, one of our AI grantees, made headlines across the globe with its statement calling for AI extinction risk to be a “global priority alongside other societal-scale risks,” signed by many of the world’s leading AI researchers and experts. Other grantees contributed to many of the year’s other big AI policy events, including the UK’s AI Safety Summit, the US executive order on AI, and the first International Dialogue on AI Safety, which brought together scientists from the US and China to lay the foundations for future cooperation on AI risk (à la the Pugwash Conferences in support of nuclear disarmament).
- The US Supreme Court upheld California’s Proposition 12, the nation’s strongest farm animal welfare law. We were major supporters of the original initiative and helped fund its successful legal defense.
- Our grantees in the YIMBY (“yes in my backyard”) movement — which works to increase the supply of housing in order to lower prices and rents — helped drive major middle housing reforms in Washington state and California’s legislation streamlining the production of affordable and mixed-income housing. We’ve been the largest national funder of the YIMBY movement since 2015.
We’ve also encountered some notable challenges over the last couple of years. Our available assets fell by half and then recovered half their losses. The FTX Future Fund, a large funder in several of our focus areas, including pandemic prevention and AI risks, collapsed suddenly and left a sizable funding gap in those areas. And Holden Karnofsky — my friend, co-founder, and our former CEO — stepped down to work full-time on AI safety.
Throughout these changes, we’ve remained devoted to our mission of helping others as much as we can with the resources available to us. But it’s a good time to step back and reflect.
The rest of this post covers:
- Brief updates on grantmaking from each of our 12 programs.
- Our leadership changes over the past year.
- Our chaotic macro environment over the last couple of years.
- How that led us to revise our priorities, and specifically to expand our work to reduce global catastrophic risks.
- Other lessons we learned over the past year.
- Our plans for the rest of 2024.
Because it feels like we have more to share this year, this post is longer and more detailed than my updates from the past two years. I’m curious to hear what you think of it — if you have feedback, you can find me on Twitter/X at @albrgr or email us at info@openphilanthropy.org.
You can read the rest of this post at Open Philanthropy's website.
Without any context on this situation, I can totally imagine worlds where this is reasonable behaviour, though perhaps poorly communicated, especially if SFF didn't know the grantee already had OpenPhil funding. I personally had a grant from OpenPhil approved for X, but in the meantime another grantmaker gave me a smaller grant for y < X, and OpenPhil agreed to instead fund me for X - y, which I thought was extremely reasonable.
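For concreteness, here's a minimal sketch of that arithmetic in Python (the function name and numbers are hypothetical illustrations of the rule as I experienced it, not anything OpenPhil actually runs):

```python
def topped_up_grant(approved: float, other_funding: float) -> float:
    """Fund the originally approved amount, minus whatever other
    grantmakers have committed in the meantime (never below zero)."""
    return max(0.0, approved - other_funding)

# Hypothetical numbers: approved for X = $50K, another grantmaker
# later gives y = $20K, so OpenPhil instead funds X - y = $30K.
print(topped_up_grant(50_000, 20_000))  # 30000.0
```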
In theory, you can imagine OpenPhil wanting to fund only their "fair share" of a project, split evenly across all interested grantmakers. But it seems harmful and inefficient to wait for other grantmakers to confirm or deny, so "I'll give you 100%, but lower that to 50% if another grantmaker is later willing to go in as well" seems like a more efficient version of the same policy.
I can also imagine that they think, e.g., that a project is good if funded up to $100K, but worse if funded up to $200K (e.g. because it would try to scale too fast, as has happened with multiple AI safety projects that I know of!). If OpenPhil funds $100K and the counterfactual is $0, that's a good grant. But if SFF also provides $100K, that totally changes the terms, and now OpenPhil's grant is actively negative (from their perspective).
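Here's a toy model of that logic (all numbers and marginal values are made up for illustration): each dollar up to $100K creates value, while dollars beyond $100K destroy it, e.g. by pushing the project to scale too fast.

```python
def project_value(total_funding: float) -> float:
    """Toy value curve: funding helps up to $100K, then hurts."""
    helpful = min(total_funding, 100_000)
    harmful = max(total_funding - 100_000, 0.0)
    return 1.5 * helpful - 2.0 * harmful  # hypothetical marginal values

# OpenPhil's $100K against a $0 counterfactual: clearly positive.
print(project_value(100_000) - project_value(0))        # +150000.0

# If SFF independently adds $100K, OpenPhil's marginal contribution
# (moving total funding from $100K to $200K) flips sign.
print(project_value(200_000) - project_value(100_000))  # -200000.0
```

The exact curve doesn't matter; the point is that whether OpenPhil's grant is good depends on total funding, not just their own contribution.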
I don't know what the right social norms are here, and I can see various bad effects on the ecosystem from this behaviour in general: incentivising grantees to be dishonest about whether they have other funding, disincentivising other grantmakers from funding anything they think OpenPhil might fund, etc. I think Habryka's suggestion of funging, but not to 100%, seems reasonable and probably better to me.