I wanted to share this update from Good Ventures (Cari and Dustin’s philanthropy), which seems relevant to the EA community.
Tl;dr: “while we generally plan to continue increasing our grantmaking in our existing focus areas via our partner Open Philanthropy, we have decided to exit a handful of sub-causes (amounting to less than 5% of our annual grantmaking), and we are no longer planning to expand into new causes in the near term by default.”
A few follow-ups on this from an Open Phil perspective:
- I want to apologize to directly affected grantees (who've already been notified) for the negative surprise here, and for our part in not better anticipating it.
- While this represents a real update, we remain deeply aligned with Good Ventures (they’re expecting to continue to increase giving via OP over time), and grateful for how many of the diverse funding opportunities we’ve recommended that they’ve been willing to tackle.
- An example of a new potential focus area that OP staff had been interested in exploring that Good Ventures is not planning to fund is research on the potential moral patienthood of digital minds. If any readers are interested in funding opportunities in that space, please reach out.
- Good Ventures has told us they don’t plan to exit any overall focus areas in the near term. But this update is an important reminder that such a high degree of reliance on one funder (especially on the GCR side) represents a structural risk. I think it’s important to diversify funding in many of the fields Good Ventures currently funds, and that doing so could make the funding base more stable both directly (by diversifying funding sources) and indirectly (by lowering the time and energy costs to Good Ventures from being such a disproportionately large funder).
- Another implication of these changes is that going forward, OP will have a higher bar for recommending grants that could draw on limited Good Ventures bandwidth, and so our program staff will face more constraints in terms of what they’re able to fund. We always knew we weren’t funding every worthy thing out there, but that will be even more true going forward. Accordingly, we expect marginal opportunities for other funders to look stronger going forward.
- Historically, OP has been focused on finding enough outstanding giving opportunities to hit Good Ventures’ spending targets, with a long-term vision that once we had hit those targets, we’d expand our work to support other donors seeking to maximize their impact. We’d already gotten a lot closer to GV’s spending targets over the last couple of years, but this update has accelerated our timeline for investing more in partnerships and advising other philanthropists. If you’re interested, please consider applying or referring candidates to lead our new partnerships function. And if you happen to be a philanthropist looking for advice on how to invest >$1M/year in new cause areas, please get in touch.
Sorry, maybe I missed something, where did I imply you have a history of lying? I don't currently believe that Open Phil or you have a history of lying. I think we have disagreements on dimensions of integrity beyond that, but I think we both care deeply about not lying.
I don't really know what you mean by this. I don't want carte blanche in how I spend money. I just want to be evaluated on my impact on actual AI risk, which is a priority we both share. You don't have to approve of everything I do; indeed, I think allowing people to choose the means by which they achieve a long-term goal is one of the biggest reasons for historical EA philanthropic success (as well as for a lot of the best parts of Silicon Valley).
A complete blacklist of a whole community seems extreme, and rare, even for non-EA philanthropists. Let Open Philanthropy decide whether they think what we are doing helps with AI risk, or evaluate it yourself if you have the time. Don't blacklist work associated with a community on the basis of a disagreement about its optimal structure. You absolutely do not have to be part of a rationality community to fund it, and if you are right about its issues, that will be reflected in its lack of impact.
I don't really think this is a good characterization of the rationality community. It is true that the rationality community engages in heavy decoupling, where we don't completely dismiss people on one topic because they hold socially shunned opinions on another, but that seems importantly different from inviting everyone who fits that description "into the fold". The rationality community has a very specific epistemology and is overall, all things considered, extremely selective in who it assigns lasting respect to.
You might still object to that, but I am not really sure what you mean by "inviting into the fold" here. I am worried you have walked away with some very skewed opinions through some unfortunate tribal dynamics, though I might also be misunderstanding you.
As an example, I think OP was in a position to substantially reduce the fallout from FTX, both by a better follow-up response, and by having done more things in advance to prevent things like FTX.
And indeed as far as I can tell the people who had the biggest positive effect on the reputation of the ecosystem in the context of FTX are the ones most negatively impacted by these changes to the funding landscape.
It doesn't seem very hard to imagine different ways that OP grantmaking could have substantially changed whether FTX happened in the first place, or at least the follow-up response to it.
I feel like an underlying issue here is something like "you feel like you have to personally defend or engage with everything that OP funds".
You of course know better what costs you are incurring, but my sense is that you can just give money to things you think are good for the world, and that this will overall result in more political capital and respect than the world where you limit yourselves to only the things you can externally justify or spend other resources defending. The world can handle billionaires spending billions of dollars on yachts and luxury expenses in a way that doesn't generally affect their other resources much, which I think suggests the world can also handle billionaires not explaining or defending all of their giving decisions.
My guess is there are lots of things at play here that I don't know about or understand, and I do not want to contribute to the degree to which you feel like every philanthropic choice you make comes with social costs and reduces your non-financial capital.
I don't want to drag you into a detailed discussion, though know that I am deeply grateful for some of your past work and choices and donations, and if you did ever want to go into enough detail to make headway on these disagreements, I would be happy to do so.