
Good Ventures, the foundation that supports the Open Philanthropy Project, has made a series of grants to psychedelic research organizations:

These grants are relatively small compared to the foundation's overall grantmaking capacity, but seem to indicate that Good Ventures has a clear & consistent interest in supporting psychedelic research.

There isn't any record of these grants on the Open Phil site.

Seems like these grants could be neatly housed under Open Phil's "Scientific Research" cause area, perhaps in the "Other Scientific Research" portfolio.

I'm curious about why there's a separation between Good Ventures' psychedelic grantmaking & the grants it makes through Open Phil.

(It's possible that this is simply an oversight, though given what I know about Open Phil's processes I'm guessing it's an intentional separation.)



Answers

From Good Ventures' grantmaking approach page:

In 2018, Good Ventures funded $164 million in grants recommended by the Open Philanthropy Project, including $74 million to GiveWell’s top charities, standout charities, and incubation grants. (These grants generally appear in both the Good Ventures and Open Philanthropy Project grants databases.)
Good Ventures makes a small number of grants in additional areas of interest to the foundation. Such grants totaled around $19 million in 2018. Check out Our Portfolio and Grants Database to learn more about the grants we've made so far.
Kit

As an aside, I wouldn't say that any Good Ventures things are 'housed under Open Phil'. I'd rather say that Open Phil makes recommendations to Good Ventures. i.e. Open Phil is a partner to Good Ventures, not a subsidiary.

Technically, I've therefore answered a different question to the one you asked: I've answered the question 'why aren't these grants on the Open Phil website'.

There's an unanswered question here of why Good Ventures makes grants that OpenPhil doesn't recommend, given that GV believes in the OpenPhil approach broadly. But I guess I don't find it that surprising that they do so. People like to do more than one thing?

Milan Griffes
Makes sense. I'm particularly curious about the psychedelic research grants, because it seems like those could be neatly housed under Open Phil's "Other Scientific Research" portfolio.

Thanks!

I just flipped through the Good Ventures grants database & spot-checked ~30 of their 2018 grants.

Every grant I checked was made under the aegis of Open Phil, except for the aforementioned psychedelic grants & these grants to Alzheimer's research: 1, 2, 3, 4

The same question comes up for the Alzheimer's grants – seems like they could be neatly placed in Open Phil's other scientific research portfolio, but weren't.

I asked about this on the most recent Open Phil open thread. Michael Levine replied:

Hi Milan – thanks for the question. You’re right that this was an intentional separation. While the vast majority of Good Ventures grants are also Open Phil grants and appear in both databases, there are a couple of causes – these grants are one, and Alzheimer’s research is another – where Good Ventures has made grants that aren’t in Open Phil focus areas. These grants didn’t go through the cause selection process that we think of as the special sauce that makes something an Open Phil grant.
Hope this is clarifying.

I followed up with:

Thanks for the speedy reply!

Could you say a little more about the conditions under which Good Ventures decides to make grants outside of the Open Phil branding?

I'm particularly curious about the psychedelic & Alzheimer's research grants, because it seems like those both could be neatly housed under Open Phil's "Other Scientific Research" portfolio.

Michael Levine replied:

Hi Milan – there’s not much more to say here. The grants in question aren’t housed under our Other Scientific Research portfolio because we didn’t recommend them, because they didn’t go through our standard prioritization and investigation process. Most of Good Ventures’ giving is based on recommendations from Open Phil and GiveWell, but Good Ventures has made and will continue to make occasional other grants as they see fit. We think that’s perfectly normal and expect that the same thing would occur if and when we partner closely
…
Aaron Gertler 🔸
I can imagine a couple of scenarios:

a) GV asked Open Phil if they had the capacity to look into psychedelics/Alzheimer's, and Open Phil said "no"

b) GV asked Open Phil for shallow investigations of those areas, and the results weren't promising enough for Open Phil to want to continue, but weren't so un-promising that GV gave up

c) GV has some research capacity independent of Open Phil, and decided to use it on these causes (maybe because Dustin/Cari see them as personally motivating/"warm fuzzies", even if they are potentially high-impact)

...there are plenty of other possibilities I haven't had time to think of, but some combination of (a) and (c) feels pretty likely to me. (This is entirely speculative; I have no special insight into the relationship between GV and Open Phil.)
Milan Griffes
And Michael replied:

It's Dustin and Cari's money, so it's their decision what to do with it.

Comments

Have you attempted to contact GV or OpenPhil directly about this?

Any forum post absorbs hours of time and attention from the community, so I support there being a norm of getting questions answered by emailing the group that probably knows the answer, where doing so is possible.

My current model is that formal EA orgs are deluged with incoming email, which makes email a pretty noisy channel.

I would reply to an email asking something like this about 75% of the time within 1-2 weeks, and suspect the same is true of most other orgs.

Admittedly the answer might be only a few sentences, and might be 'sorry I don't know try asking X.'

But it seems worth trying in the first instance. :)

But asking privately only gives one person the answer, instead of many. I'm a bit surprised by your response - I had expected that the group who knows the answer usually has better things to do than answer random emails, while there are a lot of individuals who probably have knowledge like this whose time isn't as valuable.

In my experience, formal EA orgs tend to respond to questions of this kind reasonably quickly (I'm deliberately only thinking of cases from before I actually worked for CEA). GiveWell and Open Phil in particular usually respond to comments on their blog posts within days.

I asked about it on Open Phil's most recent open thread.
