Like many organizations, Open Philanthropy has had multiple founding moments. Depending on how you count, we will be either seven, ten, or thirteen years old this year. Regardless of when you start the clock, it’s possible that we’ve changed more in the last two years than over our full prior history. We’ve more than doubled the size of our team (to ~110), nearly doubled our annual giving (to >$750M), and added five new program areas.

As our track record and volume of giving have grown, we are seeing more of our impact in the world. Across our focus areas, our funding played a (sometimes modest) role in some of 2023’s most important developments:

  • We were among the supporters of the clinical trials that led to the World Health Organization (WHO) officially recommending the R21 malaria vaccine. This is the second malaria vaccine recommended by WHO, which expects it to enable “sufficient vaccine supply to benefit all children living in areas where malaria is a public health risk.” Although the late-stage clinical trial funding was Open Philanthropy’s first involvement with R21 research, that isn’t the case for our new global health R&D program officer, Katharine Collins, who invented R21 as a grad student.
  • Our early commitment to AI safety has contributed to increased awareness of the associated risks and to early steps to reduce them. The Center for AI Safety, one of our AI grantees, made headlines across the globe with its statement calling for AI extinction risk to be a “global priority alongside other societal-scale risks,” signed by many of the world’s leading AI researchers and experts. Other grantees contributed to many of the year’s other big AI policy events, including the UK’s AI Safety Summit, the US executive order on AI, and the first International Dialogue on AI Safety, which brought together scientists from the US and China to lay the foundations for future cooperation on AI risk (à la the Pugwash Conferences in support of nuclear disarmament).
  • The US Supreme Court upheld California’s Proposition 12, the nation’s strongest farm animal welfare law. We were major supporters of the original initiative and helped fund its successful legal defense.
  • Our grantees in the YIMBY (“yes in my backyard”) movement — which works to increase the supply of housing in order to lower prices and rents — helped drive major middle housing reforms in Washington state and California’s legislation streamlining the production of affordable and mixed-income housing. We’ve been the largest national funder of the YIMBY movement since 2015.

We’ve also encountered some notable challenges over the last couple of years. Our available assets fell by half and then recovered half their losses. The FTX Future Fund, a large funder in several of our focus areas, including pandemic prevention and AI risks, collapsed suddenly and left a sizable funding gap in those areas. And Holden Karnofsky — my friend, co-founder, and our former CEO — stepped down to work full-time on AI safety.

Throughout these changes, we’ve remained devoted to our mission of helping others as much as we can with the resources available to us. But it’s a good time to step back and reflect.

The rest of this post covers:

  • Brief updates on grantmaking from each of our 12 programs.
  • Our leadership changes over the past year.
  • Our chaotic macro environment over the last couple of years.
  • How that led us to revise our priorities, and specifically to expand our work to reduce global catastrophic risks.
  • Other lessons we learned over the past year.
  • Our plans for the rest of 2024.

Because it feels like we have more to share this year, this post is longer and aims to share more than I have in the last two years. I’m curious to hear what you think of it — if you have feedback, you can find me on Twitter/X at @albrgr or email us at info@openphilanthropy.org.

 

You can read the rest of this post at Open Philanthropy's website.

Comments

Hello again Alex,

You discuss the allocation of funds across your 2 main areas, global health and wellbeing (GHW) and global catastrophic risks (GCR), but (as before) you do not say anything about the allocation across animal and human interventions in the GHW portfolio. I assume you do not think the funding going towards animal welfare interventions should be greatly increased, but I would say you should at least be transparent about your views.

For reference, I estimate the cost-effectiveness of corporate campaigns for chicken welfare is 13.6 DALY/$ (= 0.01*1.37*10^3), i.e. 680 (= 13.6/0.02) times Open Philanthropy's bar. I got that multiplying the factors below (a rough arithmetic sketch follows the list):

  • The cost-effectiveness of GiveWell's top charities of 0.01 DALY/$ (50 DALY per 5 k$), which is half of Open Philanthropy's bar of 0.02 DALY/$.
  • My estimate for the ratio between the cost-effectiveness of corporate campaigns for chicken welfare and GiveWell's top charities of 1.37 k (= 1.71*10^3/0.682*2.73/5):
    • I calculated that corporate campaigns for broiler welfare increase nearterm welfare 1.71 k times as cost-effectively as the lowest cost to save a life among GiveWell’s top charities at the time, 3.5 k$, corresponding to a cost-effectiveness of 0.286 life/k$ (= 1/3.5).
    • The current mean reciprocal of the cost to save a life of GiveWell’s 4 top charities is 0.195 life/k$ (= (3*1/5 + 1/5.5)/4), i.e. 68.2 % (= 0.195/0.286) as high as the cost-effectiveness I just mentioned.
    • The ratio of 1.71 k in the 1st bullet refers to campaigns for broiler welfare, but Saulius estimated ones for chicken welfare (broilers or hens) affect 2.73 (= 41/15) times as many chicken-years.
    • OP thinks “the marginal FAW [farmed animal welfare] funding opportunity is ~1/5th as cost-effective as the average from Saulius’ analysis”.
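
As a sanity check, here is a minimal sketch of the arithmetic above (Python; the inputs are just the figures quoted in the bullets, and the variable names are mine):

```python
# Reproduces the multiplication described above; all inputs are the figures
# quoted in the bullets, not independently verified.
givewell_dalys_per_usd = 0.01        # 50 DALY per 5 k$
open_phil_bar = 0.02                 # DALY/$
broiler_ratio = 1.71e3               # broiler campaigns vs. cheapest GiveWell life saved (3.5 k$)
givewell_adjustment = 0.195 / 0.286  # current mean GiveWell cost-effectiveness vs. the 3.5 k$/life figure
chicken_years_ratio = 41 / 15        # chicken-years (broilers or hens) vs. broilers only
marginal_discount = 1 / 5            # OP: marginal FAW opportunity ~1/5 as cost-effective as the average

ratio_vs_givewell = broiler_ratio / givewell_adjustment * chicken_years_ratio * marginal_discount
campaign_dalys_per_usd = givewell_dalys_per_usd * ratio_vs_givewell

print(round(ratio_vs_givewell))                       # 1371 (~1.37 k)
print(round(campaign_dalys_per_usd, 1))               # 13.7 DALY/$
print(round(campaign_dalys_per_usd / open_phil_bar))  # 686 (close to the ~680 quoted above)
```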

Hi Vasco,

Thanks for asking these questions.

I work on Open Phil's communications team. Regarding how Open Phil thinks about allocating between human and animal interventions, this comment from Emily (the one you linked in your own comment) is the best summary of our current thinking.

Thanks for the update, Alex!

Could you elaborate on the influence of Cari and Dustin on your grantmaking (see what I have highlighted below), ideally by giving concrete examples?

As CEO, I [Alex] work more closely with Cari Tuna and Dustin Moskovitz, our primary funders and founding board members, than I had in the past. Dustin and especially Cari were very involved at the founding of Open Philanthropy — our grant approval process in the very early days was an email to Cari. But their level of day-to-day involvement has ebbed and flowed over time. Cari, in particular, has recently had more appetite to engage, which I’m excited about because I find her to be a wise and thoughtful board president and compelling champion for Open Philanthropy and our work. Dustin has also been thinking more about philanthropy and moral uncertainty recently, as reflected in this essay he posted last month.

It’s worth noting that their higher level of engagement means that some decisions that would have been made autonomously by our staff in the recent past (but not in the early days of the organization) will now reflect input from Cari and Dustin. Fundamentally, it has always been the case that Open Philanthropy recommends grants; we’re not a foundation and do not ultimately control the distribution of Cari and Dustin’s personal resources, though of course they are deeply attentive to our advice and we all expect that to continue to be the case. All things considered, I think Cari and Dustin have both managed to be involved while also offering an appropriate — and very welcome — level of deference to staff, and I expect that to continue.

I really appreciated this report; it seemed like one of the most honest and open communications to come out of Open Philanthropy, and it helped me connect with your priorities and vision. A couple of specific things I liked:

I appreciated the comment about the Wytham Abbey purchase, recognising the flow-on effects Open Phil decisions can have on the wider community, and even just acknowledging a mistake - something which is both difficult and uncommon in leadership.

"But I still think I personally made a mistake in not objecting to this grant back when the initial decision was made and I was co-CEO. My assessment then was that this wasn’t a major risk to Open Philanthropy institutionally, so it wasn’t my place to try to stop it. I missed how something that could be parodied as an “effective altruist castle” would become a symbol of EA hypocrisy and self-servingness, causing reputational harm to many people and organizations who had nothing to do with the decision or the building."

I also liked the admission on slow movement on lead exposure. I had wondered why I hadn't been hearing more on that front given the huge opportunities there and the potential for something like the equivalent of a disease "elimination" with a huge effect on future generations. From what I've seen, my instinct is that it had potential to perhaps be a more clear/urgent/cost-effective focus than other Open Phil areas like air quality.

All the best for this year!

From the linked report:

We think it’s good that people are asking hard questions about the AI landscape and the incentives faced by different participants in the policy discussion, including us. We’d also like to see a broader range of organizations and funders getting involved in this area, and we are actively working to help more funders engage. 

Here's a story I recently heard from someone I trust:

An AI Safety project got their grant application approved by OpenPhil, but still had more room for funding. After OpenPhil promised them a grant but before it was paid out, this same project also got a promise of funding from Survival and Flourishing Fund (SFF). When OpenPhil found out about this, they rolled back the amount of money they would pay to this project, by the exact amount that this project was promised by SFF, rendering the SFF grant meaningless.

I don't think this is ok behaviour, and definitely not what you do to get more funders involved. 

 

Is there some context I'm missing here? Or has there been some misunderstanding? Or is this as bad as it looks?

 

I'm not going to name either the source or the project publicly (they can name themselves if they want to), since I don't want to get anyone else into trouble, or risk their chances of getting OpenPhil funding. I also want to make clear that I'm writing this on my own initiative.

There is probably some more delicate way I could have handled this, but anything more complicated than writing this comment would probably have ended up with me not taking action at all, and I think this sort of thing is worth calling out.

 

Edit: I've partly misunderstood what happened. See comment below for clarification. My apologies. 

I misunderstood the order of events, which does change the story in important ways. The way OpenPhil handled this is not ideal for encouraging other funders, but there were no broken promises. 

I apologise and I will try to be more careful in the future. 

One reason I was too quick on this is that I am concerned about the dynamics that come with having a single overwhelmingly dominant donor in AI Safety (and other EA cause areas), which I don't think is healthy for the field. But this situation is not OpenPhil's fault.

Below is the story from someone who was involved. They have asked to stay anonymous; please respect this.

The short version of the story is: (1) we applied to OP for funding, (2) late 2022/early 2023 we were in active discussions with them, (3) at some point, we received 200k USD via the SFF speculator grants, (4) then OP got back confirming that they would fund us with the amount for the "lower end" budget scenario minus those 200k.

My rough sense is similar to what e.g. Oli describes in the comments. It's roughly understandable to me that they didn't want to give the full amount they would have been willing to fund without other funding coming in. At the same time, it continues to feel pretty off to me that they let the SFF speculator grant 1:1 replace their funding, without even talking to SFF at all -- since this means that OP got to spend a counterfactual 200k on other things they liked, but SFF did not get to spend additional funding on things they consider high priority.

One thing I regret on my end, in retrospect, is not pushing harder on this, including clarifying to OP that the SFF funding we received was partially unearmarked, i.e. it wasn't restricted to funding only the specific program that OP gave us (earmarked) funding for. But, importantly, I don't think I made that sufficiently clear to OP and I can't claim to know what they would have done if I had pushed for that more confidently.

[I work at Open Philanthropy] Hi Linda – thanks for flagging this. After checking internally, I’m not sure what project you’re referring to here; generally speaking, I agree with you/others in this thread that it's not good to fully funge against incoming funds from other grantmakers in the space after agreeing to fund something, but I'd want to have more context on the specifics of the situation.

It totally makes sense that you don’t want to name the source or project, but if you or your source would feel comfortable sharing more information, feel free to DM me or ask your source to DM me (or use Open Phil’s anonymous feedback form). (And just to flag explicitly, we would/do really appreciate this kind of feedback.)

I've asked for more information and will share what I find, as long as I have permission to do so.

Flagging that I have also heard about this case.

Without any context on this situation, I can totally imagine worlds where this is reasonable behaviour, though perhaps poorly communicated, especially if SFF didn't know they had OpenPhil funding. I personally had a grant from OpenPhil approved for X, but in the meantime had another grantmaker give me a smaller grant for y < X, and OpenPhil agreed to instead fund me for X - y, which I thought was extremely reasonable.

In theory, you can imagine OpenPhil wanting to fund their "fair share" of a project, evenly split across all other interested grantmakers. But it seems harmful and inefficient to wait for other grantmakers to confirm or deny, so "I'll give you 100%, but lower that to 50% if another grantmaker is later willing to go in as well" seems a more efficient version.

I can also imagine that they eg think a project is good if funded up to $100K, but worse if funded up to $200K (eg that they'd try to scale too fast, as has happened with multiple AI Safety projects that I know of!). If OpenPhil funds $100K, and the counterfactual is $0, that's a good grant. But if SFF also provides $100K, that totally changes the terms, and now OpenPhil's grant is actively negative (from their perspective).

I don't know what the right social norms here are, and I can see various bad effects on the ecosystem from this behaviour in general - incentivising grantees to be dishonest about whether they have other funding, disincentivising other grantmakers from funding anything they think OpenPhil might fund, etc. I think Habryka's suggestion of funging, but not to 100%, seems reasonable and probably better to me.

Without any context on this situation, I can totally imagine worlds where this is reasonable behaviour, though perhaps poorly communicated, especially if SFF didn't know they had OpenPhil funding. I personally had a grant from OpenPhil approved for X, but in the meantime had another grantmaker give me a smaller grant for y < X, and OpenPhil agreed to instead fund me for X - y, which I thought was extremely reasonable.


Thanks for sharing. 
 

What did the other grantmaker (the one who gave you y) think of this?

Were they aware of your OpenPhil grant when they offered you funding?

Did OpenPhil roll back your grant because you did not have use for more than X, or for some other reason?

I got the OpenPhil grant only after the other grant went through (and wasn't thinking much about OpenPhil when I applied for the other grant). I never thought to inform the other grantmaker after I got the OpenPhil grant, which maybe I should have done in hindsight, out of courtesy?

This was covering some salary for a fixed period of research, partially retroactive, after an FTX grant fell through. So I guess I didn't have use for more than X, in some sense (I'm always happy to be paid a higher salary! But I wouldn't have worked for a longer period of time, so I would have felt a bit weird about the situation)

Given the order of things, and the fact that you did not have use for more money, this does indeed seem reasonable. Thanks for the clarification.

I understand posting this here, but for following up specific cases like this, especially second hand, I think it's better to first contact OpenPhil before airing it publicly. Like you mentioned, there is likely to be much context here we don't have, and it's hard to have a public discussion without most of the context.

"There is probably some more delicate way I could have handled this, but anything more complicated than writing this comment, would probably have ended up with me not taking action at all"

That's a fair comment. I understand the importance of overcoming the bent toward inaction, but I feel like even sending this exact message you posted here to OpenPhil first might have been a better start to the conversation.

And even if it was to be posted, I think it may be better for it to come from the people directly involved, even if pseudonymously (Open Phil would probably know who it was), rather than from a third party.

I say this with fairly low confidence. I appreciate the benefits of transparency as well and I appreciate overcoming the inertia of doing nothing as well, which I agree is probably worse.

Thanks for the comment, Nick!

I think it's better to first contact OpenPhil [OP] before airing it publicly

I tend to agree. At least based on my experience, people at OP are reasonably responsive. Here are my success rates privately contacting people at OP[1] ("successful attempts[2]"/"attempts[3]"):

  • All: 52.4 % (22/42).
  • Aaron Gertler: 100 % (1/1).
  • Ajeya Cotra: 0 (0/1).
  • Andrew Snyder-Beattie: 100 % (1/1).
  • Alexander Berger: 20 % (1/5).
  • Ben Stewart: 100 % (1/1).
  • Cash Callaghan: 0 (0/1).
  • Claire Zabel: 0 (0/3).
  • Damon Binder: 0 (0/3).
  • Derek Hopf: 100 % (2/2).
  • Harshdeep Singh: 0 (0/1).
  • Heather Youngs: 0 (0/2).
  • Holden Karnofsky: 100 % (3/3).
  • Howie Lempel: 100 % (1/1).
  • Jacob Trefethen: 0 (0/1).
  • James Snowden: 0 (0/1).
  • Jason Schukraft: 100 % (2/2).
  • Lewis Bollard: 80 % (4/5).
  • Luca Righetti: 100 % (2/2).
  • Luke Muehlhauser: 0 (0/1).
  • Matt Clancy: 0 (0/1).
  • Philip Zealley: 100 % (1/1).
  • Rossa O’Keeffe-O’Donovan: 100 % (1/1).
  • Will Sorflaten: 100 % (2/2).
  1. Last updated on 22 April 2024.
  2. At least 1 reply.
  3. Multiple attempts on the same topic are counted as a single attempt.

There are benefits to having this discussion in public, regardless of how responsive OpenPhil staff are.

By posting this publicly I already found out that they did the same to Neel Nanda. Neel thought that in his case this was "extremely reasonable". I'm not sure why, and I've just asked some follow-up questions.

I get from your response that you think 45% is a good response record, but that depends on how you look at it. In the reference class of major grantmakers it's not bad, and I don't think OpenPhil is doing something wrong by not responding to more emails. They have other important work to do. But I also have other important work to do. I'm also not doing anything wrong by not spending extra time figuring out who on their staff to contact and sending a private email which, according to your data, has a 55% chance of ending up ignored.

There are benefits to having this discussion in public, regardless of how responsive OpenPhil staff are.

I agree. I was not clear. I meant that, for this case, I think "public criticism after private criticism" > "public criticism before private criticism" > "public criticism without private criticism" > "private criticism without public criticism". So I am glad you commented if the alternative was no comment at all.

I get from your response that you think 45% is a good response record, but that depends on how you look at it. In the reference class of major grantmakers it's not bad, and I don't think OpenPhil is doing something wrong by not responding to more emails.

Yes, I would say the response rate is good enough to justify getting in touch (unless we are talking about people who consistently did not reply to past emails). At the same time, I actually think people at Open Phil might be doing something wrong by not replying to some of my emails, assuming they read them, because it is possible to reply to an email in 10 s, for example by saying something like "Thanks. Sorry, but I do not plan to look into this.". I guess people assume this is as bad as or worse than no reply, but I would rather have a short reply, so I suppose I should clarify this in future emails.

If this was for any substantial amount of money I think it would be pretty bad, though it depends on the relative size of the OP grants and SFF grants. 

I think most of the time you should just let promised funding be promised funding, but there is a real and difficult coordination problem here. The general rule I follow when I have been a recommender on the SFF or Lightspeed Grants has been that when I am coordinating with another funder, and we both give X dollars a year but want to fund the organization to different levels (let's call them level A for me and level B for them), then I will fund the organization for A/2 and they will fund the organization for B/2, for a total funding of halfway between A and B.

So in such a situation, if I heard that another funder had taken an organization I had already funded for the full amount of A up to the full level B, then I think it's not unreasonable for me to reduce my excess funding by half and make sure the organization doesn't have more than (A/2 + B/2) funding.
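
For concreteness, here is a toy sketch of that halfway rule (illustrative only; the dollar figures are made up, and this is my reading of the rule as described above, not SFF or Lightspeed policy):

```python
# Each of two coordinating funders contributes half of the level it would fund
# the organization to on its own, so total funding lands halfway between the two levels.
def halfway_split(level_a: float, level_b: float) -> tuple[float, float]:
    """Return (funder A's contribution, funder B's contribution)."""
    return level_a / 2, level_b / 2

# Hypothetical example: funder A would fund the org to $100k on its own, funder B to $200k.
share_a, share_b = halfway_split(100_000, 200_000)
print(share_a, share_b, share_a + share_b)  # 50000.0 100000.0 150000.0 (halfway between 100k and 200k)
```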

However, fully funging against incoming funds seems quite bad and creates really annoying fundraising dynamics. 

Thanks for sharing, Linda!

After OpenPhil promised them a grant but before it was paid out, this same project also got a promise of funding from Survival and Flourishing Fund (SFF).

I very much agree Open Phil breaking a promise to provide funding would be bad. However, I assume Open Phil asked about alternative sources of funding in the application, and I wonder whether the promise to provide funding was conditional on the other sources not being successful.

Thanks for writing and sharing this Alexander – I thought it was an unusually helpful and transparent post.

The decline of our available assets should disproportionately affect funding for GHW relative to GCR because we think that opportunities in our GHW portfolio vary less in terms of expected cost-effectiveness. That is, we think GHW opportunities are more closely clustered around the “bar” we use to define which grants meet our standards for cost-effectiveness.

I wonder whether the 2nd sentence above means you have cost-effectiveness estimates of your GHW grants. If so, I think it would be good if you shared them for transparency. I appreciate that justifying your estimates well would take time, but I assume sharing some estimates with little context would be better than sharing no estimates. I also believe you have great researchers who could quickly provide adequate context for the estimates.

From the linked post: 

As a result of our internal process, we decided to keep that new higher bar, while also aiming to roughly double our GCR spending over the next few years — if we can find sufficiently cost-effective opportunities. 

At first glance, this seems potentially 'wildly conservative' to me, if I think of what this implies for the AI risk mitigation portion of the funding and how this intersects with (shortening) timeline estimates.

My impression from looking briefly at recent grants is that probably <= 150M$ was spent by Open Philanthropy on AI risk mitigation during the past year. A doubling of AI risk spending would imply <= 300M$ / year.

AFAICT (including based on non-public conversations / information), at this point, median forecasts for something like TAI / AGI are very often < 10 years, especially from people who have thought the most about this question. And a very respectable share of those people seem to have < 5 year medians.

Given e.g. https://www.bloomberg.com/billionaires/profiles/dustin-a-moskovitz/, I assume, in principle, Open Philanthropy could spend > 20B$ in total. So 150M$ [/ year] is less than 1% of the total portfolio and even 300M$ [/ year] would be < 2%.

X-risk estimates from powerful AI vs. from other sources often have AI account for more than half of the total x-risk (e.g. the estimates in 'The Precipice' have AI contribute ~10 percentage points of the ~17% total x-risk over the next ~100 years).

Considering all the above, the current AI risk mitigation spending plans seem to me far too conservative.

I also personally find it pretty unlikely that there aren't decent opportunities to spend > 300M$ / year (and especially > 150M$ / year), given e.g. the growth in the public discourse about AI risks; and that some plans could potentially be [very] scalable in how much funding they could take in, e.g. field-building, non-mentored independent research, or automated AI safety R&D.  

Am I missing something (obvious) here?

(P.S.: my perspective might be influenced / biased in a few ways here, given my AI risk mitigation focus, and how that intersects / has intersected with Open Philanthropy funding and career prospects.)

Re: why our current rate of spending on AI safety is "low." At least for now, the main reason is lack of staff capacity! We're putting a ton of effort into hiring (see here) but are still not finding as many qualified candidates for our AI roles as we'd like. If you want our AI safety spending to grow faster, please encourage people to apply!

There is also the theoretical possibility of disbursing a larger number of $ per hour of staff capacity.
