I work on Open Philanthropy’s AI Governance and Policy team, but I’m writing this in my personal capacity – several senior employees at Open Phil have argued with me about this!

This is a brief-ish post addressed to people who are interested in making high-impact donations and are already concerned about potential risks from advanced AI. Ideally such a post would include a case that reducing those risks is an especially important (and sufficiently tractable and neglected) cause area, but I’m skipping that part for time and will just point you to this 80,000 Hours problem profile for now.

  • Contrary to a semi-popular belief that donations to global catastrophic risk causes merely “funge” with major donors (i.e., just displace money those donors would have given anyway), there are several ways for individual donors, including those giving small amounts, to reduce global catastrophic risks from AI. These include donating to:
    • Work that would be less impactful if it were funded, or majority-funded, by the major funders, or that would generally benefit from greater funding diversity for reasons of organizational health and independence.
    • Work that major funders won’t be able to discover, evaluate, and/or fund quickly enough, e.g. time-sensitive events, individual projects, or career transitions.
    • Work subject to legal limits on donation size, like political campaigns and political action committees/donor networks.
    • Work in sub-areas that major funders have decided not to fund.
  • You can donate to that kind of work either directly (by giving to the organizations or individuals) or indirectly (by giving through funds like the AI Risk Mitigation Fund, the Long-Term Future Fund (LTFF), Longview’s Emerging Challenges Fund, or JueYan Zhang’s AI Safety Tactical Opportunities Fund).
    • Advantages to giving directly:
      • You can give to political campaigns/PACs/donor networks as well as 501(c)(4) lobbying/advocacy organizations, which the funds might not be able to do (though I’m not sure about all of them). (For political candidates, this probably means not giving in December 2024 and instead saving for future opportunities.)
      • Some funds might pose reputational issues for some especially reputation-sensitive recipients.
      • You can move especially quickly for things in the “time-sensitive event/project/transition” category.
      • You don’t have to defer to someone else’s judgment (and can help ease the grant evaluation capacity bottleneck!).
    • Advantages to giving indirectly:
      • Giving to the funds, assuming they have 501(c)(3) status or the non-US equivalent, might have more favorable tax implications than giving to individuals or lobbying/advocacy orgs (though I am not a lawyer or accountant, and this is not legal/financial advice!).
      • It’s very quick, and you can defer to a professional grantmaker’s judgment rather than spending time/bandwidth on evaluating opportunities yourself.
      • You can give on a more predictable schedule (rather than e.g. saving up for especially good opportunities).
    • (I’ll take this opportunity to flag that the team I work on at Open Philanthropy is eager to work with more external philanthropists to find opportunities that align with their giving preferences, especially if you’re looking to give away $500k/yr or more.)
  • There are some reasons to think that people who work in AI risk reduction, in particular, should make some or most of their donations within their field.
    • Because of their professional networks, they are more likely to encounter giving opportunities that funders may not hear about, or hear about in time, or have the capacity to investigate.
    • Because of their expertise, they are better able than most individual donors to evaluate and compare both direct opportunities and the funds.
    • However, people who work in that field may be less inclined to donate within AI risk reduction, perhaps because they want to “hedge” due to moral uncertainty/worldview diversification, to signal their good-faith altruism to others (and/or themselves) by donating to more “classic” cause areas like global health or animal welfare, or to maintain their own morale. I won’t be able to do justice to the rich literature on these points here (and admit to not having really done my homework on it). Instead, I’ll just:
      • Point out that, depending on their budget, donors might be able to do that hedging/signaling with some but not all of their donations. This is basically a call for “goal factoring”: you could ask how big a donation it would take to satisfice those goals, then donate the rest to AI risk interventions (e.g., if $2k of a $10k annual budget is enough to hedge and signal, the remaining $8k can go to AI risk).
      • Throw into a footnote a couple of other points that I haven’t seen discussed in my limited reading of the literature.[1]

Edited to add a couple more concrete ideas for where to donate:

  • For donors looking to make a fast, relatively robust, and tax-deductible donation, Epoch is a great option.
    • I think their research has significantly improved the evidence base and discourse around the trajectory of AI, which seem like really important inputs to how society handles the attendant risks.
    • According to a conveniently timed thread from Epoch's founder Jaime Sevilla today, marginal small-dollar funding would go towards additional data insights and short reports, which sounds good to me.
    • Jaime adds that they are "starved for feedback" and that a short email about why you're supporting them would be especially useful (though I think "Trevor's forum post said so" would be less helpful than what he has in mind, which bolsters my claim that AI professionals are comparatively advantaged as donors!).
  • I also have some personal-capacity opinions about policy advocacy and political campaigns, and I'd be happy to chat about these privately if you reach out to my Forum account. I won't have time to chat with everyone, though, so please only do so if you're planning to give away ~$25k or more in the next couple of years.
  [1]

    First, a meta point: I think people sometimes accept the above considerations “on vibes.” But for people who agree that reducing AI risk is the most pressing cause (as in, the most important, neglected, and tractable one), and who accept my earlier argument that there are good giving opportunities in AI risk reduction at current margins (especially for people who work in that field), those views imply that where they donate is a decision with nontrivial stakes. They might actually be giving up a lot of prima facie impact in exchange for more worldview diversification, signaling, and morale. I know this does not address the above considerations, and it could still be a good trade; I’m basically just saying that those considerations have to turn out to be valid and pretty significant in order to outweigh the consequentialist advantages of AI risk donations.

    Second, I think it’s coherent for individual people to be uncertain that AI risk is the best thing to focus on (on both empirical and normative levels) while still thinking it’s better to specialize, including in one’s donations. That’s because worldview diversification seems to me like it makes more sense at larger scales, like the EA movement or Open Philanthropy’s budget, and less so at the scale of individuals and small donors. Consider the limits in either direction: it seems unlikely that individuals should work multiple part-time jobs in different cause areas instead of picking one in which to develop expertise and networks, and it seems like a terrible idea for all of society to dedicate its resources to a single problem. There’s some point in between where the costs of scaling an effort, and the diminishing returns of more resources thrown at the problem, start to outweigh the benefits of specialization. I think individuals are probably on the “focus on one thing” side of that point.

Comments

I think this post makes some great points, thanks for sharing! :) And I think it's especially helpful to hear from your perspective as someone who does grantmaking at OP.

I really appreciate the addition of concrete examples. In fact, I would love to hear more examples if you have time — since you do this kind of research as your job I'm sure you have valuable insights to share, and I expect that you can shift the donations of readers. I'd also be curious to hear where you personally donate, but no pressure, I totally understand if you'd prefer to keep that private.

"Work in sub-areas that major funders have decided not to fund"

I feel like this is an important point. Do you have any specific AI risk reduction sub-areas in mind?

Thanks, glad to hear it's helpful!

  • Re: more examples, I co-sign all of my teammates' AI examples here; they're basically what I would've said. I'd probably add Tarbell as well.
  • Re: my personal donations, I'm saving for a bigger donation later; I encounter enough examples of very good stuff that Open Phil and other funders can't fund, or can't fund quickly enough, that I think there are good odds that I'll be able to make a really impactful five-figure donation over the next few years. If I were giving this year, I probably would've gone the route of political campaigns/PACs.
  • Re: sub-areas, there are some forms of policy advocacy and moral patienthood research for which small-to-medium-size donors could be very helpful. I don't have specific opportunities in mind that I feel like I can make a convincing public pitch for, but people can reach out if they're interested.

Adding to the list of funds: Effektiv-spenden.org recently launched their AI safety fund.

Individual donors can make a big difference to PauseAI US as well, if you are so inclined (more here: https://forum.effectivealtruism.org/posts/YWyntpDpZx6HoaXGT/please-vote-for-pauseai-us-in-the-donation-election).

We're the highest-voted AI risk contender in the donation election, so vote for us while there's still time!

Seems like a good place to remind people of the Nonlinear Network, where donors can see a ton of AI safety projects with room for funding, see what experts think of different applications, sort by votes and intervention, etc. 
