Here’s the funding gap that gets me the most emotionally worked up:

In 2020, the largest philanthropic funder of nuclear security, the MacArthur Foundation, withdrew from the field, reducing total annual funding from $50m to $30m.

That means people who’ve spent decades building experience in the field will no longer be able to find jobs.

And $30m a year of philanthropic funding for nuclear security is tiny by conventional standards. (For comparison, the budget of Oppenheimer was $100m, so a single movie cost more than three times the annual funding for non-profit policy efforts to reduce the risk of nuclear war.)

And even other neglected EA causes, such as factory farming, catastrophic biorisks, and AI safety, these days receive hundreds of millions of dollars of philanthropic funding, so on this dimension at least, nuclear security is even more neglected.

I agree that a full accounting of neglectedness should consider all resources going towards the cause (not just philanthropic ones), and that 'preventing nuclear war' more broadly receives significant attention from defence departments. However, even considering those resources, it still seems about as neglected as biorisk.

And the amount of philanthropic funding still matters because certain important types of work in the space can only be funded by philanthropists (e.g. lobbying or other policy efforts you don't want to originate within a certain national government).

There's also almost no funding taking an approach more inspired by EA, which suggests there could be interesting gaps for someone with that mindset.

All this is happening exactly as nuclear risk seems to be increasing. There are credible reports that Russia considered the use of nuclear weapons against Ukraine in autumn 2022. China is on track to triple its arsenal. North Korea has at least 30 nuclear weapons. 

More broadly, we appear to be entering an era of more great power conflict and potentially rapid destabilising technological change, including through advanced AI and biotechnology.

The Future Fund was going to fill this gap with ~$10m per year. Longview Philanthropy hired an experienced grantmaker in the field, Carl Robichaud, as well as Matthew Gentzel. The team was all ready to get started.

But the collapse of FTX meant that didn’t materialise.

Moreover, Open Philanthropy decided to raise its funding bar and focus on AI safety and biosecurity, so it hasn't stepped in to fill the gap either.

Longview’s program was left with only around $500k to allocate to Nuclear Weapons Policy in 2023, and has under $1m on hand now.

Giving Carl and Matthew more like $3 million (or more) a year seems like an interesting niche that a group of smaller donors could specialise in.

This would allow them to pick the low-hanging fruit among opportunities abandoned by MacArthur – as well as look for new opportunities, including those that might have been neglected by the field to date.

I agree it’s unclear how tractable policy efforts are here, and I haven’t looked into specific grants, but it still seems better to me to have a flourishing field of nuclear policy than not. I’d suggest talking to Carl about the specific grants they see at the margin (carl@longview.org).

I’m also not sure, given my worldview, that this is even more effective than funding AI safety or biosecurity, so I don’t think Open Philanthropy is obviously making a mistake by not funding it. But I do hope someone in the world can fill this gap.

I’d expect it to be most attractive to someone who’s more sceptical about AI safety, but agrees the world underrates catastrophic risks (or who simply wants to reduce the chance of all major cities being destroyed, for common-sense reasons). It could also be interesting as a cause that's getting less philanthropic attention than AI safety, and as one a smaller donor could specialise in and play an important role in. If that might be you, it seems well worth looking into.

If you’re interested, you can donate to Carl and Matthew’s fund here:

If you have questions or are considering a larger grant, reach out to: carl@longview.org

To learn more, you might also enjoy 80,000 Hours’ recent podcast with Christian Ruhl.

This was adapted from a post on benjamintodd.substack.com. Subscribe there to get all my posts.

Comments

In 2020, we at SoGive were excited about funding nuclear work for similar reasons. We thought that the departure of the MacArthur Foundation might have destructive effects which could potentially be countered with an injection of fresh philanthropy.

We spoke to several relevant experts. Several of these were (unsurprisingly) with philanthropically funded organisations tackling the risks of nuclear weapons. Also unsurprisingly, they tended to agree that donors could have a great opportunity to do good by stepping in to fill gaps left by MacArthur.

There was a minority view that this was not as good an idea as it seemed. This counterargument was that MacArthur had left for (arguably) good reasons: namely, that after throwing a lot of good money after bad, they had not seen strong enough impact for the money invested. I understood these comments to be the perspectives of commentators external to MacArthur (i.e. I don't think anyone was saying that MacArthur themselves believed this, and we didn't try to work out whether they did).

Under this line of thinking, some "creative destruction" might be a positive. On the one hand, we risk losing some valuable institutional momentum, and perhaps some talented people. On the other hand, it allows for fresh ideas and approaches. 

Thanks, that's helpful background!

I agree tractability of the space is the main counterargument, and MacArthur might have had good reasons to leave. As I say in the post, I'd suggest thinking about this issue carefully if you're interested in giving to this area.

It's worth separating two issues:

  1. MacArthur's longstanding nuclear grantmaking program as a whole
  2. MacArthur's late 2010s focus on weapons-usable nuclear material specifically

The Foundation had long been a major funder in the field, and made some great grants, e.g. providing support to the programs that ultimately resulted in the Nunn-Lugar Act and Cooperative Threat Reduction (see Ben Soskis's report).

Over the last few years of this program, the Foundation decided to make a "big bet" on "political and technical solutions that reduce the world’s reliance on highly enriched uranium and plutonium" (see this 2016 press release), while still providing core support to many organizations. The fissile materials focus turned out to be badly timed, with Trump's 2018 withdrawal from the JCPOA and other issues. MacArthur commissioned an external impact evaluation, which concluded that "there is not a clear line of sight to the existing theory of change’s intermediate and long-term outcomes". That verdict applied to the fissile materials strategy, not to general nuclear security grantmaking ("Evaluation efforts were not intended as an assessment of the wider nuclear field nor grantees’ efforts, generally. Broader interpretation or application of findings is a misuse of this report.")

Often comments like the ones Sanjay outlined above (e.g. "after throwing a lot of good money after bad, they had not seen strong enough impact for the money invested") refer specifically to the evaluation report of the fissile materials focus.

My understanding is that the Foundation's withdrawal from the field as a whole (not just the fissile materials bet of the late 2010s) coincided with this, but was ultimately driven by internal organizational politics and shifting priorities, not impact.

I agree with Sanjay that "some 'creative destruction' might be a positive," but I think that this actually makes it a great time to help shape grantees' priorities to refocus the field's efforts back on GCR-level threats, major war between the great powers, etc. rather than nonproliferation. 

It seems like you might be underweighting the cumulative amount of resources. Even if you apply some pretty heavy decay rate (and it's unclear you should: usually we think of philanthropic investments compounding over time), avoiding nuclear war was a top global priority for decades, and it feels like we have a lot of intellectual and policy "legacy infrastructure" from that.
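
To make the decay-versus-compounding point concrete, here is a toy sketch in Python (all numbers are invented purely for illustration, not taken from the comment):

```python
# Toy illustration (invented numbers): the effective "stock" of past resources.
annual_spend = 1.0  # arbitrary units per year
years = 50

# Heavy decay: each year of past work loses 10% of its usefulness per year.
stock_decay = sum(annual_spend * 0.90 ** t for t in range(years))
print(f"{stock_decay:.1f}")  # ≈ 9.9 years' worth of current spending still "live"

# Compounding instead: past work grows in usefulness 3% per year.
stock_compound = sum(annual_spend * 1.03 ** t for t in range(years))
print(f"{stock_compound:.1f}")  # ≈ 112.8, a much larger legacy stock
```

Even under the pessimistic decay assumption, decades of past spending leave a stock roughly ten times current annual spending.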

I agree people often overlook that (and also future resources).

I think bio and climate change also have large cumulative resources.

But I see this as a significant reason in favour of AI safety, which has recently become less neglected on an annual basis but is a very new field compared to the others.

It's also a reason in favour of post-TAI causes like digital sentience.

I think Oppenheimer was a missed opportunity to raise money for the space. I would have liked it if Universal had pledged to donate 10% of their profits from the film to organizations advancing nuclear security.

Hi Ben,

I would be curious to understand why you continue to focus exclusively on philanthropic funding. I think a 100% reduction in philanthropic funding would only be a 1.16% (= 0.047/4.04) relative reduction in total funding (a quick sketch of this calculation follows the bullets below):

  • According to Founders Pledge's report on nuclear risk, "total philanthropic nuclear security funding stood at about $47 million per year ["between 2014 and 2020"]".
  • Based on 80,000 Hours’ profile on nuclear war, I estimate total funding is 4.04 G$, which I got from the mean of a lognormal distribution with 5th and 95th percentile equal to the lower and upper bound on the profile of 1 and 10 G$.
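
A minimal sketch of that estimate in Python (my reconstruction, not Vasco's code; variable names are mine), assuming the lognormal is fit so that its 5th and 95th percentiles match the profile's bounds of 1 and 10 G$:

```python
import numpy as np
from scipy.stats import norm

# Fit a lognormal to the 80,000 Hours bounds: 5th pct = 1 G$, 95th pct = 10 G$.
p5, p95 = 1.0, 10.0
z = norm.ppf(0.95)  # ≈ 1.645
mu = (np.log(p5) + np.log(p95)) / 2           # midpoint in log-space
sigma = (np.log(p95) - np.log(p5)) / (2 * z)  # log-space standard deviation

total = np.exp(mu + sigma**2 / 2)  # mean of a lognormal distribution
philanthropic = 0.047              # G$/yr (~$47m, per the Founders Pledge report)

print(f"total funding ≈ {total:.2f} G$/yr")                   # ≈ 4.04
print(f"philanthropic share ≈ {philanthropic / total:.2%}")   # ≈ 1.16%
```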

Focussing on a large relative reduction of a minor fraction of the funding makes it look like neglectedness increased a lot, but this is not the case based on the above. I think it is better to consider spending from other sources because these also contribute towards decreasing risk. In addition, I would not weight spending by cost-effectiveness (and much less give 0 weight to spending not aligned with effective altruism[1]), as this is what one is trying to figure out when using spending/neglectedness as a heuristic.

More importantly, I think you had better focus on assessing cost-effectiveness of representative promising interventions rather than funding:

  • Cost-effectiveness is what one ultimately cares about.
  • Cost-effectiveness can be relatively easily estimated for interventions aiming to decrease global catastrophic risk, which requires saving lives in expectation.
  • You think differences in cost-effectiveness across areas are much more significant than ones across interventions within an area:
    • "Perhaps the top 2.5% of measurable interventions within a cause area are actually 3–10 times better than the mean of measurable interventions [...]".
    • "[...] in terms of effectiveness, it’s more important to choose the right broad area to work in than it is to identify the best solution within a given area".
  • The level of funding is subject to quite arbitrary boundaries around what is considered nuclear security.

Likewise for the other 4 of 80,000 Hours' most pressing problems. For example, I assume the funding of and number of people working on AI safety are pretty sensitive to what is considered safety instead of capabilities, and it looks like there is not a clear distinction between the 2.

Christian Ruhl estimated that doubling nuclear risk reduction spending (which he mentions was 32.1 M$ in 2021) would save a life for 1.55 k$, which corresponds to a cost-effectiveness around 3.23 (= 5/1.55) times that of GiveWell's top charities (taking these to save a life for around 5 k$). I think corporate campaigns for chicken welfare are 1.44 k times as cost-effective as GiveWell's top charities, and therefore 446 (= 1.44*10^3/3.23) times as cost-effective as what Christian got for doubling nuclear risk reduction spending.
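
A quick sketch of that arithmetic (the ~5 k$ GiveWell figure is implied by the "= 5/1.55" ratio rather than stated explicitly; variable names are mine):

```python
# Ruhl's estimate: doubling nuclear risk reduction spending saves a life for ~1.55 k$.
cost_nuclear = 1.55   # k$ per life saved
cost_givewell = 5.0   # k$ per life saved (implied GiveWell top-charity benchmark)

nuclear_multiple = cost_givewell / cost_nuclear  # ≈ 3.23x GiveWell
campaigns_multiple = 1.44e3                      # chicken welfare campaigns vs GiveWell

print(f"{campaigns_multiple / nuclear_multiple:.0f}")  # ≈ 446
```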

I think it makes sense to evaluate interventions which aim to decrease nuclear risk in terms of lives saved (or similar) instead of reductions in extinction risk:

  • I estimated a nearterm annual risk of human extinction from nuclear war of 5.93*10^-12, which is astronomically low.
  • Interventions decreasing the probability of a given relative reduction in population or economic activity (which is how global catastrophic risk is usually defined) still have to save lives in expectation. So one could simply determine their impact in terms of lives saved, but weight more heavily lives saved at a lower population size.
    • As a side note, I tried this, weighting lives saved by the reciprocal of population size, and concluded that saving lives at higher population sizes is more cost-effective assuming the ratio between the initial and final population follows a power law.
[1] I do not think you are doing this here, but I seem to recall cases where only the amount of spending coming from sources aligned with effective altruism was highlighted.

I don't focus exclusively on philanthropic funding. I added these paragraphs to the post to clarify my position:

I agree that a full accounting of neglectedness should consider all resources going towards the cause (not just philanthropic ones), and that 'preventing nuclear war' more broadly receives significant attention from defence departments. However, even considering those resources, it still seems about as neglected as biorisk.

And the amount of philanthropic funding still matters because certain important types of work in the space can only be funded by philanthropists (e.g. lobbying or other policy efforts you don't want to originate within a certain national government).

I'd add that if there's almost no EA-inspired funding in a space, there are likely to be some promising gaps for someone applying that mindset.

In general, it's a useful approximation to think of neglectedness as a single number, but the ultimate goal is to find good grants, and to do that it's also useful to break down neglectedness into different types of resources, and consider related heuristics (e.g. that there was a recent drop).

--

Causes vs. interventions more broadly is a big topic. The very short version is that I agree doing cost-effectiveness estimates of specific interventions is a useful input into cause selection. However, I also think the INT framework is very useful. One reason is it seems more robust. Another reason is that in many practical planning situations that involve accumulating expertise over years (e.g. choosing a career, building a large grantmaking programme) it seems better to focus on a broad cluster of related interventions.

E.g. you could do a cost-effectiveness estimate of corporate campaigns and determine that ending factory farming is most cost-effective. But once you've spent 5 years building career capital in factory farming, the available interventions, or your calculations about them, will likely be very different.

Thanks for clarifying, Ben!

I'd add that if there's almost no EA-inspired funding in a space, there are likely to be some promising gaps for someone applying that mindset.

Agreed, although my understanding is that you think the gains are often exaggerated. You said:

Overall, my guess is that, in an at least somewhat data-rich area, using data to identify the best interventions can perhaps boost your impact in the area by 3–10 times compared to picking randomly, depending on the quality of your data.

Again, if the gain is just a factor of 3 to 10, it makes complete sense to me to focus on cost-effectiveness analyses rather than funding.

In general, it's a useful approximation to think of neglectedness as a single number, but the ultimate goal is to find good grants, and to do that it's also useful to break down neglectedness into different types of resources, and consider related heuristics (e.g. that there was a recent drop).

Agreed. However, deciding how much to weight a given relative drop in a fraction of funding (e.g. philanthropic funding) requires understanding its cost-effectiveness relative to other sources of funding. In this case, it seems more helpful to assess the cost-effectiveness of e.g. doubling philanthropic nuclear risk reduction spending instead of just quantifying it.

Causes vs. interventions more broadly is a big topic. The very short version is that I agree doing cost-effectiveness estimates of specific interventions is a useful input into cause selection. However, I also think the INT framework is very useful. One reason is it seems more robust.

The product of the 3 factors in the importance, neglectedness and tractability framework is the cost-effectiveness of the area, so I think the increased robustness comes from considering many interventions. However, one could also (qualitatively or quantitatively) aggregate the cost-effectiveness of multiple (decently scalable) representative promising interventions to estimate the overall marginal cost-effectiveness (promisingness) of the area.
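
For reference, the cancellation works like this under 80,000 Hours' usual factored definitions (notation mine, not from the comment):

$$\frac{\text{good done}}{\%\text{ of problem solved}} \times \frac{\%\text{ of problem solved}}{\%\text{ increase in resources}} \times \frac{\%\text{ increase in resources}}{\text{extra dollar}} = \frac{\text{good done}}{\text{extra dollar}}$$

The three factors are importance, tractability, and neglectedness respectively; the intermediate terms cancel, leaving marginal cost-effectiveness.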

Another reason is that in many practical planning situations that involve accumulating expertise over years (e.g. choosing a career, building a large grantmaking programme) it seems better to focus on a broad cluster of related interventions.

I agree, but I did not mean to argue for deemphasising the concept of cause area. I just think the promisingness of areas had better be assessed by doing cost-effectiveness analyses of representative (decently scalable) promising interventions.

E.g. you could do a cost-effectiveness estimate of corporate campaigns and determine that ending factory farming is most cost-effective.

To clarify, the estimate for the cost-effectiveness of corporate campaigns I shared above refers to marginal cost-effectiveness, so it does not directly refer to the cost-effectiveness of ending factory farming (which is far from a marginal intervention).

But once you've spent 5 years building career capital in factory farming, the available interventions, or your calculations about them, will likely be very different.

My guess would be that the acquired career capital would still be quite useful in the context of the new top interventions, especially considering that welfare reforms have been top interventions for more than 5 years[1]. In addition, if Open Philanthropy is managing their funds well, (all things considered) marginal cost-effectiveness should not vary much across time. If the top interventions in 5 years were expected to be less cost-effective than the current top interventions, it would make sense to direct funds from the worst/later to the best/earlier years until marginal cost-effectiveness is equalised (in the same way that it makes sense to direct funds from the worst to best interventions in any given year).

[1] Open Phil granted 1 M$ to The Humane League's cage-free campaigns in 2016, 7 years ago. Saulius Šimčikas' analysis of corporate campaigns looks into ones which happened as early as 2005, 19 years ago.

That means people who’ve spent decades building experience in the field will no longer be able to find jobs.


Hot take: I'd likely be less excited about people with decades in the field vs. new blood, given that things seem stuck.

I'd also be interested in that. Maybe worth adding that the other grantmaker, Matthew, is younger. He graduated in 2015 so is probably under 32.

Or you might like to look into Christian's grantmaking at Founders Pledge: https://80000hours.org/after-hours-podcast/episodes/christian-ruhl-nuclear-catastrophic-risks-philanthropy/

Executive summary: Despite the increasing risks of nuclear war, philanthropic funding for nuclear security has significantly decreased, presenting a critical funding gap that smaller donors could potentially fill.

Key points:

  1. Annual philanthropic funding for nuclear security has dropped from $50m to $30m due to the MacArthur Foundation's withdrawal from the field in 2020.
  2. Nuclear security receives less funding compared to other neglected EA causes like factory farming, catastrophic biorisks, and AI safety.
  3. Nuclear risk seems to be increasing with reports of Russia considering nuclear weapons against Ukraine, China's expanding arsenal, and North Korea's possession of at least 30 nuclear weapons.
  4. The collapse of FTX prevented the Future Fund from filling the funding gap, and Open Philanthropy has decided to focus on AI safety and biosecurity instead.
  5. Providing $3 million or more per year to experienced grantmakers like Carl Robichaud and Matthew Gentzel at Longview Philanthropy could help address the funding gap and support important nuclear policy efforts.
  6. This funding opportunity may be particularly attractive to donors who are skeptical about AI safety but agree that the world underrates catastrophic risks.

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

Thank you for drawing attention to this funding gap! Really appreciate it
