
Carl Robichaud mentioned in his EAGxVirtual talk that the nuclear risk space is funding-constrained. Dylan Matthews has also written about this at Vox.

There also seems to be a consensus that nuclear risk is higher than it has been in the recent past, with the Russia/Ukraine war and China building up its nuclear arsenal.

I would have expected the EA machine by now to have churned out a list of recommendations for where people can donate to help mitigate nuclear risk. But I haven't been able to find anything on the forum.

So where should I donate? Has something already been written up that I have just missed?

Answers

Longview’s Nuclear Weapons Policy Fund and Founders Pledge’s Global Catastrophic Risks Fund (disclosure: I manage the GCR Fund). We recently published a long report on nuclear war and philanthropy that may be useful, too. Hope this helps!

Thank you! Exactly what I was looking for.

Hi Luke,

Note that Carl Robichaud is a fund manager of the Nuclear Weapons Policy Fund, to which you can donate. You may also want to check Global Catastrophic Nuclear Risk: A Guide for Philanthropists. Personally:

I encourage funders who have been supporting efforts to decrease nuclear risk (improving prevention, response, or resilience) to do the following. If they aim to:

  • Decrease the risk of human extinction or improve the longterm future, support interventions to decrease AI risk by donating to the Long-Term Future Fund (LTFF), as I personally do with my donations.
  • Increase nearterm welfare, support interventions to improve farmed animal welfare by donating to the Animal Welfare Fund, or ACE’s Recommended Charity Fund.
  • Increase nearterm human welfare with high confidence, and put low weight on effects on animals, support interventions in global health and development by donating to GiveWell’s Top Charities Fund.
  • Continue in the nuclear space, support Longview’s Nuclear Weapons Policy Fund, which “directs funding to under-resourced and high-leverage opportunities to reduce the threat of large-scale nuclear warfare”. It is the only effective-altruism-aligned fund solely focussed on nuclear risk that I am aware of, and I like the four components of their grantmaking strategy:
    • Understanding the new nuclear risk landscape.
    • Reducing the likelihood of accidental and inadvertent nuclear war.
    • Educating policymakers on these issues.
    • Strengthening fieldwide capacity.

These are my personal recommendations at the margin. I am not arguing for interventions decreasing nuclear risk to receive zero resources, nor for all these to be funded via Longview’s Nuclear Weapons Policy Fund.

I agree with Giving What We Can’s recommendation for most people to donate to expert-managed funds, and have not recommended any specific organisations above.

I would suggest the Back from the Brink campaign in the United States (www.preventnuclearwar.org) or the International Campaign to Abolish Nuclear Weapons (https://www.icanw.org/).

Both organizations are bringing a grassroots advocacy approach to push for multilateral efforts to prevent nuclear war. Grassroots advocacy is the most critically underfunded sector in the nuclear security space. 

I've been looking for an answer to exactly this, in light of the Vox article; the best answers I've come up with so far:
* Nuclear Threat Initiative
* Center for Arms Control and Non-proliferation
* Arms Control Association

All of these organizations are primarily advocacy-based, but they've also served as a kind of "government-employee waiting/training area" for when US administrations were not amenable to movement on arms control.

I've also looked at the Nuclear Weapons Policy Fund, but have had trouble figuring out who/what it grants to and its theory of change; I'd appreciate any material folks have found!

Comments

I hope some of the other commenters have answers for you, but tbh, I don't think the limitation here is donations.

This problem seems wildly intractable, but we could be wrong.

Instead, I suspect the limitation is more about gathering a group of intelligent, persistent, and creative EAs to dedicate serious time to rethinking this whole issue from the ground up, in case there's anything that has been missed. I wouldn't put high odds on this turning up much, but it seems worth a shot.

Apologies to Luke if this comment isn't helpful. If that's the case, just let me know. Happy to remove if I'm taking the conversation off-course.

I don't think it's taking it off course! Thanks for your perspective.

I disagree that the problem of nuclear war is wildly intractable - people have been dealing with the issue more or less successfully for 80 years. And based on the Vox article, we are in a time where nuclear issues are relatively more important and more neglected than they were, say, 20 years ago.

To think that there's no organization that can have a meaningful impact at this time seems unlikely to me. To believe that, I think you'd have to believe that no organization in the past 80 years has had much impact on nuclear issues (maybe you do think that and could convince me).

I think that a group of EAs thinking about the field from the ground up certainly could help, but I don't agree with what I take to be your implication that the only practical way for EAs to have an impact on the issue is to approach it afresh. There are so many organizations, academics, and parts of government already focused on nuclear issues. It is a topic directly related to national security, which is arguably the most important thing to every government in the world.

I love the EA framework, but I do think there's a tendency for us to think "well, nobody has really thought about this issue sensibly until we came along. Good thing we're here now." Some amount of arrogance/confidence can be good, but I don't think nuclear security is an issue where this applies.

I wasn’t claiming that the current organisations haven’t had an impact, but that they haven’t really provided a path to solving this issue. Then again, maybe “solving” is a mistaken frame.

Hi Chris,

I just wanted to note that I do not think downvoting comments like yours is ideal (-7 karma from 2 votes, excluding my upvote and yours):

  • Disagreement can be signalled with disagreement votes.
  • Downvotes could be used to decrease the visibility of your comment relative to others, but as of now there are no others.

On the other hand, I think your comment would benefit from more context. I only upvoted it because it has negative karma, which I think should mostly be reserved for comments made in bad faith or with bad tone.
