At Founders Pledge, we just launched a new addition to our funds: the Global Catastrophic Risks Fund. This post gives a brief overview of the fund.

Key Points

  • The fund will focus on global catastrophic risks with a special emphasis on risk pathways through international stability and great power relations.
  • The fund’s shorter giving timelines complement our investing-to-give Patient Philanthropy Fund — we will publish a short write-up on this soon.
  • The fund is designed to offer high-impact giving opportunities for both longtermists and non-longtermists who care about catastrophic risks (see section on “Our Perspective” in the Prospectus).
  • You can find more information — including differences and complementarity with other funds and longtermist funders — in our Fund Prospectus.

Overview

The GCR Fund will build on Founders Pledge’s recent research into great power conflict and risks from frontier military and civilian technologies, with a special focus on international stability — a pathway that we believe shapes a number of the biggest risks facing humanity — and will work on:

  • War between great powers, like a U.S.-China clash over Taiwan or a U.S.-Russia war;
  • Nuclear war, especially emerging threats to nuclear stability, like vulnerabilities of nuclear command, control, and communications;
  • Risks from artificial intelligence (AI), including risks from both machine learning applications (like autonomous weapon systems) and from transformative AI;
  • Catastrophic biological risks, such as naturally-arising pandemics, engineered pathogens, laboratory accidents, and the misuse of new advances in synthetic biology; and
  • Emerging threats from new technologies and in new domains.

Moreover, the Fund will support field-building activities around the study and mitigation of global catastrophic risks, as well as methodological interventions — new ways of studying these risks, such as probabilistic forecasting and experimental wargaming. International security is the Fund’s current specialty, and we expect its areas of expertise to expand as we build capacity.

Current and Future Generations

This Fund is designed both to tackle threats to humanity’s long-term future and to take action now to protect every human being alive today. We believe that some interventions on global catastrophic risks can be justified on a simple cost-benefit analysis alone, and that safeguarding the long-term future of humanity is among the most important things we can work on (in practice, the two often converge). Whether or not you share our commitment to longtermism or believe that reducing existential risks is particularly important, you may still be interested in the Fund for the simple reason that you want to help prevent the deaths and suffering of millions of people.

To illustrate this, the Fund may support the development of confidence-building measures on AI — like an International Autonomous Incidents Agreement — with the aim of both mitigating the destabilizing impact of near-term military AI applications and providing a focal point for longtermist AI governance. Some grants will focus mainly on near-term risks; others mainly on longtermist concerns.

Like our other Funds, this will be a philanthropic co-funding vehicle designed to enable us to pursue a number of grantmaking opportunities, including:

  • Active grantmaking, working with organizations to shape their plans for the future;
  • Seeding new organizations and projects with high expected value;
  • Committing to multi-year funding to give stability to promising projects and decrease their fundraising costs;
  • Filling small funding gaps that fall between the cracks of traditional philanthropy;
  • Pooling donations to support projects beyond the reach of individual donors;
  • Partnering and collaborating with other funders and organizations;
  • Making expert-advised grants by working with domain experts and current and former policymakers; and
  • Responding to dynamic opportunities, like short-lived policy windows.

One particular use case we want to highlight is an improved ability to partner and collaborate with other funders. There are several scenarios in which individual donors are unable to support a project on their own: when funding requirements are very large, when grant recipients would prefer multiple donors, or when a diverse group of supporters sends an important signal about an organization to the public.

We see the GCR Fund as a complement to existing efforts and envision collaborating across the funding landscape. For example, we are in touch with our partners at various longtermist grantmaking organizations and envision potentially co-funding certain projects with them. Moreover, the GCR Fund is complementary to the Patient Philanthropy Fund, a Founders Pledge-incubated project to invest and grow philanthropic resources until they are needed most — donors to both funds can allocate their giving according to their beliefs and predictions about the optimal timing of philanthropy. For more on the timing of giving, see Founders Pledge’s report on Investing to Give. Tom Barnes and I will publish a separate post on giving timelines and the complementarity of the Funds soon.

We are actively raising funds with Founders Pledge members and outside donors. You can find us on the Founders Pledge website, on every.org, and on Giving What We Can, and you can get in touch with me directly if you want to learn more.


About Founders Pledge

Founders Pledge is a community of over 1,700 tech entrepreneurs finding and funding solutions to the world’s most pressing problems. Through cutting-edge research, world-class advice, and end-to-end giving infrastructure, we empower members to maximize their philanthropic impact by pledging a meaningful portion of their proceeds to charitable causes. Since 2015, our members have pledged over $8 billion and donated more than $800 million globally. As a nonprofit, we are grateful to be community-supported. Together, we are committed to doing immense good. founderspledge.com



This is a great development. I look forward to seeing especially nuclear security projects get off the ground. A core part of our altruistic ambition depends on how well we are able to manage the global order. If we have EA-oriented ideas and projects at the forefront of international stability interventions, I can imagine how much potential impact that means. Congratulations!