
Introduction

The Longtermism Fund is pleased to announce that we will be providing grants to the following organisations in our first-ever grantmaking round:

  • Center for Human-Compatible Artificial Intelligence ($70,000 USD)
  • SecureBio ($60,000 USD)
  • Rethink Priorities' General Longtermism Team ($30,000 USD)
  • Council on Strategic Risks' nuclear weapons policy work ($15,000 USD)

These grants will be paid out in January 2023; the amounts were chosen based on what the Fund has received in donations to date.

In this payout report, we will provide more details about the grantmaking process and the grantees the fund is supporting.[1] This report was written by Giving What We Can, which is responsible for the fund's communications. Longview Philanthropy is responsible for the Fund's research and grantmaking. 

Read more about the Longtermism Fund here and about funds more generally here.

The grantmaking process

Longview actively investigates high-impact funding opportunities for donors looking to improve the long-term future. This means the grants for the Longtermism Fund are generally decided by:

  1. Longview’s general work (which is not specific to the Longtermism Fund) evaluating the most cost-effective funding opportunities. This involves thousands of hours of work each year from their team. Read more about why we trust Longview as a grantmaker.
  2. Choosing among those opportunities based on the scope of the Fund.

In addition to this, the Fund decided to support a diverse range of longtermist causes this grantmaking round. Our current plan is to provide grant reports approximately every six months.

The scope of the Fund

The scope of the Longtermism Fund is to support organisations that are:

  • Reducing existential and catastrophic risks.
  • Promoting, improving, and implementing key longtermist ideas.

In addition, the fund aims to support organisations with a compelling and transparent case for their cost-effectiveness that most donors interested in longtermism can understand, and/or that would benefit from being funded by a large number of donors.

Why the fund is supporting organisations working on a diverse range of longtermist causes

There are several major risks to the long-term future that need to be addressed, and they differ in their size, potential severity, and how many promising solutions are available and in need of additional funding. However, there is a lot of uncertainty about which of these risks are most cost-effectively addressed by the next philanthropic dollar, even among expert grantmakers. 

Given this, the Fund aimed to:

  1. Provide funding to organisations working across a variety of high-impact longtermist causes.
  2. Allocate funding to highly effective organisations working on these causes in a way that is representative of how the fund might deploy resources across areas in the future.

A benefit of this approach is that the grants highlight a diverse range of approaches to improving the long-term future. 

We expect the Fund to take a similar approach for the foreseeable future, but we could imagine it changing if there is a persistent and notable difference in the cost-effectiveness of funding opportunities between causes. At that point, so long as those opportunities are within the Fund’s scope, funding the most cost-effective opportunities will likely outweigh supporting a more diverse range of causes.

Grantees

This section provides further information about the grantees, along with a specific comment from Longview on why they chose to fund each organisation and how they expect the funding to be used. Each section links either to a charity page, where you can learn more about the grantee and support them through a direct donation, or to a public writeup describing the grantee’s work in more depth.

The Center for Human-Compatible Artificial Intelligence (CHAI) — $70,000

CHAI is a research institute focused on developing AI that is aligned with human values and will ultimately improve the long-term future for all beings. Their work includes conducting research on alignment techniques such as value learning, as well as engaging with policymakers and industry leaders to promote the development of safe and beneficial AI.

Longview: “We recommended a grant of $70,000 to CHAI on the basis of CHAI’s strong track record of training and enabling current and future alignment researchers, and building the academic AI alignment field. These funds will go toward graduate students, affiliated researchers and infrastructure to support their research.”

Learn more about The Center for Human-Compatible Artificial Intelligence.

SecureBio — $60,000

SecureBio is led by Prof. Kevin Esvelt of the MIT Sculpting Evolution Group, one of the leading researchers in the field of synthetic biology. In addition to contributing to developments in CRISPR and inventing CRISPR-based gene drives, his work has been published in Nature and Science, and covered in The New York Times, The New Yorker, The Atlantic, PBS, and NPR. SecureBio is working on some of the most promising biosecurity projects for tackling the scenarios most likely to cause permanent damage to the world, such as preventing bad actors from abusing DNA synthesis, building infrastructure for detecting the next pandemic very early, and developing advanced PPE and air sanitisation.

Longview: “We recommended a grant of $60,000 to SecureBio on the basis of their intense focus on the most extreme biological risks and their track record of developing promising approaches to tackling such risks. These funds will go toward research scientists, an air sanitisation trial and setting up a policy unit.”

Learn more about SecureBio.

Rethink Priorities' General Longtermism Team — $30,000

The General Longtermism Team at Rethink Priorities works on improving strategic clarity about how to improve the long-term future, such as whether and how much to invest in new longtermist causes. They have written about their past work and future plans in a recent EA Forum post.

Longview: “We recommended a grant of $30,000 to Rethink Priorities’ research and development work in a range of established and experimental existential risk areas on the basis of Rethink Priorities’ track record of generating decision-relevant research across a range of causes, and the promise of their research directions in existential risk areas. These funds will go toward researchers (including fellows) and research support.”

Learn more about Rethink Priorities’ General Longtermism Team.

The Council on Strategic Risks’ nuclear weapons policy work — $15,000

The Council on Strategic Risks is an organisation that focuses on reducing the risk of catastrophic events, including nuclear war. Their work on nuclear issues includes conducting research on nuclear proliferation and deterrence, engaging with policymakers to promote nuclear disarmament, and raising awareness about the risks of nuclear conflict.

Longview: “We recommended a grant of $15,000 to the Council on Strategic Risks’ nuclear programmes on the basis of their focus on the most extreme nuclear risks, ability to propose concrete policy actions which would reduce the risk of escalation into nuclear war, and strong networks within U.S. national and international security communities. These funds will go toward policy development and advocacy, as well as fellowships.”

Learn more about The Council on Strategic Risks.

Conclusion

We are grateful for the support of the 295 donors who have contributed over $175,000 USD in the fund’s first few months. We are excited to support the work of these organisations and believe that their activities are likely to have a significant positive impact on the long-term future. Please feel free to ask any questions in the comments below. [2]
 

  1. ^

    Note, this post was briefly published with incorrect grant sizes which have since been amended. 

  2. ^

    We may take some time to reply given that many staff are on holidays, and some questions may require coordinating across different organisations and time zones to answer correctly.

