Longview Philanthropy and Giving What We Can would like to announce a new fund for donors looking to support longtermist work: the Longtermism Fund.

In this post, we outline the motivation behind the fund, reasons you may (or may not) choose to donate using it, and some questions we expect donors may have. 

What work will the Longtermism Fund support?

The fund supports work that reduces catastrophic and existential risks — such as those posed by pandemics, unsafe artificial intelligence, and nuclear war — and work that promotes key longtermist ideas.

The Longtermism Fund aims to be a strong donation option for a wide range of donors interested in longtermism. The fund focuses on organisations that:

  • Have a compelling and transparent case in favour of their cost effectiveness that most donors interested in longtermism will understand; and/or
  • May benefit from being funded by a large number of donors (rather than one specific organisation or donor) — for example, organisations promoting longtermist ideas to the broader public may be more effective if they have been democratically funded.

There are other funders supporting longtermist work in this space, such as Open Philanthropy and the FTX Future Fund. The Longtermism Fund's grantmaking is managed by Longview Philanthropy, which works closely with these other organisations, and is well positioned to coordinate with them to efficiently direct funding to the most cost-effective organisations. 

The fund will make grants approximately once each quarter. To give donors a sense of the kind of work within the fund’s scope, here are some examples of organisations the fund would likely give grants to if funds were disbursed today:

  • The Johns Hopkins Center for Health Security (CHS) — CHS is an independent research organisation working to improve organisations, systems, and tools used to prevent and respond to public health crises, including pandemics.
  • Council on Strategic Risks (CSR) — CSR analyses and addresses core systemic risks to security. In its nuclear weapons policy work, CSR focuses on identifying nuclear systems and policies with the greatest potential to cause escalation into nuclear war (for example, nuclear-armed cruise missiles) and seeks to address them by working with key decision-makers. 
  • Centre for Human-Compatible Artificial Intelligence (CHAI) — CHAI is a research organisation aiming to shift the development of AI away from potentially dangerous systems we could lose control over, and towards provably safe systems that act in accordance with human interests even as they become increasingly powerful.
  • Centre for the Governance of AI (GovAI) — GovAI is a policy research organisation that aims to build “a global research community, dedicated to helping humanity navigate the transition to a world with advanced AI.”

The vision behind the Longtermism Fund

We think that longtermism as an idea and movement is likely to become significantly more mainstream — especially with Will MacAskill’s soon-to-be-released book, What We Owe The Future, and popular creators becoming more involved in promoting longtermist ideas. But what’s the call to action?

For many who want to contribute to longtermism, focusing on their careers (perhaps by pursuing one of 80,000 Hours’ high-impact career paths) will be their best option. But for many others — and perhaps for most people — the most straightforward and accessible way to contribute is through donations.

Our aim is for the Longtermism Fund to make it easier for people to support highly effective organisations working to improve the long-term future. Not only do we think that the money this fund will move will have significant impact, we also think the fund will provide another avenue for the broader community to engage with and implement these ideas. In turn, this makes it more likely that the value of future generations features in discussions with friends, voting choices, and careers.

And we think it’s worth being ambitious. GiveWell now moves hundreds of millions of dollars each year, with over a hundred thousand individual donors having contributed. In the best case, this fund can follow a similar trajectory, becoming a significant part of the longtermist funding ecosystem. 

Why donate to the Longtermism Fund?

We think there are three main reasons to support this fund:

  1. You want to reduce the chance of catastrophic and existential risks, thereby safeguarding the long-term future of humanity.
  2. The fund is managed by expert grantmakers, informed by years of research, who can help maximise the impact of your donation.
  3. By supporting a fund, not only are you donating as part of a community, but it’s also highly efficient: grantmakers can coordinate with organisations to ensure they receive the funding they can effectively use.

We discuss the above considerations in more depth on the Longtermism Fund page.

What’s the difference between the Longtermism Fund and the Long-Term Future Fund?

We think the Long-Term Future Fund (LTFF) from EA Funds is an excellent donation opportunity for donors with a lot of context on effective altruism and longtermism, but being accessible or legible to the broader public is not integral to the fund’s grantmaking — intentionally so. Instead, the LTFF has primarily worked within the niche of providing small to medium grants to individuals or early organisations. Often, this involves supporting researchers early in their careers, or highly targeted outreach efforts promoting longtermism. 

While we think this is extremely impactful, we expect many donors (especially those who are newer to the longtermist community) will prefer to support larger organisations whose work requires less context to understand. The Longtermism Fund aims to support those donors. We think there’s room for a new fund which takes into account the legibility of its grants, and puts greater emphasis on ensuring the reasoning behind each grant is explained in a way that will make sense to people with varying levels of context. Both funds will be supported by the Giving What We Can donation platform (formerly run by EA Funds). 

Along with the other EA Funds, the LTFF has shown that the ‘fund’ model can be highly successful: the LTFF is the most popular longtermist donation option among Giving What We Can members. We hope the Longtermism Fund can continue this success, and potentially reach an even wider pool of donors.

Won’t all the fund’s grants be highly fungible?

Fungibility and donor coordination are complicated topics. In many cases, major funders will react to Longtermism Fund grants by making smaller donations to the recipient organisations — this makes the donations ‘fungible’. We don’t see this as a major issue, for the following reasons:

  • If grants given by the Longtermism Fund end up freeing up resources of other funders working in this space, we see that as a good thing. However, we think it’s important to flag to donors that if their values are not aligned with these other funders (e.g., Longview’s other work, Open Philanthropy and the FTX Future Fund) they may not want to donate to the fund.
  • While in its early stages the fund’s grants are likely to be fungible with other funders’ work, this may change over time. As the amount of money the fund disburses grows, so does the amount of research and grantmaking effort it makes sense to allocate to the fund. It’s possible that, in the medium or long run, this fund will build the capacity to do its own grantmaking, thereby finding new opportunities to support that — but for the fund — may not otherwise have received funding.
  • While thinking at the margin is a powerful tool, so is coordination. We expect many grantees to prefer being funded by a large pool of individual donors, rather than by a single philanthropic foundation. We think in an optimal funding ecosystem, individual donors would support those kinds of organisations, while other funders could focus efforts on more niche areas where they have a better fit as a funder. We hope the Longtermism Fund can help push the funding ecosystem further in that direction.
  • There is in fact a substantial amount of highly impactful work being done that doesn’t meet the current cost-effectiveness bar for funding. For example, only 4% of the applications to the FTX Future Fund’s 2022 application round were accepted. When more funding is available, that bar can be lowered, thereby funding even more work. So to the extent that this fund increases the total amount of funding available, it will genuinely fund projects that otherwise may not have been funded.
  • Funds are an excellent way for individual donors to coordinate via expert grantmakers to maximise their personal counterfactual impact. We discuss some of the advantages to the fund model on the Longtermism Fund page.

So overall, we don’t think the concerns around fungibility significantly undermine the cost effectiveness of donating to this fund. And we think that even with the large amount of funding currently available, small donations still have a significant impact from a longtermist perspective.

Calls to action

We anticipate donors may have some questions about the Longtermism Fund — if there are any we miss, please ask in the comments, or reach out to michael[dot]townsend[at]givingwhatwecan.org. More information is also available on Giving What We Can’s website.

If you want to support the fund, donate and share it with others you think would be interested!


 

Comments

TL;DR: the Longtermism Fund aims to be a widely accessible call to action to accompany longtermism becoming more mainstream 😍

Great TL;DR! (I love comments like this <3 )

I'm happy this exists and I like the logo!

Also, do they/you intend to release writeups, in the style of EA Funds?

We’ll release payout reports each quarter. The exact format/style hasn’t yet been determined, but we’re aiming to explain the reasoning behind each grant to donors. 

Love to see this type of collaboration ❤️💚

While we think this is extremely impactful, we expect many donors (especially those who are newer to the longtermist community) will prefer to support larger organisations whose work requires less context to understand. The Longtermism Fund aims to support those donors. We think there’s room for a new fund which takes into account the legibility of its grants, and puts greater emphasis on ensuring the reasoning behind each grant is explained in a way that will make sense to people with varying levels of context.

Not sure how I feel about this. Seems like this might make longtermism more scalable, at the cost of screening off some opportunities. Do you expect the best opportunities to be above or below your bar for legibility? Do other people (e.g., from the LTFF or OpenPhil) agree with your view here? Personally, I have some intuitions that it might be below.

Seems like this might make longtermism more scalable, at the cost of screening off some opportunities.

The cost is lower than it naively looks because if the grantmakers are skilled, they should be able to understand what makes for a great-but-potentially-illegible grant, and forward it to other grantmakers.

I do agree with GWWC here and have been involved in some of the strategic decision-making that led to launching this new fund. I’m excited to have a donation option that is less weird than the LTFF for longtermists, but still (like GWWC) see a lot of value in both donation opportunities existing.

I think that excellent but illegible projects already have (in my probably biased opinion) good funding options through both the LTFF and the FTX regranting program.

Thanks for your questions!

  • As Linch suggests, opportunities that seem promising but aren’t sufficiently legible can be referred to other funders to investigate.
  • We reached out to staff at Open Philanthropy about setting up this fund, and received positive feedback. The EA Funds team (with input from LTFF grant managers at the time) had also previously considered setting up a “Legible Longtermism Fund” — my understanding is that the reason they didn’t was a lack of capacity, but they were in favour of the idea.
  • Whether the best opportunities are sufficiently legible is an interesting question:
    • It may depend on whether you look at it in terms of cost-effectiveness, or total benefit:
      • In pure cost-effectiveness terms:
        • I think I may share your intuitions that some of the smaller grants the Long-Term Future Fund makes might be more cost-effective than the typical grant I expect the Longtermism Fund to make (though, it’s difficult to evaluate this in advance of the Longtermism Fund making grants!).
        • That said, we anticipate the Longtermism Fund’s requirement for legibility might, in some cases, be beneficial to cost-effectiveness. For example, we anticipate that some organisations will prefer receiving grants from the Longtermism Fund (as it’s democratically funded and highly legible) rather than from other funders. Per his comment, Caleb (from EA Funds) and a reviewer from OP share this view.
      • In total benefit terms:
        • My intuition, informed by just double-checking Open Phil’s and FTX FF’s respective grants databases, is that a significant amount of longtermist grantmaking goes to work that would be sufficiently legible for this fund to support.
        • There therefore seems to me to be plenty of sufficiently legible work to support.

My bottom-line view is that the effect of the fund will be to:

  • Increase the total amount of funding going to longtermist work. This may be especially important if longtermism manages to scale up significantly and funding requirements increase (e.g., successful megaprojects).
  • Change the proportion of funding to legible/illegible opportunities provided by individual donors/large funders (i.e., the proportion of funding going to legible work that is provided by individual donors will increase).
  • Provide a funder that may be favourable to grantees who want to be funded by something democratically supported/highly legible.
  • Not, I think, make it more difficult for organisations whose work doesn’t meet its legibility requirement to receive funding — I don’t believe the fund’s ‘screening off’ of those opportunities will have that effect.

Worth noting that I’m speaking as a Researcher at GWWC, whereas Longview is primarily responsible for grantmaking. 

Provide a funder that may be favourable to grantees who want to be funded by something democratically supported/highly legible.

FWIW this is the most exciting ToC to me. In general (and speaking very coarsely) I think grantmakers should be optimizing to identify new vehicles to allow more great grants to be given, rather than e.g. better evaluations or improvements of existing opportunities, or fundraising.

Thanks Michael!

The fund is managed by expert grantmakers, informed by years of research, who can help maximise the impact of your donation.

Can you say a bit more about them and the rationale behind their selection? They have blurbs on the page, but they are pretty short. I’m not sure if this is correct, but none seem to have much of a background in AI/biosecurity. I have the impression that AI grants in particular can be gnarly to evaluate.

As mentioned on the page, the fund’s grantmaking will be informed by all of Longview’s work, and therefore everyone on their team plays a role. The fund managers listed on the page are especially likely to contribute. For work outside their focus areas, such as AI and biosecurity, grants will be heavily informed by others with expertise in those areas (including the work of other organisations, like Open Philanthropy and FTX FF).

Will you be taking open applications from organizations looking for funding?

At this stage, we won’t be taking applications from organizations looking to apply for funding. I’ll add this question and response to the FAQ — thanks for asking! This is something we plan to review within the first year.