
Longview Philanthropy and Giving What We Can would like to announce a new fund for donors looking to support longtermist work: the Longtermism Fund.

In this post, we outline the motivation behind the fund, reasons you may (or may not) choose to donate using it, and some questions we expect donors may have. 

What work will the Longtermism Fund support?

The fund supports work that reduces existential and catastrophic risks, such as those posed by pandemics, unsafe artificial intelligence, and nuclear war.

The Longtermism Fund aims to be a strong donation option for a wide range of donors interested in longtermism. The fund focuses on organisations that:

  • Have a compelling and transparent case in favour of their cost effectiveness that most donors interested in longtermism will understand; and/or
  • May benefit from being funded by a large number of donors (rather than one specific organisation or donor) — for example, organisations promoting longtermist ideas to the broader public may be more effective if they have been democratically funded.

There are other funders supporting longtermist work in this space, such as Open Philanthropy. The Longtermism Fund's grantmaking is managed by Longview Philanthropy, which works closely with these other organisations, and is well positioned to coordinate with them to efficiently direct funding to the most cost-effective organisations. 

The fund will make grants approximately once each quarter. To give donors a sense of the kind of work within the fund’s scope, here are some examples of organisations the fund would likely give grants to if funds were disbursed today:

  • The Johns Hopkins Center for Health Security (CHS) — CHS is an independent research organisation working to improve organisations, systems, and tools used to prevent and respond to public health crises, including pandemics.
  • Council on Strategic Risks (CSR) — CSR analyses and addresses core systemic risks to security. In its nuclear weapons policy work, CSR focuses on identifying nuclear systems and policies with the greatest potential to cause escalation into nuclear war (for example, nuclear-armed cruise missiles) and seeks to address them by working with key decision-makers. 
  • Centre for Human-Compatible Artificial Intelligence (CHAI) — CHAI is a research organisation aiming to shift the development of AI away from potentially dangerous systems we could lose control over, and towards provably safe systems that act in accordance with human interests even as they become increasingly powerful.
  • Centre for the Governance of AI (GovAI) — GovAI is a policy research organisation that aims to build “a global research community, dedicated to helping humanity navigate the transition to a world with advanced AI.”

The vision behind the Longtermism Fund

We think that longtermism as an idea and movement is likely to become significantly more mainstream — especially with Will MacAskill’s soon-to-be-released book, What We Owe The Future, and popular creators becoming more involved in promoting longtermist ideas. But what’s the call to action?

For many who want to contribute to longtermism, focusing on their careers (perhaps by pursuing one of 80,000 Hours’ high-impact career paths) will be their best option. But for many others — and perhaps for most people — the most straightforward and accessible way to contribute is through donations.

Our aim is for the Longtermism Fund to make it easier for people to support highly effective organisations working to improve the long-term future. Not only do we think that the money this fund will move will have significant impact, we also think the fund will provide another avenue for the broader community to engage with and implement these ideas. In turn, this makes it more likely that the value of future generations features in discussions with friends, voting choices, and careers.

And we think it’s worth being ambitious. GiveWell now moves hundreds of millions of dollars each year, with over a hundred thousand individual donors having contributed. In the best case, this fund can follow a similar trajectory, becoming a significant part of the longtermist funding ecosystem. 

Why donate to the Longtermism Fund?

We think there are three main reasons to support this fund:

  1. You want to reduce the chance of catastrophic and existential risks, thereby safeguarding the long-term future of humanity.
  2. The fund is managed by expert grantmakers, informed by years of research, who can help maximise the impact of your donation.
  3. By supporting a fund, not only are you donating as part of a community, but it’s also highly efficient: grantmakers can coordinate with organisations to ensure they receive the funding they can effectively use.

We discuss the above considerations in more depth on the Longtermism Fund page.

What’s the difference between the Longtermism Fund and the Long-Term Future Fund?

We think the Long-Term Future Fund (LTFF) from EA Funds is an excellent donation opportunity for donors with a lot of context on effective altruism and longtermism, but being accessible or legible to the broader public is not integral to the fund’s grantmaking — intentionally so. Instead, the LTFF has primarily worked within the niche of providing small to medium grants to individuals or early organisations. Often, this involves supporting researchers early in their careers, or highly targeted outreach efforts promoting longtermism. 

While we think this is extremely impactful, we expect many donors (especially those who are newer to the longtermist community) will prefer to support larger organisations whose work requires less context to understand. The Longtermism Fund aims to support those donors. We think there’s room for a new fund which takes into account the legibility of its grants, and puts greater emphasis on ensuring the reasoning behind each grant is explained in a way that will make sense to people with varying levels of context. Both funds will be supported by the Giving What We Can donation platform (formerly run by EA Funds). 

Along with the other EA Funds, the LTFF has shown that the ‘fund’ model can be highly successful: the LTFF is the most popular longtermist donation option among Giving What We Can members. We hope that the Longtermism Fund can continue this success, and potentially reach an even wider pool of donors.

Won’t all the fund’s grants be highly fungible?

Fungibility and donor coordination are complicated topics. In many cases, major funders will react to Longtermism Fund grants by making smaller donations to the recipient organisations — this makes the donations ‘fungible’. We don’t see this as a major issue, for the following reasons:

  • If grants given by the Longtermism Fund end up freeing up resources of other funders working in this space, we see that as a good thing. However, we think it’s important to flag to donors that if their values are not aligned with these other funders (e.g., Longview’s other work and Open Philanthropy) they may not want to donate to the fund.
  • While the fund’s grants are likely to be fungible with other funders’ work in its early stages, this may change over time. As the amount of money the fund disburses grows, so does the amount of research and grantmaking effort it makes sense to allocate to the fund. It’s possible that, in the medium or long run, the fund will build the capacity to do its own grantmaking, thereby finding new opportunities to support that, but for the fund, may not otherwise have received funding.
  • While thinking at the margin is a powerful tool, so is coordination. We expect many grantees to prefer being funded by a large pool of individual donors, rather than by a single philanthropic foundation. We think in an optimal funding ecosystem, individual donors would support those kinds of organisations, while other funders could focus efforts on more niche areas where they have a better fit as a funder. We hope the Longtermism Fund can help push the funding ecosystem further in that direction.
  • There is in fact a substantial amount of work being done that is highly impactful, but doesn’t meet the current bar for cost effectiveness to be funded. For example, only 4% of the applications to the Future Fund’s 2022 application round were accepted. When more funding is available, that bar can lower, thereby funding even more work. So to the extent this fund might increase the total amount of funding available, it will also genuinely be funding projects that otherwise may not have been funded.
  • Funds are an excellent way for individual donors to coordinate via expert grantmakers to maximise their personal counterfactual impact. We discuss some of the advantages to the fund model on the Longtermism Fund page.

So overall, we don’t think the concerns around fungibility significantly undermine the cost effectiveness of donating to this fund. And we think that even with the large amount of funding currently available, small donations still have a significant impact from a longtermist perspective.

Calls to action

We anticipate donors may have some questions about the Longtermism Fund — if there are any we miss, please ask in the comments, or reach out to michael[dot]townsend[at]givingwhatwecan.org. More information is also available on Giving What We Can’s website.

If you want to support the fund, donate and share it with others you think would be interested!


 

Comments



Tl;dr the Longtermism Fund aims to be a widely accessible call-to-action to accompany longtermism becoming more mainstream 😍

Great TL;DR! (I love comments like this <3 )

I'm happy this exists and I like the logo!

Also, do they/you intend to release writeups, in the style of EA Funds?

We’ll release payout reports when we disburse funds (approximately quarterly). The exact format/style hasn’t yet been determined, but we’re aiming to explain the reasoning behind each grant to donors.

 

Love to see this type of collaboration ❤️💚

While we think this is extremely impactful, we expect many donors (especially those who are newer to the longtermist community) will prefer to support larger organisations whose work requires less context to understand. The Longtermism Fund aims to support those donors. We think there’s room for a new fund which takes into account the legibility of its grants, and puts greater emphasis on ensuring the reasoning behind each grant is explained in a way that will make sense to people with varying levels of context.

Not sure how I feel about this. Seems like this might make longtermism more scalable, at the cost of screening off some opportunities. Do you expect the best opportunities to be above or below your bar for legibility? Do other people (e.g., from the LTFF or OpenPhil) agree with your view here? Personally, I have some intuitions that it might be below.

Seems like this might make longtermism more scalable, at the cost of screening off some opportunities.

The cost is lower than it naively looks because if the grantmakers are skilled, they should be able to understand what makes for a great-but-potentially-illegible grant, and forward it to other grantmakers.

Good point!

I do agree with GWWC here and have been involved in some of the strategic decision-making that led to launching this new fund. I'm excited to have a donation option that is less weird than the LTFF for longtermists, but I still (like GWWC) see a lot of value in both donation opportunities existing.

I think that excellent but illegible projects already have (in my probably biased opinion) good funding options through both the LTFF and the FTX regranting program.

Thanks for your questions!

  • As Linch suggests, opportunities that seem promising but aren’t sufficiently legible can be referred to other funders to investigate.
  • We reached out to staff at Open Philanthropy about setting up this fund, and received positive feedback. The EA Funds team (with input from LTFF grant managers at the time) had also previously considered setting up a “Legible Longtermism Fund” — my understanding is that the reason they didn’t was a lack of capacity, but they were in favour of the idea.
  • Whether the best opportunities are sufficiently legible is an interesting question:
    • It may depend on whether you look at it in terms of cost-effectiveness, or total benefit:
      • In pure cost-effectiveness terms:
        • I think I may share your intuitions that some of the smaller grants the Long-Term Future Fund makes might be more cost-effective than the typical grant I expect the Longtermism Fund to make (though, it’s difficult to evaluate this in advance of the Longtermism Fund making grants!).
        • That said, we anticipate that the Longtermism Fund’s requirement for legibility might, in some cases, be beneficial to cost-effectiveness. For example, we anticipate that some organisations will prefer receiving grants from the Longtermism Fund (as it’s democratically funded and highly legible) rather than from other funders. Per his comment, Caleb (from EA Funds) and a reviewer from OP share this view.
      • In total benefit terms:
        • My intuition, informed by just double-checking Open Phil’s and FTX FF’s respective grants databases, is that a significant amount of longtermist grantmaking goes to work that would be sufficiently legible for this fund to support.
        • There therefore seems to me to be plenty of sufficiently legible work to support.

My bottom-line view is that the effect of the fund will be to:

  • Increase the total amount of funding going to longtermist work. This may be especially important if longtermism manages to scale up significantly and funding requirements increase (e.g., successful megaprojects).
  • Change the proportion of funding for legible/illegible opportunities provided by individual donors versus large funders (i.e., the proportion of funding for legible work provided by individual donors will increase).
  • Provide a funder that may be favourable to grantees who want to be funded by something democratically supported/highly legible.
  • I don’t think its ‘screening off’ of opportunities that don’t meet its legibility requirement will make it more difficult for those organisations to receive funding.

Worth noting that I’m speaking as a Researcher at GWWC, whereas Longview is primarily responsible for grantmaking. 

Provide a funder that may be favourable to grantees who want to be funded by something democratically supported/highly legible.

FWIW this is the most exciting ToC to me. In general (and speaking very coarsely) I think grantmakers should be optimizing to identify new vehicles to allow more great grants to be given, rather than e.g. better evaluations or improvements of existing opportunities, or fundraising.

Thanks Michael!

The fund is managed by expert grantmakers, informed by years of research, who can help maximise the impact of your donation.

Can you say a bit more about them and the rationale behind their selection? They have blurbs on the page, but they are pretty short. I'm not sure if this is correct, but none seem to have much of a background in AI/biosecurity, and I have the impression that AI grants in particular can be gnarly to evaluate.

As mentioned on the page, the fund’s grantmaking will be informed by all of Longview’s work, and therefore everyone in their team plays a role. The fund managers listed on the page are especially likely to contribute. For work outside their focus areas, such as in AI and Bio, the grants will be heavily informed by others with expertise in those areas (including the work of other organisations, like Open Philanthropy and FTX FF). 

You published a grant report about a small 2022 grants round; do you know when you plan to do the next round?

Yes, we are aiming to publish this next week, and it should include an explanation of the delay. (Also, thanks for checking in on this - the accountability is helpful.)

Will you be taking open applications from organizations looking for funding?

At this stage, we won’t be taking applications from organizations looking to apply for funding. I’ll add this question and response to the FAQ — thanks for asking! This is something we plan to review within the first year. 
