
The Long-Term Future Fund (LTFF) makes small, targeted grants with the aim of improving the long-term trajectory of humanity. We are currently fundraising to cover our grantmaking budget for the next 6 months. We would like to give donors more insight into how we prioritize different projects, so they have a better sense of how we plan to spend their marginal dollar. Below, we’ve compiled fictional but representative grants to illustrate what sort of projects we might fund depending on how much we raise for the next 6 months, assuming we receive grant applications at a similar rate and quality to the recent past. 

Our motivations for presenting this information are a) to provide transparency about how the LTFF works, and b) to move the EA and longtermist donor communities towards a more accurate understanding of what their donations are used for. Sometimes, when people donate to charities (EA or otherwise), they may wrongly assume that their donations go towards funding the average, or even more optimistically, the best work of those charities. However, it is usually more useful to consider the marginal impact for the world that additional dollars would buy. By offering illustrative examples of the sort of projects we might fund at different levels of funding, we hope to give potential donors a better sense of what their donations might buy, depending on how much funding has already been committed. We hope that this post will help improve the quality of thinking and discussions about charities in the EA and longtermist communities.

For donors who believe that the current marginal LTFF grants are better than marginal funding of all other organizations, please consider donating! Compared to the last 3 years, we now have both a) an unusually high quality and quantity of applications and b) an unusually low level of donations, which means we’ll have to raise our bar substantially if we do not receive additional donations. This is an especially good time to donate, as donations are matched 2:1 by Open Philanthropy (OP donates $2 for every $1 you donate). That said, if you instead believe that marginal funding of another organization is better than current marginal LTFF grants (by a factor between 1x and 3x, depending on how you view marginal OP money), then please do not donate to us; instead, donate to them and/or save the money for later.
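The “between 1x and 3x” range can be made concrete with a bit of arithmetic. Below is a simplified model of our own devising (not an official LTFF calculation): it treats the counterfactual value of matched OP dollars as a single number between 0 and 1.

```python
def effective_leverage(op_counterfactual_value):
    """Rough leverage of $1 donated under the 2:1 match.

    $1 donated moves $3 to LTFF ($1 yours + $2 matched by OP). If the
    matched OP dollars would otherwise have been spent just as well
    elsewhere (counterfactual value 1.0), your leverage is 1x; if they
    would otherwise have produced no value (0.0), it is 3x.
    """
    your_dollar = 1.0
    matched = 2.0
    return your_dollar + matched * (1.0 - op_counterfactual_value)

print(effective_leverage(1.0))  # 1.0 — OP money fully as valuable elsewhere
print(effective_leverage(0.5))  # 2.0 — intermediate view
print(effective_leverage(0.0))  # 3.0 — OP money otherwise worthless
```

In this toy model, donating to another organization beats donating to LTFF only if that organization’s marginal grants are at least `effective_leverage(...)` times better than LTFF’s, given your view of OP’s counterfactual.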

Background on the LTFF

  • We are committed to improving the long-term trajectory of civilization, with a particular focus on reducing global catastrophic risks.
  • We specialize in funding early stage projects rather than established organizations.
  • From March 2022 to March 2023, we received 878 applications and funded 263 as grants, worth ~$9.1M total (average ~$34.6k/grant). To our knowledge, we have made more small grants in this time period than any other longtermist- or EA-motivated funder.
  • Other funders in this space include Open Philanthropy, the Survival and Flourishing Fund, and more recently Lightspeed Grants and Manifund.
  • Historically, ~40% of our funding has come from Open Phil. However, we are trying to become more independent of Open Phil. As a temporary stopgap measure, Open Phil is matching donations to LTFF 2:1 instead of granting to us directly.
  • 100% of money we fundraise for LTFF qua LTFF goes to grantees; we fundraise separately and privately for operational costs.
  • We try to be very willing to fund weird things that the grantmakers’ inside views believe are really impactful for the long-term future.
  • You can read more about our work at our website here, or in our accompanying payout report here.

Methodology for this analysis

At the LTFF, we assign each grant application to a Principal Investigator (PI) who assesses its potential benefits, drawbacks, and financial cost. The PI scores the application from -5 to +5. Subsequently, other fund managers may also score it. The grant gets approved if its average score surpasses the funding threshold, which historically varied from 2.0 to 2.5, but is currently at 2.9. 

Here's how we created the following list of fictional grants:

  • Caleb ranked all LTFF grant applications from the past six months according to their average scores.
  • Caleb calculated the total cost of funding all above-threshold grants as a function of the funding threshold, starting with the highest scoring grant and adding costs as the threshold decreases.
  • Caleb grouped the applications based on the cumulative budget required for them to surpass the threshold.
  • Caleb and Linch randomly selected grants from each group.
  • Linch modified and blended grants to form representative fictitious grants based on brief descriptions.
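The ranking-and-grouping steps above can be sketched roughly as follows. This is an illustrative reconstruction, not our actual tooling: all names, scores, costs, and tier budgets are invented.

```python
# Illustrative sketch of the tiering procedure: rank applications by
# average score, accumulate cost walking down the ranking, and assign
# each grant to the first tier whose cumulative budget covers it.
# Grants beyond the largest budget are simply left unassigned.

applications = [
    {"name": "A", "avg_score": 4.1, "cost": 40_000},
    {"name": "B", "avg_score": 3.4, "cost": 55_000},
    {"name": "C", "avg_score": 3.0, "cost": 120_000},
    {"name": "D", "avg_score": 2.6, "cost": 75_000},
    {"name": "E", "avg_score": 2.2, "cost": 15_000},
]

def tier_grants(applications, tier_budgets):
    """Group score-ranked grants by the cumulative budget needed to fund them."""
    ranked = sorted(applications, key=lambda a: a["avg_score"], reverse=True)
    tiers = {budget: [] for budget in tier_budgets}
    cumulative = 0
    for app in ranked:
        cumulative += app["cost"]
        for budget in sorted(tier_budgets):
            if cumulative <= budget:
                tiers[budget].append(app["name"])
                break
    return tiers

print(tier_grants(applications, [100_000, 250_000, 500_000]))
# → {100000: ['A', 'B'], 250000: ['C'], 500000: ['D', 'E']}
```

Note that this captures only the mechanical part of the process; the subsequent random sampling and blending into fictitious grants was done by hand.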

This process is highly qualitative and is intended to demonstrate the types of projects we'd fund at various donation levels. The final ranking likely does not represent the views of any individual fund manager very well.

This analysis has weaknesses, including that:

  • Our current grant scoring system lacks precision except at levels close to the funding threshold. When scoring applications, we generally aim to determine the probability a grant reaches the threshold, not to track explicit expected cost-effectiveness. If a grant is clearly above or below the threshold, fund managers won’t score it as precisely.
  • In this post, we offer limited information on each hypothetical applicant's suitability for each hypothetical project, but in reality, both applicant quality and applicant-project fit significantly influence our application assessment.
  • For the analysis, we conservatively assume that the quality of applications won't improve even if funding surpasses expectations. 

Caveat for grantseekers

This article is primarily aimed at donors, not grantees. We believe that the compatibility between an applicant and their proposed project, including personal interest and enthusiasm, plays a crucial role in the project’s success. Therefore, we discourage tailoring your applications to match the higher tiers of this list; we do not expect this to increase either your probability of getting funded or the project’s eventual impact conditional upon funding.

Grant tiers

Our primary aim in awarding grants is to optimize the trajectory of the long-term future. To that end, grantmakers try to evaluate each grant according to their subjective worldviews of whether spending $X on the grant is a sufficiently good use of limited resources given that we only have $Y total to spend for our longtermist goals. 

In the tiers below, we illustrate the types of projects (and corresponding grant costs[1] in brackets) we'd potentially finance if our fundraising over the next six months reaches that tier. For each tier, we list only projects we likely wouldn't finance if our fundraising only met the preceding tier's total. For example, if we raised $1.2 million, we would likely fund everything in the $100,000 and $1M tiers, but only a small subset (up to $200,000) of projects in the $5M tier, and nothing in the $10M tier.

To put it differently, as the funding amount for the LTFF increases, the threshold for applications we would consider funding decreases, as there is more funding to go around. 

If LTFF raises $100,000

These are some fictional projects that we might fund if we had roughly $100,000 of funding over the next 6 months. Note that this is not a very realistic hypothetical: in worlds where we actually only have ~$100,000 of funding over 6 months, a) many LTFF grantmakers would likely quit, and b) the remaining staff and volunteers would likely think that referring grants to other grantmakers was a more important part of our job than allocating the remaining $100k. Still, these are projects that would meet our bar even if our funding was severely constrained in this way.

  • Funding to cover a four-month fellowship for a vital senior contractor at a leading AI Safety research institution ($40k).
  • A year-long stipend for a highly-recommended Theoretical Computer Science PhD graduate in a low cost-of-living area to independently investigate new failure modes of state-of-the-art narrowly superhuman models ($55k).
  • Financial support for a skilled expert in a relevant subfield to conduct three weeks of joint work with a team at an impactful biosecurity research organization, covering travel and accommodation expenses ($4k).[2]

If LTFF raises $1M

Below are some hypothetical projects we might additionally fund if we had roughly $1M of funding over the next 6 months (roughly 1/5 - 1/6 of our past spending rate). This is roughly how much money we would have if we only account for our current reserves and explicit promises of additional funding we’ve received. 

  • Funding for a six-month fellowship to support a technology policy consultant in advising on a new existential risk division for a US think tank ($50k).
  • A four-month stipend, including physical therapy allowance, for a former research manager from an effective altruism research organization, enabling a professional sabbatical to address ongoing issues with repetitive strain injury and explore a career in independent research, possibly in digital sentience ($32k).
  • A four-month stipend to help a physics PhD student to transition to AI safety work ($7.5k).
  • One-year stipend for a researcher with a Machine Learning PhD to develop a novel approach to interpretability using LLMs ($120k).

If LTFF raises $5M

$5M over 6 months is our current target, and roughly how much we want to raise to cover our grantmaking budget going forward. Note that our current threshold (2.9) is between the $1M and $5M bars.

Should we secure roughly $5M in funding for the next six months, corresponding to our funding threshold from November 2022 to July 2023 (2.5), we might additionally fund the following hypothetical grants:

  • Nine months financial support for a gifted recent Master's in Machine Learning graduate to support a research collaboration on ML safety evaluations with a senior researcher ($75k).
  • Funding for an AI governance researcher with legal expertise to dedicate two weeks of work to a paper exploring the practical effects of antitrust laws on AI safety coordination ($2.5k).
  • Support for a team of translators and editors to translate the Cold Takes “Most Important Century” blog series into Spanish ($15k).
  • An hourly rate of $50 for a cybersecurity PhD graduate to expand their skills in AI safety and conduct original research on important cybersecurity questions in AI governance ($50/hr).
  • Funding to enable 10-20 mid-career materials engineering professionals to participate in a professional conference focused on reducing biorisk ($15k).

Aside from Linch: 

To add more color to these examples, I’d like to discuss the sort of applications that are relatively close to the current LTFF funding bar – that is, the kind of applications we’ll neither obviously accept nor obviously reject. Hopefully, this will both demystify some of the inner workings of LTFF, as well as help donors make more informed decisions. 

Some grant applications to the LTFF look like the following: a late undergraduate or recent graduate from an Ivy League university or a comparable institution requests a grant to conduct independent research or comparable work in a high-impact field, but we don’t find the specific proposal particularly compelling. For example, the mentee of a fairly prominent AI safety or biosecurity researcher may request 6-12 months’ stipend to explore a particular research project that their mentor(s) are excited about, but LTFF fund managers and some of our advisors are unexcited about. Alternatively, they may want to take an AGISF course, or to read and think enough to form a detailed world model about which global catastrophic risks are the most pressing, in the hopes of then transitioning their career towards combating existential risk. 

In these cases, the applicant often shows some evidence of interest and focus (e.g., participation in EA local groups/EA Global or existential risk reading groups) and some indications of above-average competence or related experience, but nothing exceptional. Factors that would positively influence my impression include additional signs of dedication, a more substantial track record in relevant areas, indications of exceptional talent, or other signs of potential for a notably successful early-career investment. Conversely, evidence of deceitfulness, problematic unilateral actions or inclinations, rumors or indications of sketchiness not quite severe enough to be investigated by Community Health, or other signs or evidence of possibly becoming a high-downside grant would negatively influence my assessment.

I think the median grant application of this kind (without extenuating evidence) would be a bit below our funding bar until July (2.5), and just above our pre-November 2022 bar.

If LTFF raises $7.5M

If we accumulate $7.5M in funds over the next six months, we might additionally support the following hypothetical grants. This aligns with our pre-November 2022 grantmaking threshold (2.0). However, we never actually spent as much as $7.5M in any six-month period before November 2022: applications have increased in both quantity and quality this year, and previously there were not enough applications above the old bar to fund $7.5M worth of projects.

  • A workshop led by professional facilitators to train employees of longtermist organizations in specific project management techniques ($10k).
  • Funding for a junior longtermist researcher to hire a professional communications firm to teach best communication practices to professionals in longtermist organizations ($58k).
  • A one-year grant for a Computer Science PhD graduate, previously funded for Machine Learning safety upskilling without substantial results, to shift into agent foundations research ($85k).
  • Additional funds for human evaluations in studies comparing methods for fine-tuning language models ($11k).

If LTFF raises $10M

Below are some hypothetical grants that we might additionally fund if we had $10M to spend over the next 6 months. This would correspond to a lower grantmaking bar than at any point in LTFF’s history. That said, should we actually receive such a substantial influx, we might instead opt to carry out proactive grantmaking projects we deem more impactful, and/or reconsider our general policy against saving funds.

We will always refrain from funding projects we believe are net harmful in expectation, regardless of the funds raised.

  • A six-month stipend for a junior software engineer to continue AI alignment research at a notable research institution, including a budget for upskilling ($40k).
  • Extra funding for a student who wrote a bachelor’s thesis on existential risk to pursue a master's degree in philosophy at a renowned university ($12k).
  • Travel funding for a Machine Learning PhD student to present an alignment paper at a prestigious conference, despite the fund managers' internal belief that the research itself isn't particularly impactful ($2k).
  • Financial support for a Master's degree study in conflict and security with a concentration on AI and geopolitical studies ($75k).

If you’ve read this far, please don’t hesitate to comment if you have additional questions, clarifications, or feedback!

If you think grants above the $1M tier are valuable, please consider donating to us! If we do not receive more money soon, we will have to raise our bar again, resulting in what is (by my lights) a significant misallocation of longtermist resources.


This post was written by Linch Zhang and Caleb Parikh, with considerable help from Daniel Eth. Thanks to Lizka Vaintrob, Nuño Sempere, Amber Dawn and GPT-4 for helpful feedback and suggestions.

Appendix A: Donation smoothing/saving

The LTFF saves money/smooths donations on the timescale of months (e.g. if we have unexpectedly high donations in August, we might want to ‘smooth out’ our grantmaking so that we award similar amounts in September, October, etc). However, we generally do not attempt to smooth donations on the timescale of years. That is, if we receive an unexpectedly high windfall in 2023, we would not by default plan to “save up” donations for future years. Instead, we may aim both to more aggressively solicit grant applications, and also to lower the bar for funding. Similarly, if we receive unexpectedly little in donations, we will likely raise the bar for funding and/or refer grant applicants to other donors. 

This is in contrast to Open Philanthropy, which tries to optimize for making the best grants over the timescale of decades, and the Patient Philanthropy Fund, which tries to optimize for making the best grants over the timescale of centuries. 

There are several considerations in favor of not attempting to do too much donation smoothing:

  • We are not experts on the question of longtermist funding over time, and aren’t trying to be. This question is arguably best decided by the surrounding community and ecosystem, with donors choosing to donate to us when they believe (roughly) that we are a better use of marginal funds than other options, at whatever level of desired spending.
  • We don’t and probably can’t make aggressive investment choices. Having unused donation money in our bank accounts is likely costly compared to individuals and large foundations holding the money, who can opt to make better and riskier investment choices than we’re able to. 
  • The donor community has in the past been opposed to funds not disbursing money. E.g. “slow grant disbursement” was a recurring problem on CEA’s Mistakes page.

However, this policy is not set in stone. If donors or the community have strong opinions, we welcome engagement here!

  1. ^

     See this appendix in the payout report for how we set grant and stipend amounts.

  2. ^

    Note that this grant would be controversial within the fund at a $100k funding bar, as some fund managers and advisers would say we shouldn’t fund any biosecurity grants at that level of funding.


Would donors/other members of the community find it helpful if I were to repeat this process and write such a post for EAIF? Note that as I do not do grantmaking for EAIF, my attempts at doing the analogous "modifying and blending grants to form representative fictitious grants" might be missing some key nuances.

"Agree"-vote if helpful relative to the counterfactual, "Disagree" if not helpful; assume my nearest counterfactual is writing some other posts drawn from the same distribution as my past posts or comments, particularly LTFF-related ones.

This is hard to answer without knowing the exact counterfactual. I'd value you going deeper on topics you have the most information on, and my guess is EAIF is not your comparative advantage; but if there isn't a specific other post you're excited about, I'd much rather have EAIF than nothing. I thought it might be helpful to give ideas of posts I'd be interested in from you, specifically:

  • what do you want to see in the impact or theories of change section? (related)
  • the practicalities of living off of grants as an independent. do people ask for enough? how bad is it if you ask for too much? how do you structure work to avoid gaps between grants? 
  • how do you evaluate results from independent researchers?
  • how do you evaluate the success of grants for upskilling or exploration?
  • how do you evaluate work from other kinds of independent grant recipients (AXRP and Rob Miles's YouTube channel come to mind, but probably there are more grants that are even harder to categorize)? 
  • what do you regret not funding?

Writing such a post for EAIF (even a 5x shorter version) would help me get an idea of what the bar is for a community project to be ~worthwhile, and especially to easily say "no, this isn't worthwhile".

I'm saying this because even this LTFF post updated my opinion about that.

Caleb and Linch randomly selected grants from each group.

I think your procedure to select the grants was great. However, would it become even better by making the probability of each grant being selected proportional to its size? In theory, donors should care about the impact per dollar (not impact per grant), which justifies weighting by grant size. This may matter because there is significant variation in grant size. The 5th and 95th percentile amounts granted by LTFF are $2.00k and $169k, so, especially if one is picking just a few grants as you did (as opposed to dozens of grants), there is a risk of picking unrepresentatively small grants.

Thank you! This is a good point; your analysis makes a lot of sense to me.

I really liked this post, and specifically the framing of "what will a marginal donation be" (as opposed to "what's the best thing we ever did" or so). 


[ramblings from my subjective view point of EA-software]

  1. It reminds me of how developers consider joining an EA org and think, "well, seems like all your stuff is already built, no?" I think writing about the marginal things the org wants to build and needs help with would go a long way for many job posts
  2. This somewhat updated me towards "it's a bad idea to fund me, my work isn't as important as all this" and also towards "maybe I better do some E2G so you can fund more things like this"

Thanks for sharing! I confess I had been wondering about moving my donations elsewhere due to lack of knowledge about LTFF's processes, but this and other recent posts will probably imply that I will continue donating to LTFF in the near future.

We are committed to improving the long-term trajectory of civilization, with a particular focus on reducing global catastrophic risks.

Which definition of global catastrophic risks are you considering? I think global catastrophes were originally defined by Nick Bostrom and Milan Ćirković as "events that cause roughly 10 million deaths or $10 trillion in damages or more". Maybe it would be better to be explicit about the severity of the events in the website?

Note that this grant [in bio] would be controversial within the fund at a $100k funding bar, as some fund managers and advisers would say we shouldn’t fund any biosecurity grants at that level of funding.

I would be curious to know how you compare grants in different areas. For example, could you share which fraction of grants in each area (e.g. AI, bio, nuclear, or other) are successful? I understand you consider AI and bio to be the most pressing areas (emphasis mine):

The Long-Term Future Fund aims to positively influence the long-term trajectory of civilization by making grants that address global catastrophic risks, especially potential risks from advanced artificial intelligence and pandemics. In addition, we seek to promote, implement, and advocate for longtermist ideas, and to otherwise increase the likelihood that future generations will flourish.

You also only mentioned grants in AI and bio in the OP. However, even if applications in other areas were as likely as those in AI to be funded, they would still not be (randomly) selected to be in the OP, because applications outside of AI and bio only represent a small fraction of the total.

Which definition of global catastrophic risks are you considering? I think global catastrophes were originally defined by Nick Bostrom and Milan Ćirković as "events that cause roughly 10 million deaths or $10 trillion in damages or more". Maybe it would be better to be explicit about the severity of the events in the website?

I don’t think that as an organisation we have a specific definition in mind. I think it’s still worth saying we are most focused on reducing global catastrophic risks, as opposed to pursuing other goals like instilling care for future generations as a societal value or promoting economic growth.

In practice we direct funding towards activities that we think reduce catastrophic risks, but are most focused on existential risks.

On mobile, "roughly ⅕ -⅙ of our past spending rate" doesn't display correctly -- that's one fifth to one sixth for my fellow mobile users.

[Edit for Forum management: these images displayed as white Xs in a black box on my Android / Samsung S22; I saved a screenshot if helpful. It looks like they have been edited to 1/5 and 1/6 in ordinary characters now.]

Thank you! I believe it should be fixed now! (I changed ⅕ to 1/5).

This change also helps with text-to-speech

It’s working on mobile for me (iPhone - safari)

Thanks for the post. Until now, I used to learn about what LTFF funds by manually reading through its grants database. It's helpful to know what the funding bar looks like and how it would change with additional funding.

I think increased transparency is helpful because it's valuable for people to have some idea of how likely their applications are to be funded if they're thinking of making major life decisions (e.g. relocating) based on them. More transparency is also valuable for funders who want to know how their money would be used.

The PI scores the application from -5 to +5. 

Does the zero point have any specific meaning? Specifically, does a negative score convey a belief that the proposal has net-negative EV?

In principle, the zero point is supposed to signify equivalent to burning the money, and negative signifies net-negative EV (neglecting financial cost of the grant). In practice, speaking personally, if I weakly think a grant is a bit net negative, but it's not particularly worrying nor something I feel confident about, I usually give it a score that's well below the funding threshold, but still positive (so that if other grantmakers are more confidently in favor of the grant, they can more likely outvote me here). If I were to confidently believe that a grant was of zero net value, I would give it a vote of zero.

I personally give a negative value and (when I have low certainty) flag that I'm willing to change/delete my votes if other people feel strongly, so as to not unduly tank the results. I think LTFF briefly experimented with weighted voting in the past but we've moved against it (I forgot why). 

I really appreciate your transparency about how you allocate funding! Thank you for this post!

I'm curating this. Along with other commenters, I really like the focus on the marginal grant. If I were to write a post that would help donors understand the impact of their donations to the Long Term Future Fund, it would look a lot like this. 

While I'm sympathetic to the reasoning, I was sad to hear that EA Funds would stop sharing publicly all its grants. To my mind, this post goes a long way towards remedying that, and makes me much more likely to recommend the Long Term Future Fund to others. (That strikes me as a surprisingly large update, but I stand by it.)

Thanks a bunch for writing this!

Thanks for curating it :)

Thanks for the post!

A related question: Is LTFF more likely to fund a small AI safety research group than to fund individual independent AI Safety researchers?

So could we see a scenario where, if person A, B or C apply individually for an independent research grant, they might not meet your funding bar. But where, if similarly impressive people with a similarly good research agenda applied as a research group, they would be a more attractive funding opportunity for you?

(Giving my own professional opinion, not speaking for anybody else/employers) This seems unlikely to me, unless there's a different substantive reason to believe that the research group is better for either research qua research or upskilling. Eg having access to better mentors, or demonstrated evidence that the group is better at keeping each other on track. 

Plausibly I'm wrong here. Being an independent researcher kinda sucks in a variety of ways, and I can imagine having a group to work with to be good even if you can't point to a specific reason. But I don't currently think we have a bias towards groups and against independent researchers, and if anything I'd guess our revealed preferences are a bit in the other direction. 
