The Long-Term Future Fund (LTFF) is one of the EA Funds. Between Friday Dec 4th and Monday Dec 7th, we'll be available to answer any questions you have about the fund – we look forward to hearing from all of you!

The LTFF aims to positively influence the long-term trajectory of civilization by making grants that address global catastrophic risks, especially potential risks from advanced artificial intelligence and pandemics. In addition, we seek to promote, implement, and advocate for longtermist ideas, and to otherwise increase the likelihood that future generations will flourish.

Grant recommendations are made by a team of volunteer Fund Managers: Matt Wage, Helen Toner, Oliver Habryka, Adam Gleave and Asya Bergal. We are also fortunate to be advised by Nick Beckstead and Nicole Ross. You can read our bios here. Jonas Vollmer, who is heading EA Funds, also provides occasional advice to the Fund.

You can read about how we choose grants here. Our previous grant decisions and rationale are described in our payout reports. We'd welcome discussion and questions regarding our grant decisions, but to keep discussion in one place, please post comments related to our most recent grant round in this post.

Please ask any questions you like about the fund, including but not limited to:

  • Our grant evaluation process.
  • Areas we are excited about funding.
  • Coordination between donors.
  • Our future plans.
  • Any uncertainties or complaints you have about the fund. (You can also e-mail us at ealongtermfuture[at]gmail[dot]com for anything that should remain confidential.)

We'd also welcome more free-form discussion, such as:

  • What should the goals of the fund be?
  • What is the comparative advantage of the fund compared to other donors?
  • Why would you/would you not donate to the fund?
  • What, if any, goals should the fund have other than making high-impact grants? Examples could include: legibility to donors; holding grantees accountable; setting incentives; identifying and training grant-making talent.
  • How would you like the fund to communicate with donors?

We look forward to hearing your questions and ideas!



I am wondering how the fund managers are thinking more long-term about encouraging more independent researchers and projects to come into existence and stay in existence. So far as I can tell, there hasn't been much renewed granting to independent individuals and projects (i.e. granting for a second or third time to grantees who have previously already received an LTFF grant). Do most grantees have a solid plan for securing funding after their LTFF grant money runs out, and if so what do they tend to do?

I think LTFF is doing something valuable by giving people the freedom to not "sell out" to more traditional or mass-appeal funding sources (e.g. academia, established orgs, Patreon). I'm worried about a situation where receiving a grant from LTFF isn't enough to be sustainable, so that people go back to doing more "safe" things like working in academia or at an established org.

Any thoughts on this topic?

The LTFF is happy to renew grants so long as the applicant has been making strong progress and we believe working independently continues to be the best option for them. Examples of renewals in this round include Robert Miles, who we first funded in April 2019, and Joe Collman, who we funded in November 2019. In particular, we'd be happy to be the #1 funding source of a new EA org for several years (subject to the budget constraints Oliver mentions in his reply).

Many of the grants we make to individuals are for career transitions, such as someone retraining from one research field to another, or for one-off projects. So I would expect most grants to not be renewals. That said, the bar for renewals does tend to be higher. This is because we pursue a hits-based giving approach, so are willing to fund projects that are likely not to work out -- but of course will not want to renew the grant if it is clearly not working.

I think being a risk-tolerant funder is particularly valuable since most employers are, quite rightly, risk-averse. Firing people tends to be harmful to morale; internships or probation periods can help, but take a lot of supervisory time. This means people who might be... (read more)

Linda Linsefors:
My impression is that it is not possible for everyone who wants to help with the long-term future to get hired by an org, for the simple reason that there are not enough openings at those orgs. At least in AI Safety, all entry-level jobs are very competitive, meaning that not getting in is not a strong signal that one could not have done well there. Do you disagree with this?
I can't respond for Adam, but just wanted to say that I personally agree with you, which is one of the reasons I'm currently excited about funding independent work.
Thanks for picking up the thread here, Asya! I think I largely agree with this, especially about the competitiveness in this space. For example, with AI PhD applications, I often see extremely talented people get rejected who I'm sure would have got an offer a few years ago. I'm pretty happy to see the LTFF offering effectively "bridge" funding for people who don't quite meet the hiring bar yet, but I think are likely to in the next few years.

However, I'd be hesitant about heading towards a large fraction of people working independently long-term. I think there are huge advantages from the structure and mentorship an org can provide. If orgs aren't scaling up fast enough, then I'd prefer to focus on trying to help speed that up.

The main way I could see myself getting more excited about long-term independent research is if we saw flourishing communities forming amongst independent researchers. Efforts like LessWrong and the Alignment Forum help in terms of providing infrastructure. But right now it still seems much worse than working for an org, especially if you want to go down any of the more traditional career paths later. But I'd love to be proven wrong here.
Linda Linsefors:
I claim we have proof of concept. The people who started the existing AI Safety research orgs did not have AI Safety mentors. Current independent researchers have more support than they had. In a way, an org is just a crystallized collaboration of previously independent researchers.

I think that there are some PR reasons why it would be good if most AI Safety researchers were part of academia or other respectable orgs (e.g. DeepMind). But I also think it is good to have a minority of researchers who are disconnected from the particular pressures of that environment. However, being part of academia is not the same as being part of an AI Safety org. MIRI people are not part of academia, and someone doing AI Safety research as part of their PhD in a "normal" (not AI Safety focused) PhD program is sort of an independent researcher.

We are working on that. I'm not optimistic about current orgs keeping up with the growth of the field, and I don't think it is healthy for the field to be too competitive, since this will lead to Goodharting on career incentives. But I do think a looser structure, built on personal connections rather than formal org employment, can grow in a much more flexible way, and we are experimenting with various methods to make this happen.

Yeah, I am also pretty worried about this. I don't think we've figured out a great solution to this yet. 

I think we don't really have sufficient capacity to evaluate organizations on an ongoing basis and provide good accountability. Like, if a new organization were to be funded by us and then grow to a budget of $1M a year, I don't feel like we have the capacity to evaluate their output and impact sufficiently well to justify giving them $1M each year (or even just $500k). 

Our current evaluation process feels pretty good for smaller projects, and for granting to established organizations that have other active evaluators looking into them whom we can talk to, but it doesn't feel very well-suited to larger organizations that don't have existing evaluations done on them (there is a lot of due-diligence work to be done there that I think requires higher staff capacity than we have). 

I also think the general process of the LTFF specializing into something more like venture funding, with other funders stepping in for more established organizations, feels pretty good to me. I do think the current process has a lot of unnecessary uncertainty and risk in it, and I would like to ... (read more)

I agree with @Habryka that our current process is relatively lightweight which is good for small grants but doesn't provide adequate accountability for large grants. I think I'm more optimistic about the LTFF being able to grow into this role. There's a reasonable number of people who we might be excited about working as fund managers -- the main thing that's held us back from growing the team is the cost of coordination overhead as you add more individuals. But we could potentially split the fund into two sub-teams that specialize in smaller and larger grants (with different evaluation process), or even create a separate fund in EA Funds that focuses on more established organisations. Nothing certain yet, but it's a problem we're interested in addressing.

Ah yeah, I also think that if the opportunity presents itself we could grow into this role a good amount. Though on the margin I think it's more likely we are going to invest even more into early-stage expertise and maybe do more active early-stage grantmaking.
Just to add a comment with regards to sustainable funding for independent researchers. There haven't previously been many options available for this; however, there are a growing number of virtual research institutes through which affiliated researchers can apply to academic funding agencies. The virtual institute can then administer the grant for a researcher (usually for much lower overheads than a traditional institution), while they effectively still do independent work. The Ronin Institute administers funding from US granters, and I am a Board member at IGDORE, which can receive funding from some European granters. That said, it may still be quite difficult for individuals to secure academic funding without having some traditional academic credentials (PhD, publications, etc.).
Linda Linsefors:
What do you mean by "There haven't previously been many options available"? What is stopping you from just giving people money? Why do you need an institute as a middleman?
Steven Byrnes:
My understanding is that (1) to deal with the paperwork etc. for grants from governments or government-like bureaucratic institutions, you need to be part of an institution that's done it before; (2) if the grantor is a nonprofit, they have regulations about how they can use their money while maintaining nonprofit status, and it's very easy for them to forward the money to a different nonprofit institution, but may be difficult or impossible for them to forward the money to an individual. If it is possible to just get a check as an individual, I imagine that that's the best option. Unless there are other considerations I don't know about. Btw Theiss is another US organization in this space.
One other benefit of a virtual research institute is that they can act as formal employers for independent researchers, which may be desirable for things like receiving healthcare coverage or welfare benefits.

Thanks for mentioning Theiss, I didn't know of them before. Their website doesn't look so active now, but it's good to know about the history of the independent research scene.
Steven Byrnes:
Theiss was very much active as of December 2020. They've just been recruiting so successfully through word-of-mouth that they haven't gotten around to updating the website. I don't think healthcare and taxes undermine what I said, at least not for me personally. For healthcare, individuals can buy health insurance too. For taxes, self-employed people need to pay self-employment tax, but employees and employers both have to pay payroll tax which adds up to a similar amount, and then you lose the QBI deduction (this is all USA-specific), so I think you come out behind even before you account for institutional overhead, and certainly after. Or at least that's what I found when I ran the numbers for me personally. It may be dependent on income bracket or country so I don't want to over-generalize... That's all assuming that the goal is to minimize the amount of grant money you're asking for, while holding fixed after-tax take-home pay. If your goal is to minimize hassle, for example, and you can just apply for a bit more money to compensate, then by all means join an institution, and avoid the hassle of having to research health care plans and self-employment tax deductions and so on. I could be wrong or misunderstanding things, to be clear. I recently tried to figure this out for my own project but might have messed up, and as I mentioned, different income brackets and regions may differ. Happy to talk more. :-)
Jonas Vollmer:

In the April 2020 payout report, Oliver Habryka wrote:

I’ve also decided to reduce my time investment in the Long-Term Future Fund since I’ve become less excited about the value that the fund can provide at the margin (for a variety of reasons, which I also hope to have time to expand on at some point).

I'm curious to hear more about this (either from Oliver or any of the other fund managers).

Regardless of whatever happens, I've benefited greatly from all the effort you've put in your public writing on the fund Oliver. 

Thank you! 

I am planning to respond to this in more depth, but it might take me a few days longer, since I want to do a good job with it. So please forgive me if I don't get around to this before the end of the AMA.

Any update on this?

I wrote a long rant that I shared internally that was pretty far from publishable, but then a lot of things changed, and I tried editing it for a bit, but more things kept changing. Enough that at some point I gave up on trying to edit my document to keep up with the new changes, and decided to instead just wait until things settle down, so I can write something that isn't going to be super confusing.

Sorry for the confusion here. At any given point it seemed like things would settle down more so I would have a more consistent opinion. 

Overall, a lot of the changes have been great, and I am currently finding myself more excited about the LTFF than I have in a long time. But a bunch of decisions are still to be made, so I will hold off on writing a bit longer. Sorry again for the delay. 

If you had $1B, and you weren't allowed to give it to other grantmakers or fund prioritisation research, where might you allocate it?

$1B is a lot. It also gets really hard if I don't get to distribute it to other grantmakers. Here are some really random guesses. Please don't hold me to this, I have thought about this topic some, but not under these specific constraints, so some of my ideas will probably be dumb.

My guess is I would identify the top 20 people who seem to be doing the best work around long-term-future stuff, and give each of them at least $10M, which would allow each of them to reliably build an exoskeleton around themselves and increase their output. 

My guess is that I would then invest a good chunk more into scaling up LessWrong and the EA Forum, and make it so that I could distribute funds to researchers working primarily on those forums (while building a system for peer evaluation to keep researchers accountable). My guess is this could consume another $100M over the next 10 years or so. 

I expect it would take me at least a decade to distribute that much money. I would definitely continue taking in applications for organizations and projects from people and kind of just straightforwardly scale up LTFF spending of the same type, which I think could take another $40M over the next decade.

I think I... (read more)

I'm really surprised by this; I think things like the Future of Life award are good, but if I got $1B I would definitely not think about spending potentially $100M on similar awards as an EA endeavor. Can you say more about this? Why do you think this is so valuable?
It seems to me that one of the biggest problems with the world is that only a small fraction of people who do a really large amount of good get much rewarded for it. It seems likely that this prevents many people from pursuing doing much good with their lives. My favorite way of solving this kind of issue is with Impact Certificates, which have a decent amount of writing on them, and you can think of the above as just buying about $100M of impact certificates for the relevant people (in practice I expect that if you get a good impact certificate market going, which is a big if, you could productively spend substantially more than $1B).

The cop-out answer of course is to say we'd grow the fund team or, if that isn't an option, we'd all start working full-time on the LTFF and spend a lot more time thinking about it.

If there's some eccentric billionaire who will only give away their money right now to whatever I personally recommend, then off the top of my head:

  1. For any long-termist org that (a) I'd usually want to fund at a small scale and (b) whose leadership's judgement I'd trust, I'd give them as much money as they can plausibly make use of in the next 10 years. I expect that even organisations that are not usually considered funding constrained could probably produce 10-20% extra impact if they invested twice as much in their staff (let them rent really close to the office, pay for PAs or other assistants to save time, etc.).

    I also think there can be value in having an endowment: it lets the organisation make longer-term plans, can raise the organisation's prestige, and some things (like creating a professorship) often require endowments.

    However, I do think there are some cases where it can be negative: some organisations benefit a lot from the accountability of donors, and being too well-funded can attract the wrong... (read more)
What's your all-things-considered view for the probability that the first transformative AI (defined by your lights) will be developed by a company that, as of December 2020, either a) does not exist or b) has not gone through Series A? (Don't take too much time on this question, I just want to see a gut check plus a few sentences if possible.)

About 40%. This is including startups that later get acquired, but the parent company would not have been the first to develop transformative AI if the acquisition had not taken place. I think this is probably my modal prediction: the big tech companies are effectively themselves huge VCs, and their infrastructure provides a comparative advantage over a startup trying to do it entirely solo.

I think I put around 40% on it being a company that does already exist, and 20% on "other" (academia, national labs, etc).

Conditioning on transformative AI being developed in the next 20 years my probability for a new company developing it is a lot lower -- maybe 20%? So part of this is just me not expecting transformative AI particularly soon, and tech company half-life being plausibly quite short. Google is only 21 years old!

Thanks a lot, really appreciate your thoughts here!

What processes do you have for monitoring the outcome/impact of grants, especially grants to individuals?

As part of CEA's due diligence process, all grantees must submit progress reports documenting how they've spent their money. If a grantee applies for renewal, we'll perform a detailed evaluation of their past work. Additionally, we informally look back at past grants, focusing on grants that were controversial at the time, or seem to have been particularly good or bad.

I’d like us to be more systematic in our grant evaluation, and this is something we're discussing. One problem is that many of the grants we make are quite small: so it just isn't cost-effective for us to evaluate all our grants in detail. Because of this, any more detailed evaluation we perform would have to be on a subset of grants.

I view there being two main benefits of evaluation: 1) improving future grant decisions; 2) holding the fund accountable. Point 1) would suggest choosing grants we expect to be particularly informative: for example, those where fund managers disagreed internally, or those which we were particularly excited about and would like to replicate. Point 2) would suggest focusing on grants that were controversial amongst donors, or where there were potential conflicts of interest.

It's important t... (read more)

Interesting question and answer! Do the LTFF fund managers make forecasts about potential outcomes of grants? And/or do you write down in advance what sort of proxies you'd want to see from a grant after x amount of time? (E.g., what you'd want to see to feel that this had been a big success and that similar grant applications should be viewed (even) more positively in future, or that it would be worth renewing the grant if the grantee applied again.) One reason that first question came to mind was that I previously read a 2016 Open Phil post that states: (I don't know whether, how, and how much Open Phil and GiveWell still do things like this.)
We haven't historically done this. As someone who has tried pretty hard to incorporate forecasting into my work at LessWrong, my sense is that it actually takes a lot of time until you can get a group of 5 relatively disagreeable people to agree on an operationalization that makes sense to everyone, so this isn't really feasible to do for lots of grants. I've made forecasts for LessWrong, and usually creating a set of forecasts that actually feels useful in assessing our performance takes me at least 5-10 hours. It's possible that other people are much better at this than I am, but this makes me kind of hesitant to use classical forecasting methods, at least, as part of LTFF evaluation.
Thanks for that answer. It seems plausible to me that a useful version of forecasting grant outcomes would be too time-consuming to be worthwhile. (I don't really have a strong stance on the matter currently.) And your experience with useful forecasting for LessWrong work being very time-consuming definitely seems like relevant data. But this part of your answer confused me: Naively, I'd have thought that, if that was a major obstacle, you could just have a bunch of separate operationalisations, and people can forecast on whichever ones they want to forecast on. If, later, some or all operationalisations do indeed seem to have been too flawed for it to be useful to compare reality to them, assess calibration, etc., you could just not do those things for those operationalisations/that grant.  (Note that I'm not necessarily imagining these forecasts being made public in advance or afterwards. They could be engaged in internally to the extent that makes sense - sometimes ignoring them if that seems appropriate in a given case.) Is there a reason I'm missing for why this doesn't work?  Or was the point about difficulty of agreeing on an operationalisation really meant just as evidence of how useful operationalisations are hard to generate, as opposed to the disagreement itself being the obstacle?
I think the most lightweight-but-still-useful forecasting operationalization I'd be excited about is something like   This gets at whether people think it's a good idea ex post, and also (if people are well-calibrated) can quantify whether people are insufficiently or too risk/ambiguity-averse, in the classic sense of the term.
Jonas Vollmer:
This seems helpful to assess fund managers' calibration and improve their own thinking and decision-making. It's less likely to be useful for communicating their views transparently to one another, or to the community, and it's susceptible to post-hoc rationalization. I'd prefer an oracle external to the fund, like "12 months from now, will X have a ≥7/10 excitement about this grant on a 1-10 scale?", where X is a person trusted by the fund managers who will likely know about the project anyway, such that the cost to resolve the forecast is small. I plan to encourage the funds to experiment with something like this going forward.
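As an aside on how forecasts like these could later be evaluated: one standard, lightweight option for checking calibration on resolved yes/no questions is the Brier score (mean squared error of probabilistic forecasts against binary outcomes). A minimal sketch follows; the forecast numbers are made up purely for illustration, not real LTFF forecasts:

```python
def brier_score(forecasts):
    """Mean squared error between probabilities and binary outcomes.

    0.0 is a perfect score; always guessing 50% scores 0.25.
    """
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)


# Hypothetical (probability assigned, outcome) pairs for questions like
# "12 months from now, will X rate this grant >= 7/10?" -- illustrative only.
forecasts = [(0.8, 1), (0.6, 0), (0.9, 1), (0.3, 0)]
print(brier_score(forecasts))  # -> 0.125
```

Lower is better; comparing a forecaster's score against the 0.25 always-guess-50% baseline gives a quick calibration check at very low resolution cost.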
I agree that your proposed operationalization is better for the stated goals, assuming similar levels of overhead.
Just to make sure I'm understanding, are you also indicating that the LTFF doesn't write down in advance what sort of proxies you'd want to see from this grant after x amount of time? And that you think the same challenges with doing useful forecasting for your LessWrong work would also apply to that? These two things (forecasts and proxies) definitely seem related, and both would involve challenges in operationalising things. But they also seem meaningfully different. I'd also think that, in evaluating a grant, I might find it useful to partly think in terms of "What would I like to see from this grantee x months/years from now? What sorts of outputs or outcomes would make me update more in favour of renewing this grant - if that's requested - and making similar grants in future?"
We've definitely written informally things like "this is what would convince me that this grant was a good idea", but we don't have a more formalized process for writing down specific objective operationalizations that we all forecast on.
Jonas Vollmer:
I'm personally actually pretty excited about trying to make some quick forecasts for a significant fraction (say, half) of the grants that we actually make, but this is something that's on my list to discuss at some point with the LTFF. I mostly agree with the issues that Habryka mentions, though.
To add to Habryka's response: we do give each grant a quantitative score (on a scale from -5 to +5, where 0 is zero impact). This obviously isn't as helpful as a detailed probabilistic forecast, but I think it does give a lot of the value. For example, one question I'd like to answer from retrospective evaluation is whether we should be more consensus-driven or fund anything that at least one manager is excited about. We could address this by scrutinizing past grants that had a high variance in scores between managers.

I think it might make sense to start doing forecasting for some of our larger grants (where we're willing to invest more time), and when the key uncertainties are easy to operationalize.
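As a sketch of how "high variance in scores between managers" could be surfaced mechanically from such -5..+5 scores (the grant names and score values here are hypothetical, not real LTFF data):

```python
from statistics import pvariance

# Hypothetical per-manager scores on the -5..+5 scale (not real LTFF data).
scores = {
    "grant_a": [4, 3, 4, 5],   # broad agreement, high enthusiasm
    "grant_b": [-2, 5, 0, 4],  # strong disagreement between managers
    "grant_c": [1, 0, 1, 1],   # consensus, modest expected impact
}


def flag_controversial(scores, var_threshold=4.0):
    """Return grants whose manager scores disagree most, highest variance first."""
    return sorted(
        (name for name, s in scores.items() if pvariance(s) >= var_threshold),
        key=lambda name: -pvariance(scores[name]),
    )


print(flag_controversial(scores))  # -> ['grant_b']
```

The flagged grants would then be natural candidates for the kind of retrospective scrutiny described above.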
Thank you!

I notice that all but one of the November 2020 grants were given to individuals as opposed to organisations. What is the reason for this?

To clarify, I'm certainly not criticising – I guess it makes quite a bit of sense, as individuals are less likely than organisations to be able to get funding from elsewhere, so funding them may be better at the margin. However, I would still be interested to hear your reasoning.

I notice that the animal welfare fund gave exclusively to organisations rather than individuals in the most recent round. Why do you think there is this difference between LTFF and AWF? 

Speaking just for myself on why I tend to prefer the smaller individual grants: 

Currently when I look at the funding landscape, it seems that without the LTFF there would be a pretty big hole in available funding for projects to get off the ground and for individuals to explore interesting new projects or enter new domains. Open Phil very rarely makes grants smaller than ~$300k, and many donors don't really like giving to individuals and early-stage organizations because they often lack established charity status, which makes donations to them non-tax-deductible. 

CEA has set up infrastructure to allow tax-deductible grants to individuals and to organizations without charity status, and the fund team itself seems well-suited to evaluating individuals and early-stage organizations, since we all have pretty wide networks and can pretty quickly gather good references on individuals who are working on projects that don't yet have an established track record. 

I think in a world without Open Phil or the Survival and Flourishing Fund, much more of our funding would go to established organizations. 

Separately, I also think that I personally view a lot of the intellectual work to be done on... (read more)

Jack Malde:
Thanks for this detailed answer. I think that all makes a lot of sense. 

I largely agree with Habryka's comments above.

In terms of the contrast with the AWF in particular, I think the funding opportunities in the long-termist vs animal welfare spaces look quite different. One big difference is that interest in long-termist causes has exploded in the last decade. As a result, there's a lot of talent interested in the area, but there's limited organisational and mentorship capacity to absorb this talent. By contrast, the animal welfare space is more mature, so there's less need to strike out in an independent direction. While I'm not sure on this, there might also be a cultural factor -- if you're trying to perform advocacy, it seems useful to have an organisation brand behind you (even if it's just a one-person org). This seems much less important if you want to do research.

Tangentially, I see a lot of people debating whether EA is talent constrained, funding constrained, vetting constrained, etc. My view is that for most orgs, at least in the AI safety space, they can only grow by a relatively small (10-30%) rate per year while still providing adequate mentorship. This is talent constrained in the sense that having a larger applicant pool will help the ... (read more)

My view is that for most orgs, at least in the AI safety space, they can only grow by a relatively small (10-30%) rate per year while still providing adequate mentorship


This might be a small point, but while I would agree, I imagine that strategically there are some possible orgs that could grow more quickly and, by growing, could eventually dominate the funding. 

I think one thing that's going on is that right now due to funding constraints individuals are encouraged to create organizations that are efficient when small, as opposed to efficient when large. I've made this decision myself. Doing the latter would require a fair amount of trust that large funders would later be interested in it at that scale. Right now it seems like we only have one large funder, which makes things tricky. 

This is a good point, and I do think having multiple large funders would help with this. If the LTFF's budget grew enough I would be very interested in funding scalable interventions, but it doesn't seem like our comparative advantage now.

I do think possible growth rates vary a lot between fields. My hot take is that new research fields are particularly hard to grow quickly. The only successful ways I've seen of teaching people how to do research involve apprenticeship-style programs (PhDs, residency programs, learning from a team of more experienced researchers, etc). You can optimize this to allow senior researchers to mentor more people (e.g. lots of peer advice assistants to free up senior staff time, etc), but that seems unlikely to yield more than a 2x increase in growth rate. Most cases where orgs have scaled up successfully have drawn on a lot of existing talent. Tech startups can grow quickly, but they don't teach each new hire how to program from scratch.

So I'd love to see scalable ways to get existing researchers to work on priority areas like AI safety, biosecurity, etc. It can be surprisingly hard to change what researchers work on, though. Researchers tend to be intrinsically motivated, so right now the best way I know is to just do good technical work to show that problems exist (and are tractable to solve), combined with clear communication. Funding can help here a bit: make sure the people doing the good technical work are not funding constrained.

One other approach might be to build better marketing: DeepMind, OpenAI, etc. are great at getting their papers a lot of attention. If we could promote relevant technical work, that might help draw more researchers to these problems. Although a lot of people in academia really hate these companies' self-promotion, so it could backfire if done badly.

The other way to scale up is to get people to skill up in areas with more scalable mentorship: e.g. just work on any AI research topic for your PhD where you can
Ozzie Gooen:
I agree that research organizations of the type that we see are particularly difficult to grow quickly. My point is that we could theoretically focus more on other kinds of organizations that are more scalable. I could imagine more scalable engineering-heavy or marketing-heavy paths to impact on these problems: for example, setting up an engineering/data organization to manage information and metrics about bio risks. These organizations might have rather high upfront (and marginal) costs, but are ones where I could see investing $10-100mil/year if we wanted.

Right now it seems like our solution to most problems is "try to solve it with experienced researchers", which is a tool we have a strong comparative advantage in, but not the only tool in the possible toolbox. It is a tool that's very hard to scale, as you note (I know of almost no organizations that have done this well).

Separately, I just want to flag that I think I agree, but also feel pretty bad about this. I get the impression that for AI many of the grad school programs are decent enough, but for other fields (philosophy, some of econ, things bio-related), grad school can be quite long-winded, demotivating, occasionally the cause of serious long-term psychological problems, and often distracting or actively harmful for alignment. It definitely feels like we should eventually be able to do better, but it might be a while.
Just want to say I agree with both Habryka's comments and Adam's take that part of what the LTFF is doing is bridging the gap while orgs scale up (and increase in number) and don't have the capacity to absorb talent.
Jack Malde:
Thanks for this reply, makes a lot of sense!
Jonas Vollmer:
I agree with Habryka and Adam. Regarding the LTFF (Long-Term Future Fund) / AWF (Animal Welfare Fund) comparison in particular, I'd add the following:

  • The global longtermist community is much smaller than the global animal rights community, which means that the animal welfare space has a lot more existing organizations, and people trying to start organizations, that can be funded.
  • Longtermist cause areas typically involve a lot more research, which often implies funding individual researchers, whereas animal welfare work is typically more implementation-oriented.
Jack Malde:
Also makes sense, thanks.

What do you think has been the biggest mistake by the LTFF (at least that you can say publicly)?

(I’m not a Fund manager, but I’ve previously served as an advisor to the fund and now run EA Funds, which involves advising the fund.)

In addition to what Adam mentions, two further points come to mind:

1. I personally think some of the April 2019 grants weren’t good, and I thought that some (but not all) of the critiques the LTFF received from the community were correct. (I can’t get more specific here – I don’t want to make negative public statements about specific grants, as this might have negative consequences for grant recipients.) The LTFF has since implemented many improvements that I think will prevent such mistakes from occurring again.

2. I think we could have communicated better around conflicts of interest. I know of some 2019 grants that donors perceived to be subject to a conflict of interest, but where there actually was no conflict, or it was dealt with appropriately. (I can also recall one case where I think a conflict of interest may not have been dealt with well, but our improved policies and practices will prevent a similar issue from occurring again.) I think we’re now dealing appropriately with COIs (not in the sense that we refrain from any grants with a potential COI, but that we have appropriate safeguards in place that prevent the COI from impairing the decision). I would like to publish an updated policy once I get to it.

Historically I think the LTFF's biggest issue has been insufficiently clear messaging, especially for new donors. For example, we received feedback from numerous donors in our recent survey that they were disappointed we weren't funding interventions on climate change. We've received similar feedback from donors surprised by the number of AI-related grants we make. Regardless of whether or not the fund should change the balance of cause areas we fund, it's important that donors have clear expectations regarding how their money will be used.

We've edited the fund page to make our focus areas more explicit, and EA Funds also added the Founders Pledge Climate Change Fund for donors who want to focus on that area (and Jonas emailed donors who made this complaint, encouraging them to switch their donations to the climate change fund). I hope this will help clarify things, but we'll have to stay attentive to donor feedback, both via things like this AMA and our donor survey, so that we can proactively correct any misconceptions.

Another issue I think we have is that we currently lack the capacity to be more proactively engaged with our grantees. I'd like us to do this for around 10% of our grant appli... (read more)

I agree unclear messaging has been a big problem for the LTFF, and I’m glad to see the EA Funds team being responsive to feedback around this. However, the updated messaging on the fund page still looks extremely unclear, and I’m surprised you think it will clear up the misunderstandings donors have. It would probably clear up most of the confusion if donors saw the clear articulation of the LTFF’s historical and forward-looking priorities that is already on the fund page.

The problem is that this text is buried in the 6th subsection of the 6th section of the page. So people have to read through ~1500 words, the equivalent of three single-spaced typed pages, to get an accurate description of how the fund is managed. This information should be in the first paragraph (and I believe that was the case at one point).

Compounding this problem, aside from that one sentence the fund page (even after it has been edited for clarity) makes it sound like AI and pandemics are prioritized similarly, and not that far above other LT cause areas. I believe the LTFF has only made a few grants related to pandemics, and would guess that AI has received at least 10 times as much funding. (Aside: it’s frustrating that there’s not an easy way to see all grants categorized in a spreadsheet, so that I could pull the actual numbers without going through each grant report and hand-entering and classifying each grant.)

In addition to clearly communicating that the fund prioritizes AI, I would like to see the fund page (and other communications) explain why that’s the case. What are the main arguments informing the decision? Did the fund managers decide this? Did whoever selected the fund managers (almost all of whom have AI backgrounds) decide this? Under what conditions would the LTFF team expect this prioritization to change? The LTFF has done a fantastic job providing transparency into the rationale behind speci... (read more)

The very first sentence on that page reads (emphasis mine):

The Long-Term Future Fund aims to positively influence the long-term trajectory of civilization by making grants that address global catastrophic risks, especially potential risks from advanced artificial intelligence and pandemics.

I personally think that's quite explicit about the focus of the LTFF, and am not sure how to improve it further. Perhaps you think we shouldn't mention pandemics in that sentence? Perhaps you think "especially" is not strong enough?

An important reason why we don't make more grants to prevent pandemics is that we receive only a few applications in that area. The page serves a dual purpose: it informs both applicants and donors. Emphasizing pandemics less could be good for donor transparency, but might further reduce the number of biorisk-related applications we receive. As Adam mentions here, he’s equally excited about AI safety and biosecurity at the margin, and I personally mostly agree with him on this.

Here's a spreadsheet with all EA Funds grants (though without categorization). I agree a proper grants database would be good to set up at some point; I have now added this to my list of things we mig... (read more)

I don’t think it’s appropriate to discuss pandemics in that first sentence. You’re saying the fund makes grants that “especially” address pandemics, and that doesn’t seem accurate. I looked at your spreadsheet (thank you!) and tried to do a quick classification. As best I can tell, AI has gotten over half the money the LTFF has granted, ~19x the amount granted to pandemics (5 grants for $114,000). Forecasting projects have received 2.5x as much money as pandemics, and rationality training has received >4x as much. So historically, pandemics aren’t even that high among non-AI priorities.

If pandemics will be on equal footing with AI going forward, then that first sentence would be okay. But if that’s the plan, why is the management team’s skillset so heavily tilted toward AI? I’m glad there’s interest in funding more biosecurity work going forward. I’m pretty skeptical that relying on applications is an effective way to source biosecurity proposals, though, since relatively few EAs work in that area (at least compared to AI) and big biosecurity funding opportunities (like Open Phil grantees the Johns Hopkins Center for Health Security and the Blue Ribbon Study Panel on Biodefense) probably aren’t going to be applying for LTFF grants.

Regarding the page’s dual purpose, I’d say informing donors is much more important than informing applicants: it’s a bad look to misinform people who are giving money based on your information. There’s been plenty of discussion (including that Open Phil report) on why AI is a priority, but there’s been very little explicit discussion of why AI should be prioritized relative to other causes like biosecurity.

Open Phil prioritizes both AI and biosecurity. For every dollar Open Phil has spent on biosecurity, it’s spent ~$1.50 on AI. If the LTFF had a similar proportion, I’d say the fund page’s messaging would be fine. But for every dollar the LTFF has spent on biosecurity, it’s spent ~$19 on AI. That degree of concentration warrants an e
Jonas Vollmer:
Thanks, I appreciate the detailed response, and agree with many of the points you made. I don't have the time to engage much more (and can't share everything), but we're working on improving several of these things.

Thanks Jonas, glad to hear there are some related improvements in the works. For whatever it’s worth, here’s an example of messaging that I think accurately captures what the fund has done, what it’s likely to do in the near term, and what it would ideally like to do:

The Long-Term Future Fund aims to positively influence the long-term trajectory of civilization by making grants that address global catastrophic risks or promote the adoption of longtermist thinking. While many grants so far have prioritized projects addressing risks posed by artificial intelligence (and the grantmakers expect to continue this at least in the short term), the Fund is open to funding, and welcomes applications from, a broader range of activities related to the long-term future.

Jack Malde:
I agree with you that that's pretty clear. Perhaps you could just have another sentence explaining that most grants historically have been AI-related because that's where you receive most of your applications?

On another note, I can't help but feel that "Global Catastrophic Risk Fund" would be a better name than "Long-Term Future Fund". This is because there are other ways to improve the long-term trajectory of civilisation than mitigating global catastrophic risks. Also, if you were to make this change, it may help distinguish the fund from the long-term investment fund that Founders Pledge may set up.
Jonas Vollmer:
Some of the LTFF grants (forecasting, long-term institutions, etc.) are broader than GCRs, and my guess is that at least some Fund managers are pretty excited about trajectory changes, so I'd personally think the current name seems more accurate.
Jack Malde:
Ah OK. The description below does make it sound like it's only about global catastrophic risks. Perhaps include the word 'predominantly' before the word "making"?
Jonas Vollmer:
The second sentence on that page (i.e. the sentence right after this one) reads: "Predominantly" would seem redundant with "in addition", so I'd prefer leaving it as-is.
Jack Malde:
OK, sorry, this is just me not doing my homework! That all seems reasonable.
Which of these two sentences, both from the fund page, do you think describes the fund more accurately?

  1. "The Long-Term Future Fund aims to positively influence the long-term trajectory of civilization by making grants that address global catastrophic risks, especially potential risks from advanced artificial intelligence and pandemics." (First sentence of fund page.)
  2. "Grants so far have prioritized projects addressing risks posed by artificial intelligence, and the grantmakers expect to continue this at least in the short term." (Located 1500 words into fund page.)

I'd say 2 is clearly more accurate, and I think the feedback you've received about donors being surprised at how many AI grants were made suggests I'm not alone.
Could you operationalize "more accurately" a bit more? Both sentences match my impression of the fund. The first is more informative as to what our aims are; the second is more informative as to the details of our historical (and immediate future) grant composition.

My sense is that the first will give people an accurate predictive model of the LTFF in a wider range of scenarios. For example, if next round we happen to receive an amazing application for a new biosecurity org, the majority of the round's funding could go to that. The first sentence would predict this, the second would not. But the second will give most people better predictions in a "business as usual" case, where our applications in future rounds are similar to those of current rounds.

My hunch is that knowing what our aims are is more important for most donors. In particular, many people reading this for the first time will be choosing between the LTFF and one of the other EA Funds, which focus on completely different cause areas. The high-level motivation seems more salient than our current grant composition for this purpose.

Ideally, of course, we'd communicate both. I'll think about whether we should add some kind of high-level summary of the % of grants to different areas under the "Grantmaking and Impact" section, which occurs earlier. My main worry is that this kind of thing is hard to keep up to date, and as described above could end up misleading donors in the other direction if our application pool suddenly changes.
Adam has mentioned elsewhere in this thread that he would prefer making more biosecurity grants. An interesting question here is how much the messaging should be descriptive of past donations vs. aspirational about where they want to donate more in the future.
Good point! I'd say ideally the messaging should describe both forward- and backward-looking donations, and if they differ, why. I don't think this needs to be particularly lengthy; a few sentences could do it.
I agree that both of these are among our biggest mistakes.

(Not sure if this is the best place to ask this. I know the Q&A is over, but on balance I think it's better for EA discourse for me to ask this question publicly rather than privately, to see if others concur with this analysis, or if I'm trivially wrong for boring reasons and thus don't need a response). 

Open Phil's Grantmaking Approaches and Process has the 50/40/10 rule, where (in my mediocre summary) 50% of a grantmaker's grants have to have the core stakeholders (Holden Karnofsky from Open Phil and Cari Tuna from Good Ventures) on board, 40% have to be grants where Holden and Cari are not clearly on board but can imagine being on board if they knew more, and up to 10% can be more "discretionary."
Reading between the lines, this suggests that up to 10% of funding from Open Phil will go to places Holden Karnofsky and Cari Tuna are not inside-view excited about, because they trust the grantmakers' judgements enough. 

Is there a similar (explicit or implicit) process at LTFF?

I ask because 

  • part of the original pitch for EA Funds, as I understood it, was that it would be able to evaluate higher-uncertainty, higher-reward donation oppor
... (read more)

This is an important question. It seems like there's an implicit assumption here that the highest-impact path for the fund is to make the grants that the fund managers' inside view rates as highest impact, regardless of whether we can explain the grant. This is a reasonable position (and thank you for your confidence!), but I think the fund being legible does have some significant advantages:

  1. Accountability generally seems to improve organisations' functioning. It'd be surprising if the LTFF were a complete exception to this, and legibility seems necessary for accountability.
  2. There's asymmetric information between us and donors, so less legibility will tend to mean fewer donations (and I think this is reasonable). So there's a tradeoff between greater counterfactual impact from scale vs. greater impact per $ moved.
  3. There may be community-building value in having a fund that is attractive to people without deep context on, or trust in, the fund managers.

I'm not sure what the right balance of legibility vs inside view is for the LTFF. One possibility would be to split into a more inside view / trust-based fund, and a more legible and "safer" fund. Then donors can choose what... (read more)

Re: accountability. I'm not very familiar with the funds, but wouldn't retrospective evaluations like Linch's be more useful than legible reasoning? I feel like the grantees, and institutions like EA Funds with sufficiently long horizons, want to stay trusted actors in the longer run and so are sufficiently motivated to be trusted with some more inside-view decisions.

  • Trust from donors can still be gained by explaining a meaningful fraction of decisions.
  • Less legible bets may have higher EV.
  • I imagine funders will always be able to meaningfully explain at least some factors that informed them, even if some factors are hard to communicate.
  • Some donors may still not trust judgement sufficiently.
  • Maybe funded projects have measurable outcomes only far in the future (though probably there are useful proxies along the way).
  • Evaluation of funded projects takes effort (but I imagine you want to do this anyway).
(Looks like this sentence got cut off in the middle) 
Thanks, fixed.
To be clear, this is not my all-things-considered position. Rather, I think this is a fairly significant possibility, and I'd favor an analogue of Open Phil's 50/40/10 rule (or something a little more aggressive) over, e.g., whatever the socially mediated equivalent of full discretionary control by the specific funders would be.

This seems like a fine compromise that I'm in the abstract excited about, though of course it depends a lot on implementation details.

This is really good to hear!
I do indeed think there has been pressure towards lower-risk grants, am not very happy about it, and think it has reduced the expected value of the fund by a lot. I am reasonably optimistic about that changing again in the future, but it's one of the reasons why I've become somewhat less engaged with the fund. In particular, Alex Zhu leaving the fund was, I think, a great loss on this dimension.
Jonas Vollmer:
I think you, Adam, and Oli covered a lot of the relevant points. I'd add that the LTFF's decision-making is based on the average score vote from the different fund managers, which allows grants to go through in scenarios where one person is very excited and the others aren't, or are mildly against the grant. I.e., the mechanism allows an excited minority to make a grant that wouldn't be approved by the majority of the committee.

Overall, the mechanism strikes me as near-optimal. (Perhaps we should lower the threshold for making grants a bit further.) I do think the LTFF might be slightly too risk-averse, and splitting the LTFF into a "legible longtermist fund" and a "judgment-driven longtermist fund", to remove pressure from donors towards the legible version, seems a good idea and is tentatively on the roadmap.
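The averaging mechanism described above can be sketched in a few lines. The numeric scale and the threshold of 2.0 here are illustrative assumptions, not the fund's actual voting scale; the point is just that averaging lets one enthusiastic vote carry a grant past several lukewarm ones:

```python
def approve(scores, threshold=2.0):
    """Approve a grant if the managers' average score meets the threshold.

    The score scale and threshold are hypothetical; averaging means one
    very excited vote can outweigh several lukewarm ones.
    """
    return sum(scores) / len(scores) >= threshold

print(approve([5, 1, 1, 1]))  # one champion among lukewarm votes: average 2.0 -> True
print(approve([1, 1, 1, 1]))  # uniformly lukewarm: average 1.0 -> False
```

Contrast this with simple majority voting, under which the first grant would fail 1-3.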

How much room for additional funding does the LTFF have? Do you have an estimate of how much money you could take on and still achieve the same ROI on the marginal dollar donated?

Really good question! 

We currently have ~$315K in the fund balance.* My personal median guess is that we could use $2M over the next year while maintaining this year's bar for funding. This would be:

  • $1.7M more than our current balance
  • $500K more per year than we’ve spent in previous years
  • $800K more than the total amount of donations received in 2020 so far
  • $400K more than a naive guess for what the total amount of donations received will be in all of 2020. (That is, if we wanted a year of donations to pay for a year of funding, we would need $400K more in donations next year than what we got this year.)

Reasoning below:

Generally, we fund anything above a certain bar, without accounting explicitly for the amount of money we have. According to this policy, for the last two years, the fund has given out ~$1.5M per year, or ~$500K per grant round, and has not accumulated a significant buffer. 

This round had an unusually large number of high-quality applicants. We spent $500K, but we pushed two large grant decisions to our next payout round, and several of our applicants happened to receive money from another source just before we communicated our funding decision. This mak... (read more)
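As a back-of-envelope check, the first two deltas in the list above follow directly from the stated figures (a minimal sketch; amounts in US dollars):

```python
# Reproduce the funding-gap deltas from the figures quoted above.
balance = 315_000         # current fund balance (~$315K)
target = 2_000_000        # median guess for what could be deployed at this year's bar
annual_spend = 1_500_000  # ~what the fund has granted per year for the last two years

print(target - balance)       # 1685000, i.e. ~$1.7M more than the current balance
print(target - annual_spend)  # 500000 more per year than past spending
```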

Do you have a 3-to-10-year vision for what the Long-Term Future Fund looks like? Do you expect it to stay mostly the same (perhaps with added revenue), or to undergo large structural changes?

As mentioned in the original post, I’m not a Fund manager, but I sometimes advise the LTFF as part of my role as Head of EA Funds, and I’ve also been thinking about the longer-term strategy for EA Funds as a whole.

Some thoughts on this question:

  • LTFF strategy: There is no official 3-10 year vision or strategy for the LTFF yet, but I hope we will get there sometime soon. My own best guess for the LTFF’s vision (which I haven’t yet discussed with the LTFF) is: ‘Thoughtful people have the resources they need to successfully implement highly impactful projects to improve the long-term future.’ My best guess for the LTFF’s mission/strategy is ‘make judgment-driven grants to individuals and small organizations and proactively seed new longtermist projects.’ A plausible goal could be to allocate $15 million per year to effective longtermist projects by 2025 (where ‘effective’ means something like ‘significantly better than Open Phil’s last dollar, similar to the current quality of grants’).
  • Grantmaking capacity: To get there, we need 1) more grantmaking capacity (especially for active grantmaking), 2) more ideas that would be impactful if implemented well, and 3) more people capable of impl
... (read more)

Perhaps EA Funds shouldn’t focus on grantmaking as much: At a higher level, I’m not sure whether EA Funds’ strategy should be to build a grantmaking organization, or to become the #1 website on the internet for giving effectively, or something else.


I found this point interesting, and have a vague intuition that EA Funds (and especially the LTFF) are really trying to do two different things:

  1. Having a default place for highly engaged EAs to donate that is willing to take on large risks, fund things that seem weird, and rely heavily on social connections, the community, and grantmaker intuitions
  2. Having a default place for risk-neutral donors who feel value-aligned with EA, but don't necessarily have high trust in the community, to donate to

Having something doing (1) seems really valuable, and I would feel sad if the LTFF reined in the kinds of things it funded to have a better public image. But I also notice that, e.g., when giving donation advice to friends who broadly agree with EA ideas but aren't really part of the community, I don't feel comfortable recommending EA Funds. And I think a bunch of the grants would seem weird to anyone with moderately skeptical priors. (This is pa... (read more)

This sounds right to me. Do you mean this as distinct from Jonas's suggestion? It seems to me that that could address this issue well. But maybe you think the other institution should have a more different structure, or be totally separate from EA Funds?

FWIW, my initial reaction is: "Seems like it should be very workable? Just mostly donate to organisations that have relatively easy-to-understand theories of change, have already developed a track record, and/or have mainstream signals of credibility or prestige (e.g. affiliations with impressive universities). E.g., Center for Health Security, FHI, GPI, maybe CSET, maybe 80,000 Hours, maybe specific programs from prominent non-EA think tanks."

Do you think this is harder than I'm imagining? Or maybe that the ideal would be to give to different types of things?
Neel Nanda:
Nah, I think Jonas's suggestion would be a good implementation of what I'm suggesting. Though as part of this, I'd want the LTFF to be less public-facing and obvious: if someone googled 'effective altruism longtermism donate', I'd want them to be pointed to this new fund.

Hmm, I agree that a version of this fund could be implemented pretty easily, e.g. just make a list of the top 10 longtermist orgs and give 10% to each. My main concern is that it seems easy to do in a fairly disingenuous and manipulative way, if we expect all of its money to just funge against Open Phil. And I'm not sure how to do it well and ethically.
Jonas Vollmer:
Yeah, we could simply explain transparently that it would funge with Open Phil's longtermist budget.

Are there any areas covered by the fund's scope where you'd like to receive more applications?

I’d overall like to see more work that has a solid longtermist justification but isn't as close to existing longtermist work. It seems like the LTFF might be well-placed to encourage this, since we provide funding outside of established orgs. This round, we received many applications from people who weren’t very engaged with the existing longtermist community. While these didn’t end up meeting our bar, some of the projects were fairly novel and good enough to make me excited about funding people like this in general.

There are also lots of particular less-established directions where I’d personally be interested in seeing more work, e.g.:

  • Work on structured transparency tools for detecting risks from rogue actors
  • Work on information security’s effect on AI development
  • Work on the offense - defense balance in a world with many advanced AI systems
  • Work on the likelihood and moral value of extraterrestrial life
  • Work on increasing institutional competence, particularly around existential risk mitigation
  • Work on effectively spreading longtermist values outside of traditional movement-building

These are largely a reflection of what I happen to have been thinking about recently, and definitely not my fully-endorsed answer to this question; I'd like to spend time talking to others and coming to more stable conclusions about the specific work the LTFF should encourage more of.

These are very much a personal take; I'm not sure whether others on the fund would agree.

  1. Buying extra time for people already doing great work. A lot of high-impact careers pay pretty badly: many academic roles (especially outside the US), some non-profit and think-tank work, etc. There are certainly diminishing returns to money, and I don't want the longtermist community to engage in zero-sum consumption of Veblen goods. But there are also plenty of things that are solid investments in your productivity: a comfortable home office, a modern computer, ordering takeaway or hiring cleaners, enough runway to avoid financial insecurity, etc.

    Financial needs also vary a fair bit from person to person. I know some people who are productive and happy living off Soylent and working on a laptop on their bed, whereas I'd quickly burn out doing that. Others might have higher needs than me, e.g. if they have financial dependents.

    As a general rule, if I'd be happy to fund someone for $Y/year if they were doing this work by themselves, and they're getting paid $X/year by their employer to do this work, I think I should be happy to pay the difference $(Y-X)/year provided the applicant has

... (read more)
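The top-up rule above amounts to a simple formula; this sketch (with made-up salary figures) just makes the arithmetic explicit:

```python
def top_up(fund_bar, employer_salary):
    """Annual top-up grant: the gap between what the fund would happily pay
    someone for this work ($Y/year) and what their employer pays ($X/year).
    Never negative: if they already earn above the bar, no top-up is needed.
    """
    return max(0, fund_bar - employer_salary)

print(top_up(60_000, 45_000))  # hypothetical figures -> 15000
print(top_up(60_000, 70_000))  # already paid above the bar -> 0
```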

What is the LTFF's position on whether we're currently at an extremely influential time for direct work? I saw that there was a recent grant on research into patient philanthropy, but most of the grants seem to be made from the perspective of someone who thinks that we are at "the hinge of history". Is that true?

At least for me the answer is yes: I think the arguments for the hinge of history are pretty compelling, and I have not seen any compelling counterarguments. I think the comments on Will's post (the only post I know of arguing against the hinge of history hypothesis) are basically correct and remove almost all the basis I can see for Will's arguments. See also Buck's post on the same topic.

I think this century is likely to be extremely influential, but there's likely important direct work to do at many points in this century. Both patient philanthropy projects we funded have relevance to that timescale: I'd like to know how best to allocate longtermist resources between direct work, investment, and movement-building over the coming years, and I'm interested in how philanthropic institutions might change.

I also think it's worth spending some resources thinking about scenarios where this century isn't extremely influential.

Jonas Vollmer:
Whether we are at the "hinge of history" is a question of degree; different moments in history are influential to different extents. I personally think the current moment is likely very influential, such that I want to spend a significant fraction of the resources we have now, and I think on the current margin we should probably be spending more. I think this could change over the coming years, though.

What are you not excited to fund?

Of course there are lots of things we would not want to (or cannot) fund, so I'll focus on things which I would not want to fund, but which someone reading this might have been interested in supporting or applying for.

  1. Organisations or individuals seeking influence, unless they have a clear plan for how to use that influence to improve the long-term future, or I have an exceptionally high level of trust in them

    This comes up surprisingly often. A lot of think-tanks and academic centers fall into this trap by default. A major way in which non-profits sustain themselves is by dealing in prestige: universities selling naming rights being a canonical example. It's also pretty easy to justify to oneself: of course you have to make this one sacrifice of your principles, so you can do more good later, etc.

    I'm torn on this because gaining leverage can be a good strategy, and indeed it seems hard to see how we'll solve some major problems without individuals or organisations pursuing this. So I wouldn't necessarily discourage people from pursuing this path, though you might want to think hard about whether you'll be able to avoid value drift. But there's a big information asymmetry as a dono

... (read more)
I agree that both of these are among the top 5 things that I've encountered that make me unexcited about a grant.

Like Adam, I’ll focus on things that someone reading this might be interested in supporting or applying for. I want to emphasize that this is my personal take, not representing the whole fund, and I would be sad if this response stopped anyone from applying -- there’s a lot of healthy disagreement within the fund, and we fund lots of things where at least one person thinks it’s below our bar. I also think a well-justified application could definitely change my mind.

  1. Improving science or technology, unless there’s a strong case that the improvement would differentially benefit existential risk mitigation (or some other aspect of our long-term trajectory). As Ben Todd explains here, I think this is unlikely to be as highly leveraged for improving the long-term future as trajectory-changing efforts. I don’t think there’s a strong case that generally speeding up economic growth is an effective existential risk intervention.
  2. Climate change mitigation. From the evidence I’ve seen, I think climate change is unlikely to be either directly existentially threatening or a particularly highly-leveraged existential risk factor. (It’s also not very neglected.) But I could be excited about funding
... (read more)
Jonas Vollmer (6 points, 2y):
(I drafted this comment earlier and feel like it's largely redundant by now, but I thought I might as well post it.)

I agree with what Adam and Asya said. I think many of those points can be summarized as ‘there isn’t a compelling theory of change for this project to result in improvements in the long-term future.’ Many applicants have great credentials, impressive connections, and a track record of getting things done, but their ideas and plans seem optimized for some goal other than improving the long-term future, and it would be a suspicious convergence if they were excellent for the long-term future as well. (If grantseekers don’t try to make the case for this in their application, I try to find out myself if this is the case, and the answer is usually ‘no.’)

We’ve received applications from policy projects, experienced professionals, and professors (including one with tens of thousands of citations), but ended up declining largely for this reason. It’s worth noting that these applications aren’t bad – often, they’re excellent – but they’re only tangentially related to what the LTFF is trying to achieve.

What are you excited to fund?

A related question: are there categories of things you'd be excited to fund, but haven't received any applications for so far?

I think the long-termist and EA communities seem too narrow on several important dimensions:

  • Methodologically there are several relevant approaches that seem poorly represented in the community. A concrete example would be having more people with a history background, which seems critical for understanding long-term trends. In general I think we could do better interfacing with the social sciences and other intellectual movements.

    I do think there are challenges here. Most fields are not designed to answer long-term questions. For example, history is often taught by focusing on particular periods, whereas we are more interested in trends that persist across many periods. So the first people joining from a particular field are going to need to figure out how to adapt their methodology to the unique demands of long-termism.

    There's also risks from spreading ourselves too thin. It's important we maintain a coherent community that's able to communicate with each other. Having too many different methodologies and epistemic norms could make this hard. Eventually I think we're going to need to specialize: I expect different fields will benefit from different norms and heuristics. But righ

... (read more)
Jonas Vollmer (7 points, 2y):
(As mentioned in the original post, I’m not a Fund manager, but I sometimes advise the LTFF as part of my role as Head of EA Funds.)

I agree with Adam and Asya. Some quick further ideas off the top of my head:

  • More academic teaching buy-outs. I think there are likely many longtermist academics who could get a teaching buy-out but aren’t even considering it.
  • Research into the long-term risks (and potential benefits) of genetic engineering.
  • Research aimed at improving cause prioritization methodology. (This might be a better fit for the EA Infrastructure Fund, but it’s also relevant to the LTFF.)
  • Open access fees for research publications relevant to longtermism, such that this work is available to anyone on the internet without any obstacles, plausibly increasing readership and citations.
  • Research assistants for academic researchers (and for independent researchers if they have a track record and there’s no good organization for them).
  • Books about longtermism-relevant topics.
Neel Nanda (3 points, 2y):
How important is this in the context of, e.g., Sci-Hub existing?
Jonas Vollmer (2 points, 2y):
Not everyone uses Sci-Hub, and even if they do, open access still removes trivial inconveniences. But yeah, Sci-Hub and the fact that PDFs (often preprints) are usually easy to find even when a paper isn't open access makes me a bit less excited.
That's really interesting to read, thanks very much! (Both for this answer and for the whole AMA exercise)

I've already covered in this answer areas where we don't make many grants but I would be excited about us making more grants. So in this answer I'll focus on areas where we already commonly make grants, but would still like to scale this up further.

I'm generally excited to fund researchers when they have a good track record, are focusing on important problems, and when the research problem is likely to slip through the cracks of other funders or research groups. For example, distillation-style research, or work that is speculative or doesn't neatly fit into an existing discipline.

Another category, which is a bit harder to define, is grants we have a comparative advantage at evaluating. This could be that one of the fund managers happens to already be an expert in the area and has a lot of context. Or maybe the application is time-sensitive and we're just about to start evaluating a grant round. In these cases the counterfactual impact is higher: these grants are less likely to be made by other donors.

LTF covers a lot of ground. How do you prioritize between different cause areas within the general theme of bettering the long term future?

The LTFF chooses grants to make from our open application rounds. Because of this, our grant composition depends a lot on the composition of applications we receive. Although we may of course apply a different bar to applications in different areas, the proportion of grants we make certainly doesn't represent what we think is the ideal split of total EA funding between cause-areas.

In particular, I tend to see more variance in our scores between applications in the same cause-area than I do between cause-areas. This is likely because most of our applications are for speculative or early-stage projects. Given this, if you're reading this and are interested in applying to the LTFF but haven't seen us fund projects in your area before -- don't let that put you off. We're open to funding things in a very broad range of areas provided there's a compelling long-termist case.

Because cause prioritization isn't actually that decision relevant for most of our applications, I haven't thought especially deeply about it. In general, I'd say the fund is comparably excited about marginal work in reducing long-term risks from AI, biosafety, and general longtermist macrostrategy and capacity buildin... (read more)

What are the most common reasons for rejection for applications of the Long-Term Future Fund?

Filtering for obvious misfits, I think the majority reason is that I don't think the project proposal will be sufficiently valuable for the long-term future, even if executed well. The minority reason is that there isn't strong enough evidence that the project will be executed well.

Sorry if this is an unsatisfying answer-- I think our applications are different enough that it’s hard to think of common reasons for rejection that are more granular. Also, often the bottom line is "this seems like it could be good, but isn't as good as other things we want to fund". Here are some more concrete kinds of reasons that I think have come up at least more than once:

  • Project seems good for the medium-term future, but not for the long-term future
  • Applicant wants to learn the answer to X, but X doesn't seem like an important question to me
  • Applicant wants to learn about X via doing Y, but I think Y is not a promising approach for learning about X
  • Applicant proposes a solution to some problem, but I think the real bottleneck in the problem lies elsewhere
  • Applicant wants to write something for a particular audience, but I don’t think that writing will be received well by that audience
  • Project would be
... (read more)
Hey Asya! I've seen that you've received a comment prize on this. Congratulations! I have found it interesting. I was wondering: you give these two reasons for rejecting a funding application:

  • Project would be good if executed exceptionally well, but applicant doesn't have a track record in this area, and there are no references that I trust to be calibrated to vouch for their ability.
  • Applicant wants to do research on some topic, but their previous research on similar topics doesn't seem very good.

My question is: what method would you use to evaluate the track record of someone who has not done a Ph.D. in AI Safety, but rather on something like Physics (my case :) )? Do you expect the applicant to have some track record in AI Safety research? I do not plan on applying for funding in the short term, but I think I would find some intuition on this valuable. I also ask because I find it hard to calibrate myself on the quality of my own research.

Hey! I definitely don't expect people starting AI safety research to have a track record doing AI safety work-- in fact, I think some of our most valuable grants are paying for smart people to transition into AI safety from other fields. I don't know the details of your situation, but in general I don't think "former physics student starting AI safety work" fits into the category of "project would be good if executed exceptionally well". In that case, I think most of the value would come from supporting the transition of someone who could potentially be really good, rather than from the object-level work itself.

In the case of other technical Ph.D.s, I generally check whether their work is impressive in the context of their field, whether their academic credentials are impressive, and what their references have to say. I also place a lot of weight on whether their proposal makes sense and shows an understanding of the topic, and on my own impressions of the person after talking to them.

I do want to emphasize that "paying a smart person to test their fit for AI safety" is a really good use of money from my perspective-- if the person turns out to be good, I've in some sense paid for a whole lifetime of high-quality AI safety research. So I think my bar is not as high as it is when evaluating grant proposals for object-level work from people I already know.

Most common is definitely that something doesn't really seem very relevant to the long-term future (concrete example: "Please fund this local charity that helps people recycle more"). This is probably driven by people applying with the same project to lots of different grant opportunities; at least, that's how the applications often read. I would have to think a bit more about patterns that apply to the applications that pass the initial filter (i.e. are promising enough to be worth a deeper investigation).

Do you think it's possible that, by only funding individuals/organisations that actually apply for funding, you are missing out on even better funding opportunities for individuals or organisations that didn't apply for some reason?

If yes, one possible remedy might be putting more effort into advertising the fund so that you get more applications. Alternatively, you could just decide that you won't be limited by the applications you receive and that you can give money to individuals/organisations who don't actually apply for funding (but could still use it well). What do you think about these options?

Yes, I think we're definitely limited by our application pool, and it's something I'd like to change.

I'm pretty excited about the possibility of getting more applications. We've started advertising the fund more, and in the latest round we got the highest number of applications we rated as good (score >= 2.0, where 2.5 is the funding threshold). This is about 20-50% more than the long-term trend, though it's a bit hard to interpret (our scores are not directly comparable across time). Unfortunately, the percentage of good applications also dropped this round, so we need to avoid overly indiscriminate outreach to keep the review burden manageable.

I'm most excited about more active grant-making. For example, we could post proposals we'd like to see people work on, or reach out to people in particular areas to encourage them to apply for funding. Currently we're bottlenecked on fund manager time, but we're working on scaling that.

I'd be hesitant about funding individuals or organisations that haven't applied -- our application process is lightweight, so if someone chooses not to apply even after we prompt them, that seems like a bad sign. A possible exception would be larger organisations that already make the information we need available for assessment. Right now I'm not excited about funding more large organisations, since I think the marginal impact there is lower, but if the LTFF had a lot more money to distribute then I'd want to scale up our organisation grants.
Jack Malde (1 point, 2y):
Thanks for this reply. Active grant-making sounds like an interesting idea!
Good question! Relatedly, are there common characteristics among people/organizations who you think would make promising applicants but often don't apply? Put another way, who would you encourage to apply who likely hasn't considered applying?

A common case is people who are just shy to apply for funding. I think a lot of people feel awkward about asking for money. This makes sense in some contexts - asking your friends for cash could have negative consequences! And I think EAs often put additional pressure on themselves: "Am I really the best use of this $X?" But of course as a funder we love to see more applications: it's our job to give out money, and the more applications we have, the better grants we can make.

Another case is people (wrongly) assuming they're not good enough. I think a lot of people underestimate their abilities, especially in this community. So I'd encourage people to just apply, even if you don't think you'll get it.

Do you feel that someone who had applied unsuccessfully and then re-applied for a similar project (but perhaps having gathered more evidence) would be more likely, less likely, or equally likely to get funding than someone submitting an identical application, but who had chosen not to apply the first time and so had never been rejected?

It feels easy to get into the mindset of "Once I've done XYZ, my application will be stronger, so I should do those things before applying", and if that's a bad line of reasoning to use (which I suspect it might be), some explicit reassurance might result in more applications.

I think definitely more or equally likely. :) Please apply!
Jonas Vollmer (6 points, 2y):
Another one is that people assume we are inflexible in some way (e.g., constrained by maximum grant sizes or fixed application deadlines), but we can often be very flexible in working around those constraints, and have done that in the past.

Do you have any plans to become more risk tolerant?

Without getting too much into details, I disagree with some things you've chosen not to fund, and as an outsider view it as being too unwilling to take risks on projects, especially projects where you don't know the requesters well, and truly pursue a hits-based model. I really like some of the big bets you've taken in the past on, for example, funding people doing independent research who then produce what I consider useful or interesting results, but I'm somewhat hesitant around donating to LTF because I... (read more)

From an internal perspective I'd view the fund as being fairly close to risk-neutral. We hear around twice as many complaints that we're too risk-tolerant as that we're too risk-averse, although of course the people who reach out to us may not be representative of our donors as a whole.

We do explicitly try to be conservative around things with a chance of significant negative impact to avoid the unilateralist's curse. I'd estimate this affects less than 10% of our grant decisions, although the proportion is higher in some areas, such as community building, biosecurity and policy.

It's worth noting that, unless I see a clear case for a grant, I tend to predict a low expected value -- not just a high-risk opportunity. This is because I think most projects aren't going to positively influence the long-term future -- otherwise the biggest risks to our civilization would already be taken care of. Based on that prior, it takes significant evidence to update me in favour of a grant having substantial positive expected value. This produces similar decisions to risk-aversion with a more optimistic prior.

Unfortunately, it's hard to test this prior: we'd need to see how good the grants we didn't make w... (read more)

Gordon Seidoh Worley (9 points, 2y):
This is pretty exciting to me. Without going into too much detail, I expect to have a large amount of money to donate in the near future, and LTF is basically the best option I know of (in terms of giving based on what I most want to give to) for the bulk of that money short of having the ability to do exactly this. I'd still want LTF as a fall back for funds I couldn't figure out how to better allocate myself, but the need for tax deductibility limits my options today (though, yes, there are donor lotteries).
Jonas Vollmer (2 points, 2y):
Interested in talking more about this – sent you a PM! EDIT: I should mention that this is generally pretty hard to implement, so there might be a large fee on such grants, and it might take a long time until we can offer it.

Can you clarify your models on which kinds of projects could cause net harm? My impression is that there are some thoughts that funding many things would be actively harmful, but I don't feel like I have a great picture of the details here.

If there are such models, are there possible structural solutions to identifying particularly scalable endeavors? I'd hope that we could eventually identify opportunities for long-term impact that aren't "find a small set of particularly highly talented researchers", but things more like, "spend X dollars advertising Y in a way that could scale" or "build a sizeable organization of people that don't all need to be top-tier researchers".

Some things I think could actively cause harm:

  • Projects that accelerate technological development of risky technologies without corresponding greater speedup of safety technologies
  • Projects that result in a team covering a space or taking on some coordination role that is worse than the next person who could have come along
  • Projects that engage with policymakers in an uncareful way, making them less willing to engage with longtermism in the future, or causing them to make bad decisions that are hard to reverse
  • Movement-building projects that give a bad first impression of longtermists
  • Projects that risk attracting a lot of controversy or bad press
  • Projects with ‘poisoning the well’ effects where if it’s executed poorly the first time, someone trying it again will have a harder time-- e.g., if a large-scale project doing EA outreach to highschoolers went poorly, I think a subsequent project would have a much harder time getting buy-in from parents.

More broadly, I think, as Adam notes above, that the movement grows as a function of its initial composition. I think that even if the LTFF had infinite money, this pushes against funding every project where we expect the EV of the object-level wo... (read more)

I agree with the above response, but I would like to add some caveats because I think potential grant applicants may draw the wrong conclusions otherwise:

If you are the kind of person who thinks carefully about these risks, are likely to change your course of action if you get critical feedback, and proactively sync up with the main people/orgs in your space to ensure you’re not making things worse, I want to encourage you to try risky projects nonetheless, including projects that have a risk of making things worse. Many EAs have made mistakes that caused harm, including myself (I mentioned one of them here), and while it would have been good to avoid them, learning from those mistakes also helped us improve our work.

My perception is that “taking carefully calculated risks” won’t lead to your grant application being rejected (perhaps it would even improve your chances of being funded because it’s hard to find people who can do that well) – but “taking risks without taking good measures to prevent/mitigate them” will.

Ozzie Gooen (7 points, 2y):
Thanks so much for this, that was informative. A few quick thoughts:

I’ve heard this one before and I could sympathize with it, but it strikes me as a red flag that something is going a bit wrong. (I’m not saying that this is your fault, but am flagging it as an issue for the community more broadly.) Big companies often don’t have the ideal teams for new initiatives. Often urgency is very important, so they put something together relatively quickly. If it doesn’t work well, it’s not that big of a deal: they disband the team and have them go to other projects, and perhaps find better people to take their place. In comparison, with nonprofits it’s much more difficult. My read is that we sort of expect nonprofits to never die, which means we need to be *very very* sure about them before setting them up. But if this is the case it would be obviously severely limiting. The obvious solution would be to have bigger orgs with more possibilities. Perhaps if specific initiatives were going well and demanded independence, spinning them out could happen later on, but hopefully not for the first few years.

Some ideas I’ve had:
  • Experiment with advertising campaigns that could be clearly scaled up. Some of them seem linearly useful up to millions of dollars.
  • Add additional resources to make existing researchers more effective.
  • Buy the rights to books and spend on marketing for the key ones.
  • Pay for virtual assistants and all other things that could speed researchers up.
  • Add additional resources to make nonprofits more effective, easily.
  • Better budgets for external contractors.
  • Focus heavily on funding non-EA projects that are still really beneficial. This could mean an emphasis on funding new nonprofits that do nothing but rank and do strategy for more funding.

While it might be a strange example, the wealthy, or in particular, the Saudi government, are examples of how to spend lots of money with relatively few trusted people, semi-successfully. Having co
To clarify, I don’t think that most projects will be actively harmful -- in particular, the “projects that result in a team covering a space that is worse than the next person who could have come along” case seems fairly rare to me, and would mostly apply to people who’d want to do certain movement-facing work or engage with policymakers. From a purely hits-based perspective, I think there’s still a dearth of projects that have a non-trivial chance of being successful, and this is much more limiting than projects being not as good as the next project to come along.

I agree with this. Maybe another thing that could help would be to have safety nets such that EAs who overall do good work could start and wind down projects without being worried about sustaining their livelihood or the livelihood of their employees? Though this could also create some pretty bad incentives.

Thanks for these, I haven’t thought about this much in depth and think these are overall very good ideas that I would be excited to fund. In particular:

I agree with this; I think there’s a big opportunity to do better and more targeted marketing in a way that could scale. I’ve discussed this with people and would be interested in funding someone who wanted to do this thoughtfully.

Also super agree with this. I think an unfortunate component here is that many altruistic people are irrationally frugal, including me -- I personally feel somewhat weird about asking for money to have a marginally more ergonomic desk set-up or an assistant, but I generally endorse people doing this and would be happy to fund them (or other projects making researchers more effective).

I think historically, people have found it pretty hard to outsource things like this to non-EAs, though I agree with this in theory.

---

One total guess at an overarching theme for why we haven’t done some of these things already is that people implicitly model longtermist movement growth on the growth of academic fields, which grow via slowly
Jonas Vollmer (3 points, 2y):
Again, I agree with Asya. A minor side remark: As someone who has experience with hiring all kinds of virtual and personal assistants for myself and others, I think the problem here is not the money, but finding assistants who will actually do a good job, and organizing the entire thing in a way that’s convenient for the researchers/professionals who need support. More than half of the assistants I’ve worked with cost me more time than they saved me. Others were really good and saved me a lot of time, but it’s not straightforward to find them. If someone came up with a good proposal for this, I’d want to fund them and help them.

Similar points apply to some of the other ideas. We can’t just spend money on these things; we need to receive corresponding applications (which generally hasn’t happened) or proactively work to bring such projects into existence (which is a lot of work).
Jonas Vollmer (4 points, 2y):
There will likely be a more elaborate reply, but these two links could be useful.
Ozzie Gooen (2 points, 2y):

What crucial considerations and/or key uncertainties do you think the EA LTF fund operates under?

Some related questions with slightly different framings:  * What types/lines of research do you expect would be particularly useful for informing the LTFF's funding decisions? * Do you have thoughts on what types/lines of research would be particularly useful for informing other funders'  funding decisions in the longtermism space? * Do you have thoughts on how the answers to those two questions might differ?
I'd be interested in better understanding the trade-off between independent vs established researchers. Relative to other donors we fund a lot of independent research. My hunch here is that most independent researchers are less productive than if they were working at organisations -- although, of course, for many of them that's not an option (geographical constraints, organisational capacity, etc). This makes me set a somewhat higher bar for funding independent research. Some other fund managers disagree with me and think independent researchers tend to be more productive, e.g. due to bad incentives in academic and industry labs.

I expect distillation-style work to be particularly useful. I expect there's already relevant research here: e.g. case studies of the most impressive breakthroughs, studies looking at different incentives in academic funding, etc. There probably won't be a definitive answer, so it'd also be important that I trust the judgement of the people involved, or have a variety of people with different priors going in coming to similar conclusions.

While larger donors can suffer from diminishing returns, there are sometimes also increasing returns to scale. One important thing larger donors can do that isn't really possible at the LTFF's scale is to found new academic fields. More clarity into how to achieve this and have the field go in a useful direction would be great.

It's still mysterious to me how academic fields actually come into being. Equally importantly, what predicts whether they have good epistemics, whether they have influence, etc? Clearly part of this is the domain of study (it's easier to get rigorous results in category theory than economics; it's easier to get policymakers to care about economics than category theory). But I suspect it's also pretty dependent on the culture created by early founders and the impressions outsiders form of the field. Some evidence for this is that some very closely related fields can end up going
Edit: I really like Adam's answer.

There are a lot of things I’m uncertain about, but I should say that I expect most research aimed at resolving these uncertainties not to provide strong enough evidence to change my funding decisions (though some research definitely could!). I do think weaker evidence could change my decisions if we had a larger number of high-quality applications to choose from. On the current margin, I’d be more excited about research aimed at identifying new interventions that could be promising.

Here's a small sample of the things that feel particularly relevant to grants I've considered recently. I'm not sure if I would say these are the most crucial:

  • What sources of existential risk are plausible?
    • If I thought that AI capabilities were perfectly entangled with their ability to learn human preferences, I would be unlikely to fund AI alignment work.
    • If I thought institutional incentives were such that people wouldn’t create AI systems that could be existentially threatening without taking maximal precautions, I would be unlikely to fund AI risk work at all.
    • If I thought our lightcone was overwhelmingly likely to be settled by another intelligent species similar to us, I would be unlikely to fund existential risk mitigation outside of AI.
  • What kind of movement-building work is effective?
    • Adam writes above how he thinks movement-building work that sacrifices quality for quantity is unlikely to be good. I agree with him, but I could be wrong about that. If I changed my mind here, I’d be more likely to fund a larger number of movement-building projects.
    • It seems possible to me that work that’s explicitly labeled as ‘movement-building’ is generally not as effective for movement-building as high-quality dir

Several comments have mentioned that CEA provides good infrastructure for making tax-deductible grants to individuals, and also that the LTFF often does, and is well suited to, make grants to individual researchers. Would it make sense for either the LTFF or CEA to develop some further guidelines about the practicalities of receiving and administering grants for individuals (or even non-charitable organisations) that are not familiar with this sort of income, to help funds get used effectively?
As a motivating example, when I recently received an L... (read more)

Jonas Vollmer (4 points, 2y):
Thanks for the input, we'll take this into account. We do provide tax advice for the US and UK, but we've also looked into expanding this. Edit: If you don't mind, could you let me know which jurisdiction was relevant to you at the time?
I received my LTF grant while living in Brazil (I forwarded the details of the Brazilian tax lawyer I consulted to CEA staff). However, I built up my grantee expectations while doing research in Australia and Sweden, and was happy they were also valid in Brazil.  My intuition is that most countries that allow either PhD students or postdocs to receive tax-free income for doing research at universities will probably also allow CEA grants to individuals to be declared in a tax-free manner, at least if the grant is for a research project.
Jonas Vollmer (2 points, 2y):
Makes sense, thanks!
Is that tax advice published anywhere? I'd assumed any grants I received in the UK would be treated as regular income, and if that's not the case it's a pleasant surprise!
Jonas Vollmer (4 points, 2y):
It's not public. If you like, you can PM me your email address and I can try asking someone to get in touch with you.

What would you like to fund, but can't because of organisational constraints? (e.g. investing in private companies is IIRC forbidden for charities).

It's actually pretty rare that we've not been able to fund something: I don't think this has come up at all while I've been on the fund (2 rounds), and I can only think of a handful of cases before. It helps that the fund knows some other private donors we can refer grants to (with the applicant's permission), so in the rare cases something is out of scope, we can often still get it funded. Of course, people who know we can't fund them because of the fund's scope may choose not to apply, so the true proportion of opportunities we're missing may be higher.

A big class of things the LTFF can't fund is political campaigns. I think that might be high-impact in some high-stakes elections, though I've not donated to campaigns myself, and I'm generally pretty nervous about anything that could make long-termism perceived as a partisan issue (which it obviously is not).

I don't think we'd often want to invest in private companies. As discussed elsewhere in this thread, we tend to find grants to individuals better than grants to orgs. Moreover, one of the attractive points of investing in a private company is that you may get a return on your investment. But I think the altruistic return on our current grants is pretty high, so I wouldn't want to lock up capital. If we had 10-100x more money to distribute, and so had to invest some of it to grant out later, then investing some proportion of it in companies where there's an altruistic upside might make more sense.
Jonas Vollmer · 2y
If a private company applied for funding to the LTFF and they checked the "forward to other funders" checkbox in their application, I'd refer them to private donors who can directly invest in private companies (and have done so once in the past, though they weren't funded).

What do you think is a reasonable amount of time to spend on an application to the LTFF?

Jonas Vollmer · 2y
If you're applying for funding for a project that's already well-developed (i.e. you have thought carefully about its route to value, what the roadmap looks like, etc.), 30-60 minutes should be enough, and further time spent polishing likely won't improve your chances of getting funding. If you don't have a well-developed project, it seems reasonable to add whichever amount of time it takes to develop the project in some level of detail on top of that.
Linda Linsefors · 2y
That's surprisingly short, which is great, by the way. I think most grants are not like this. That is, you can usually increase your chance of funding by spending a lot of time polishing an application, which leads to a sort of arms race among applicants where more and more time is wasted on polishing applications. I'm happy to hear that the LTFF does not reward such behavior. On the other hand, the same dynamic will still happen as long as people don't know that more polish will not help.

You can probably save a lot of time on the side of the applicants by:

* Stating how much time you recommend people spend on the application.
* Sharing some examples of successful applications (with the permission of the applicant) to show others what level and style of writing to aim for. I understand that no one application will be perfectly representative, but even just one example would still help, and several examples would help even more. Preferably the examples would be examples of good enough, rather than optimal, writing, assuming that you want people to be satisficers rather than maximizers with regard to application-writing quality.
Jonas Vollmer · 2y
On reflection, I actually think 1-4 hours seems more correct. That's still pretty short, and we'll do our best to keep it as quick and simple as possible. We're just updating the application form and had been planning to make the types of changes you're suggesting (though not sharing successful applications, but that could be interesting, too).

What percentage of people who apply for a transition grant from something else to AI Safety get approved? Is there anything you want to add to put this number in context?

What percentage of people who apply for funding for independent AI Safety research get approved? Is there anything you want to add to put this number in context?

For example, if there is a clear category of people who don't get funding because they clearly want to do something different than saving the long-term future, then this would be useful contextual information.

Jonas Vollmer · 2y
This isn't exactly what you asked, but the LTFF's acceptance rate of applications that aren't obvious rejections is ~15-30%.