The Long-Term Future Fund (LTFF) is one of the EA Funds. Between Friday Dec 4th and Monday Dec 7th, we'll be available to answer any questions you have about the fund – we look forward to hearing from all of you!
The LTFF aims to positively influence the long-term trajectory of civilization by making grants that address global catastrophic risks, especially potential risks from advanced artificial intelligence and pandemics. In addition, we seek to promote, implement, and advocate for longtermist ideas, and to otherwise increase the likelihood that future generations will flourish.
Grant recommendations are made by a team of volunteer Fund Managers: Matt Wage, Helen Toner, Oliver Habryka, Adam Gleave, and Asya Bergal. We are also fortunate to be advised by Nick Beckstead and Nicole Ross. You can read our bios here. Jonas Vollmer, who is heading EA Funds, also provides occasional advice to the Fund.
You can read about how we choose grants here. Our previous grant decisions and rationale are described in our payout reports. We'd welcome discussion and questions regarding our grant decisions, but to keep discussion in one place, please post comments related to our most recent grant round in this post.
Please ask any questions you like about the fund, including but not limited to:
- Our grant evaluation process.
- Areas we are excited about funding.
- Coordination between donors.
- Our future plans.
- Any uncertainties or complaints you have about the fund. (You can also e-mail us at ealongtermfuture[at]gmail[dot]com for anything that should remain confidential.)
We'd also welcome more free-form discussion, such as:
- What should the goals of the fund be?
- What is the comparative advantage of the fund compared to other donors?
- Why would you/would you not donate to the fund?
- What, if any, goals should the fund have other than making high-impact grants? Examples could include: legibility to donors; holding grantees accountable; setting incentives; identifying and training grant-making talent.
- How would you like the fund to communicate with donors?
We look forward to hearing your questions and ideas!
Thanks for picking up the thread here Asya! I think I largely agree with this, especially about the competitiveness in this space. For example, with AI PhD applications, I often see extremely talented applicants rejected who I'm sure would have received an offer a few years ago.
I'm pretty happy to see the LTFF offering what is effectively "bridge" funding for people who don't quite meet the hiring bar yet, but who I think are likely to in the next few years. However, I'd be hesitant about heading towards a large fraction of people working independently long-term. I think there are huge advantages to the structure and mentorship an org can provide. If orgs aren't scaling up fast enough, then I'd prefer to focus on helping speed that up.
The main way I could see myself getting more excited about long-term independent research is if we saw flourishing communities forming amongst independent researchers. Efforts like LessWrong and the Alignment Forum help by providing infrastructure. But right now independent research still seems much worse than working for an org, especially if you want to go down any of the more traditional career paths later. That said, I'd love to be proven wrong here.