LTFF is running an Ask Us Anything! Most of the grantmakers at LTFF have agreed to set aside some time to answer questions on the Forum.
I (Linch) will make a soft commitment to answer one round of questions this coming Monday (September 4th) and another round the Friday after (September 8th).
We think that right now could be an unusually good time to donate. If you agree, you can donate to us here.
About the Fund
The Long-Term Future Fund aims to positively influence the long-term trajectory of civilization by making grants that address global catastrophic risks, especially potential risks from advanced artificial intelligence and pandemics. In addition, we seek to promote, implement, and advocate for longtermist ideas and to otherwise increase the likelihood that future generations will flourish.
In 2022, we disbursed ~250 grants worth ~$10 million. You can see our public grants database here.
Related posts
- LTFF and EAIF are unusually funding-constrained right now
- EA Funds organizational update: Open Philanthropy matching and distancing
- Long-Term Future Fund: April 2023 grant recommendations
- What Does a Marginal Grant at LTFF Look Like?
- Asya Bergal’s Reflections on my time on the Long-Term Future Fund
- Linch Zhang’s Select examples of adverse selection in longtermist grantmaking
About the Team
- Asya Bergal: Asya is the current chair of the Long-Term Future Fund. She also works as a Program Associate at Open Philanthropy. Previously, she worked as a researcher at AI Impacts and as a trader and software engineer for a crypto hedge fund. She's also written for the AI Alignment Newsletter and been a research fellow at the Centre for the Governance of AI at the Future of Humanity Institute (FHI). She has a BA in Computer Science and Engineering from MIT.
- Caleb Parikh: Caleb is the project lead of EA Funds. Caleb has previously worked on global priorities research as a research assistant at GPI, EA community building (as a contractor to the community health team at CEA), and global health policy.
- Linchuan Zhang: Linchuan (Linch) Zhang is a Senior Researcher at Rethink Priorities working on existential security research. Before joining RP, he worked on time-sensitive forecasting projects around COVID-19. Previously, he programmed for Impossible Foods and Google and has led several EA local groups.
- Oliver Habryka: Oliver runs Lightcone Infrastructure, whose main product is LessWrong. LessWrong has significantly influenced conversations around rationality and AGI risk, and its community is often credited with having realized the importance of topics such as AGI (and AGI risk), COVID-19, existential risk, and crypto much earlier than other comparable communities.
You can find a list of our fund managers in our request for funding here.
Ask Us Anything
We’re happy to answer any questions: marginal uses of money, how we approach grants, critiques or concerns you have in general, reservations you might have as a potential donor or applicant, etc.
There’s no real deadline for questions, but let’s say we have a soft commitment to focus on questions asked on or before September 8th.
Because we’re unusually funding-constrained right now, I’m going to shill again for donating to us.
If you have projects relevant to mitigating global catastrophic risks, you can also apply for funding here.
Personally, I'd like to see more work being done to make it easier for people to get into AI alignment without becoming involved in EA or the rationality community. I think there are lots of researchers, particularly in academia, who would potentially work on alignment but who for one reason or another either get rubbed the wrong way by EA/rationality or just don't vibe with it. And I think we're missing out on a lot of these people's contributions.
To be clear, I personally think EA and rationality are great, and I hope EA/rationality continue to be on-ramps to alignment; I just don't want them to be the ~only on-ramps to alignment.
[I realize I didn't answer your question literally, since there are some people working on this, but I figured you'd appreciate an answer to an adjacent question.]