LTFF is running an Ask Us Anything! Most of the grantmakers at LTFF have agreed to set aside some time to answer questions on the Forum.
I (Linch) will make a soft commitment to answer one round of questions this coming Monday (September 4th) and another round the Friday after (September 8th).
We think that right now could be an unusually good time to donate. If you agree, you can donate to us here.
About the Fund
The Long-Term Future Fund aims to positively influence the long-term trajectory of civilization by making grants that address global catastrophic risks, especially potential risks from advanced artificial intelligence and pandemics. In addition, we seek to promote, implement, and advocate for longtermist ideas and to otherwise increase the likelihood that future generations will flourish.
In 2022, we disbursed ~250 grants worth ~$10 million. You can see our public grants database here.
Related posts
- LTFF and EAIF are unusually funding-constrained right now
- EA Funds organizational update: Open Philanthropy matching and distancing
- Long-Term Future Fund: April 2023 grant recommendations
- What Does a Marginal Grant at LTFF Look Like?
- Asya Bergal’s Reflections on my time on the Long-Term Future Fund
- Linch Zhang’s Select examples of adverse selection in longtermist grantmaking
About the Team
- Asya Bergal: Asya is the current chair of the Long-Term Future Fund. She also works as a Program Associate at Open Philanthropy. Previously, she worked as a researcher at AI Impacts and as a trader and software engineer for a crypto hedge fund. She has also written for the AI Alignment Newsletter and been a research fellow at the Centre for the Governance of AI at the Future of Humanity Institute (FHI). She has a BA in Computer Science and Engineering from MIT.
- Caleb Parikh: Caleb is the project lead of EA Funds. He previously worked on global priorities research (as a research assistant at GPI), EA community building (as a contractor to the community health team at CEA), and global health policy.
- Linchuan Zhang: Linchuan (Linch) Zhang is a Senior Researcher at Rethink Priorities working on existential security research. Before joining RP, he worked on time-sensitive forecasting projects around COVID-19. Previously, he programmed for Impossible Foods and Google and has led several EA local groups.
- Oliver Habryka: Oliver runs Lightcone Infrastructure, whose main product is LessWrong. LessWrong has significantly influenced conversations around rationality and AGI risk, and its community is often credited with having realized the importance of topics such as AGI (and AGI risk), COVID-19, existential risk, and crypto much earlier than other comparable communities.
You can find a list of our fund managers in our request for funding here.
Ask Us Anything
We’re happy to answer any questions: marginal uses of money, how we approach grants, questions/critiques/concerns you have in general, what reservations you have as a potential donor or applicant, etc.
There’s no real deadline for questions, but let’s say we have a soft commitment to focus on questions asked on or before September 8th.
Because we’re unusually funding-constrained right now, I’m going to shill again for donating to us.
If you have projects relevant to mitigating global catastrophic risks, you can also apply for funding here.
In light of this (worries about contributing to AI capabilities and safetywashing) and/or general considerations around short timelines, have you considered funding work directly aimed at slowing down AI, as opposed to the traditional focus on AI alignment work? E.g. advocacy work focused on getting a global moratorium on AGI development in place (examples). I think this is by far the highest-impact thing we could be funding as a community (as there just isn't enough time for alignment research to bear fruit otherwise), and I would be very grateful if a fund or funding circle could be set up that is dedicated to this (this is what I'm personally focusing my donations on; I'd like to be joined by others).
Thank you. This is encouraging. Hopefully there will be more applications soon.