The LTFF recently switched to doing grant rounds; our first round closes on Saturday (deadline: end of day anywhere on Earth, 2025-Feb-15). I think you should consider submitting a quick application in the next 24 hours. We will likely consider applications submitted over the next few days in this round (unless we are overwhelmed with applications).
Apply now
In my personal view, I don't think there has been a better time to work on AI safety projects than right now. There is a clear-ish set of priorities, funders willing to pay for projects, and an increasing sense from the AI safety community that we might be close to the critical window for ensuring AI systems have a profoundly positive effect on society.[1]
I am particularly keen to see applications on:
- publicly communicating AI threat models and other societal implications
- securing AI systems in ways I don't expect labs to do by default
- getting useful safety research out of AI systems when the AI is powerful and scheming against you
- analysis of AI safety research agendas that might be especially good candidates for AIs (e.g. because they can be easily decomposed into subquestions that are easily checkable)
- new organisations that could use seed funding
- gatherings of various sizes and stakeholders for navigating the transition to powerful AI systems
- neglected technical AI governance research and fieldbuilding programs
- career transition grants for anyone considering working on any of the above
- areas that Open Philanthropy recently divested from
Other LTFF fund managers are excited about other areas; an area not being included in the list above is not a strong indicator that we aren't excited about it.
You can apply to the round here (deadline EOD anywhere 2025-Feb-15).
1. ^ We are also interested in funding other longtermist areas, though empirically they meet our bar much less often than AI safety areas.
When will the next round likely be?
Not sure right now, but probably sometime next quarter.