Cross-posted to LessWrong.

One thing I've realized as a fund manager for the EA Long-Term Future Fund is that there are a lot of grants I'd like to make that never cross our desk, simply because potential applicants are too intimidated by us or don't realize that their idea is one we'd be willing to fund if only they applied.

To try to remedy that problem, I'm going to start offering the following service: if you have an idea for any way you could use money to help the long-term future, but aren't currently planning on applying for a grant from any grant-making organization, I want to hear about it. Feel free to send me a private message on the EA Forum or LessWrong. I promise I'm not that intimidating :)

Not only that: having talked about this with some of the other EA Funds managers, I found that many of them were willing to extend the same offer as well.

For example, here are some of the sorts of grants I’m often excited about but that I rarely see anyone apply for:

  • “I want to transition to a career in something longtermist, but that transition would be difficult for me financially and I’d like to have some extra financial reserves to make it easier.”
  • “I think I would be more productive in my longtermist job if I had more money to spend on things that would save me time.”
  • “I have an idea for a longtermist project I want to work on, but I don’t want to commit to definitely working on that project for the duration of a long grant and want freedom to change my mind and switch to a different project if I want.”
  • “I have an ambitious idea for a project that I think would benefit the long-term future, but I think it would take a lot of money, more than what I normally see LTFF grants being given out for.”

Really, though, I don't want to anchor anybody too much on these specific ideas. If you have an idea for any way you could use money to help the long-term future, I want to hear about it.

Comments (8)



Thank you, this is quite helpful! 

Does anybody know if the same applies to OpenPhil?

Trying my luck here, but would I also be able to get funding for academic projects (my research interests are in metascience/innovation/growth)?

Academic projects are definitely the sort of thing we fund all the time. I don't know whether the research you're doing is related to longtermism, but if you have an explanation of why you think your research would be valuable from a longtermist perspective, we'd love to hear it.

Since it was brought up to me, I also want to clarify that EA Funds can fund essentially anyone, including:

  • people who have a separate job but want to spend extra time doing an EA project,
  • people who don't have a Bachelor's degree or any other sort of academic credentials,
  • kids who are in high school but are excited about EA and want to do something,
  • fledgling organizations,
  • etc.

Hello! A quick question about deadlines. 

Context: I'm considering applying for PhD funding (Economics PhD on EA-aligned research).

The EA Funds' official website says: "The Long-Term Future Fund will respond with a funding decision within six weeks, and typically in just three weeks. If you need to hear back sooner (e.g., within just a few days), you can let us know in the application form, and we will see what we can do." (The same wording is used for the EAIF.)

Whereas the EA Forum "apply now" post says: "The upcoming application deadlines and funding decision dates are:

  • 7 Mar 2021, decision by 2 Apr 2021
  • 13 Jun 2021, decision by 9 Jul 2021
  • 3 Oct 2021, decision by 29 Oct 2021
  • 6 Feb 2022, decision by 4 Mar 2022
  • 5 Jun 2022, decision by 1 Jul 2022
  • 2 Oct 2022, decision by 28 Oct 2022"

(And the deadlines seem to apply to all three funds, not just the Animal Welfare Fund.)

The answer seems important in deciding whether I should send my application in by tomorrow.

I'm almost certain that the website is correct.

I think the post you link to is outdated. It was superseded by this newer post.

I'd therefore encourage you to submit your application!

(But note that I'm a fund manager for the EAIF, not the LTFF, and so cannot speak for the LTFF.)

Thank you very much!

I've updated that post.
