I’ve noticed that some people seem to have misconceptions about what kinds of grants EA Funds can make, so I put together a quick list of things we can fund that may be surprising to some of you.

(Reminder: our funding deadline for this round is March 7, though you can apply at any time of the year.)

  • EA Funds will consider building longer-term funding relationships, not just one-off grants.
    • Even though we typically make one-off grant decisions and have some turnover among fund managers, we can consider commitments to provide longer-term funding. We are also happy to otherwise help with the predictability of funding, e.g. by sharing our thoughts on how easy or hard we expect it to be to get funding in the future.
  • EA Funds can provide academic scholarships and teaching buy-outs.
    • We haven’t received a lot of applications for scholarships in the past, but the Long-Term Future Fund (LTFF) and EA Infrastructure Fund (EAIF) would be very excited about funding more of them.
    • Few graduate students and postdocs seem to be aware that they can be bought out of teaching duties, but sometimes this can be a great way to make more time for research.
  • EA Funds will consider funding organizations, including medium-sized ones, not just small projects.
    • The LTFF and EAIF are still unsure whether they will want to fund larger organizations in the longer term. Until then, they will consider funding organizations as long as they don’t have a comparative disadvantage for doing so. If Open Philanthropy has not seriously considered funding you, we will consider you, at least for now.
  • EA Funds will consider making large grants.
    • We have made grants larger than $250,000 in the past and will continue to consider them (potentially referring applicants to other funders along with our evaluation). We think our comparative advantage is evaluating grants that other funders aren’t aware of or don’t have the capacity to evaluate, which are typically small grants, but we are flexible and willing to consider exceptions.
  • The EAIF and LTFF can make grants at any time of the year and on short notice.
    • We run funding rounds because it saves us some effort per application. But if your project needs funding within a month and the next decision deadline is three months away, we can still make it happen.
    • If there were a project that would have a very large impact if funded within three days, and no impact otherwise, there’s a high chance that we would get it funded.
  • The EAIF and LTFF can make anonymized grants.
    • As announced here, we can get you funded without disclosing personal information about you in our public payout reports.
  • EA Funds can pass on applications to other funders.
    • In cases where we aren’t the right funder (e.g., because we don’t have sufficient funding, or it’s a for-profit start-up, or there is some other issue), we are open to passing along applications when we think it might be a good fit. We are in touch with Open Philanthropy, EA-aligned for-profit investors, and other funders, and they have expressed interest in receiving interesting applications from us.

In general, we will listen to the arguments rather than rigidly following self-imposed rules. We have few constraints and are good at finding workarounds for the ones we have (except for some legal ones). We want to help great projects succeed and will do what it takes to make that happen. If you are unsure whether EA Funds can fund something, the best working hypothesis is that it can.

 

Reminder: the current EA Funds round closes March 7th. Apply here, and see this article for more information.

(Note that the Global Health and Development Fund does not accept funding applications, so this post does not apply to it.)

Comments (14)



Some further, less important thoughts:

  • Some people who repeatedly didn’t get funded have been fairly vocal about that fact, creating the impression, at least among some people, that it’s really hard to get funded. I feel unhappy about this because it seems to discourage people from launching new things. The reason a proposal doesn’t get funded is usually quite specific to the project and person: the same person may get funded with a different project, or a different person may get funded for the same kind of project.
  • The absolute number of careful long-term EA funders is still low, but it has been growing over the past years. Extrapolating from that, it seems plausible that the funding situation in EA will be excellent in the years to come.
  • I believe (and others at EA Funds agree) that novel projects often shouldn’t receive long-term funding commitments because it’s still unclear whether they will have much of an impact. At the same time, I am also keen to ensure that a project’s staff can feel financially secure. Based on this, my suggestion is that grantseekers ask for generous salaries over a short time frame, so they don’t have to worry about financial security but will also strongly consider discontinuing their project early on if it doesn’t bear fruit. And we should encourage grantseekers to do so.

Relevant for people trying to get funding for a project: 

People could consider writing up their project as a blog post on the Forum to see if they get any bites for funding. In general, I'd encourage people looking for funding to write up one-page summaries of what they'd like to get funded. Such a summary could include things like:

  • Problem the project addresses
  • Why the solution the project proposes is the right one for the problem
  • Team, and why they're well suited to work on this

I'd guess that if you write a post like this, quite a few people would be happy to read it and say whether it sounds like something they'd be interested in funding, whether they know anyone to pass it on to, or what more they'd need to know to fund it or pass it on. My perception is that currently, people feeling out a potential project and whether it could get funded are much more likely to ask to get on a call, which is far more time-consuming and doesn't let someone quickly answer 'this isn't for me, but this other person might be interested'.

How do you feel about there being very few large institutional donors in effective altruism? This seems like it could be a good thing as it allows specialization and coordination, but also could be bad because it means if a particular person doesn't like you, you may just be straight up dead for funding. It also may be bad for organizations to have >80% of their funding come from one or two sources.

Some quick thoughts:

  • EA seems constrained by specific types of talent and management capacity, and the longtermist and EA meta space has a hard time spending money usefully
  • In this environment, funders need to work proactively to create new opportunities (e.g., by getting new, high-value organizations off the ground that can absorb money and hire people)
  • Proactively creating such opportunities is typically referred to as "active grantmaking"
  • I think active grantmaking benefits a lot from resource pooling, specialization, and coordination, and less from diversification [edit: I think active grantmaking relies on diverse, creative ideas, but can be implemented within a single organization]
  • Likewise, in an ecosystem that's overall saturated with funding, it's quite likely that net-harmful projects receive funding; preventing this requires coordination, and diversification can be bad in such a situation
  • Given the above, I think funder coordination and specialization will have large benefits, and think the costs of funder diversification will often outweigh the benefits
  • However, I think the optimum for the EA ecosystem might still be to have 3-5 large donors instead of the status quo of 1-3 funders (depending on how you count them)
  • I think small and medium donors will continue to play an important role by funding projects they have local/unique information about (instead of giving to EA Funds)

(Thanks to some advisors who recently helped me think about this.)

Thanks for elaborating! 

> the optimum for the EA ecosystem might still be to have 3-5 large donors instead of the status quo of 1-3 funders

Agree on that. One additional thought that came to mind: if there are indeed 3-5 large donors but they (unknowingly) rely on the views of the same expert for grants in a certain field (e.g. climate advocacy), then Peter's concerns still apply.

If you have time to elaborate (no worries if not), then I'd be curious about the minimum number of field experts that are consulted when making EA Funds grant decisions. I understand that this is also costly as it takes time for those experts to understand and evaluate new approaches, so I don't know what the optimum here should be. Maybe ask 2-3 experts for every new project, and in case they disagree, ask 1-2 more experts?

There's no strict 'minimum number': sometimes the grant is clearly above or below our bar and we don't consult anyone, and sometimes we're really uncertain or in disagreement and end up consulting lots of people (I think some grants have had 5+).

I will also say that each fund is somewhat intentionally composed of fund managers with somewhat varying viewpoints who trust different sets of experts, and the voting structure is such that if any individual fund manager is really excited about an application, it generally gets funded. As a result, I think in practice, there's more diversity in what gets funded than you might expect from a single grantmaking body, and there's less risk that you won't get funded just because a particular person dislikes you.

What Asya said.

I'd add that fund managers seem aware that it's bad if everyone relies on the opinion of a single person/advisor, and they generally seem to think carefully about this.

That's great to hear; I did not know that.

Thanks for elaborating! Your process seems robustly good, and I appreciate the extra emphasis on diverse viewpoints & experts. 

I have a clarification question: How do you define coordination in this context? Could you give a few concrete examples of coordination?

Again, some fairly quick, off-the-cuff thoughts (if they sound wrong, it's possible that I communicated poorly):

  • Avoiding duplication of effort. E.g., lots of grantees apply to multiple funders simultaneously, and in Q4 2020, 3 grants were approved both by LTFF and SAF/SFF, creating substantial unnecessary overhead.
  • Syncing up on whether grants are net negative. E.g., we may think that grant A is overall worth funding, but has a risk of being net-negative, so would err on the side of not making it (to avoid acting unilaterally). If we talk to other grantmakers and they agree with our assessment after careful investigation of the risks, we can still go ahead and make the grant. Similarly, we may think grant B doesn't have such a risk, but by talking to another grantmaker, we may learn about an issue we were previously unaware of, and may decide not to make the grant.
  • Similar to the above, syncing up on grants in general (i.e., which ones are a good use of resources, or what the main weaknesses of existing organizations are).
  • Joining forces on active grantmaking. E.g., another funder may have some promising ideas but not enough time to implement them all. EA Funds may have some spare resources and a comparative advantage for working on a particular one of those ideas, so we can go ahead and implement it, receiving input/mentorship from the other funder that we wouldn't otherwise have received.
  • Generally giving each other feedback on approach and prioritization. E.g., we may decide to pursue an active grantmaking project that seems like a poor use of resources, and other grantmakers may make us aware of that fact.

Thanks, this was very helpful! 

From where I'm coming from, having seen bits of many sides of this issue, I think the average quality of donors matters more than their quantity.

Traits of mediocre donors (including "good" donors with few resources):
- Don't hunt for great opportunities
- Produce results with high amounts of noise/randomness
- Are strongly overconfident in some weird ways
- Have poor resolution, meaning they can't choose targets much better than light common-sense wisdom would
- Are difficult, time-consuming, and opaque to work with
- Are not very easy to understand or predict

If one particular person not liking you for an arbitrary reason (uncorrelated overconfidence) stops you from getting funding, that would be the sign of a mediocre donor.

If we had a bunch of these donors, the chances of getting funded would go up for some nonprofits. Different donors would be overconfident in different ways, leading to more groups falling above or below different bars. Some bad nonprofits would be happy, because the noise could increase their chances of getting funding. But I think this would be a pretty mediocre world overall.

Of course, one could argue that a particular donor base isn't that good, so more competition is likely to result in better donors. I think competition can be quite healthy and result in improvements in quality. So, more organizations can be good, but for different reasons, and only insofar as they result in better quality.

Similar to Jonas, I'd like to see more great donors join the fray, both by joining the existing organizations and helping them, and by making some new large funds.

Thanks a lot for your efforts in making EA Funds as flexible and valuable as possible, and also in making sure everyone is aware of them. Really appreciate it!

I think this could go a long way in realizing innovative ideas and helping people get started with their high-impact careers & organisations.
