This is a Draft Amnesty Week draft. It may not be polished, up to my usual standards, fully thought through, or fully fact-checked. 

Commenting and feedback guidelines: 

I'm posting this to get it out there. I'd love to see comments that take the ideas forward, but criticism of my argument won't be as useful at this time, in part because I won't do any further work on it.

This is a post I drafted in November 2023, then updated for an hour in March 2025. I don’t think I’ll ever finish it, so I am just leaving it in this draft form for draft amnesty week (I know I'm late). I don’t think it is particularly well calibrated; it mainly just makes a bunch of points that I haven’t seen assembled elsewhere. Please take it as extremely low-confidence: it is unlikely that this post describes these dynamics perfectly.

I’ve worked at both EA charities and non-EA charities, and the EA funding landscape is unlike any other I’ve ever been in. This can be good — funders are often willing to take high-risk, high-reward bets on projects that might otherwise never get funded, and there is significantly less friction in getting funding.

But there is an orientation toward funders (and in particular toward staff at some major funders) that seems extremely unusual for charitable communities: a high degree of deference to their opinions.

As a reference, most other charitable communities I’ve worked in have viewed funders in a much more mixed light. Engaging with them is necessary, yes, but usually funders (including large, thoughtful foundations like Open Philanthropy) are viewed as… an unaligned third party who is instrumentally useful to your organization, but whose opinions on your work should hold relatively little or no weight, given that they are a non-expert on the direct work, and often have bad ideas about how to do what you are doing.

I think there are many good reasons to take funders’ perspectives seriously, and I mostly won’t cover these here. But, to name a few, I think it is genuinely the case that it could be good to listen to grantmakers because:

  • They are looking at the forest, and not the trees: they’re thinking about how your work fits into an ecosystem, and not just your project. They are also usually somewhat more focused on long-term goals rather than short-term success.
  • They are comparing your project to others, and hopefully making good judgments on how your project stacks up against alternate options.
  • They aren’t you, so they aren’t inherently biased towards thinking you are going to successfully do the things you expect to do.
  • They’ve thought a lot about the issue you’re working on (hopefully!) and that might mean they have useful input.

But, I’m not going to focus on the upside, because I think the default state in the EA ecosystem right now is to take the upside too seriously — deference to the opinions of funders is often the default. This post also simplifies a bunch of really complicated issues. Some funders might be better than others! And deference varies in degree!

Deference is everywhere

I’m not going to make a particularly in-depth point that deference is the default, because it seems widely known. Here are things I’ve observed in the last few years:

  • An organization made a major strategic pivot based on a handful of sentences of feedback from a funder — maybe 3 sentences of writing at most. No conversation with the funder or anyone else occurred before this pivot.
    • Another organization I know of appeared to make a similar pivot based on what seemed like equally little feedback, though I had less insight into their decision-making process.
  • A major policy funder in one EA cause area is seen as a direct detriment to progress on policy work, yet demands to be deeply engaged because they are funding it. No one does anything about this, because of that funding.
  • I’ve seen funders push heavily for projects that everyone involved thought were a bad idea; because the funder thought they were a good call, the projects went ahead anyway.
  • I was told explicitly, on two occasions, by two different people in leadership positions in EA organizations, things along the lines of, “I don’t think I fully grasp this argument, but [funder] thinks it, and seems to think about this a lot, so I’ve updated heavily toward their position.” In both cases, this was highly consequential as a decision.
  • A major grantmaker in an EA cause area relied primarily on another funder's opinions when conducting “evaluations”. The end result was clearly bad: the programs they were funding didn’t end up getting anything done, for reasons that were obvious to everyone on the inside ahead of time.

I recognize these are vague, primarily to preserve the confidentiality of the groups/people involved. But the phenomenon seems relatively common, and situations like the above seem to be regularly discussed.

Funders often lack information you have access to

A fairly common view within EA is that funders have access to more information than you might have as an individual. This is probably true in a bunch of important ways — funders have heard of many different ways of doing direct work from different applicants, and probably talk to lots of different people in general. They also probably tend to talk to people globally, while an organization might just network with people in their region or ecosystem.

But this misses a critical detail: funders also lack information you have access to. In particular, they often lack negative information about projects and people.

 

Funders don’t hear as many negative things about projects

Funders, especially monopolistic funders, hear a lot about either how well things could go or how well they’ve gone in the past. Organizations providing information to funders have strong incentives not to accurately report downside risks, realized failures, or past mistakes. With a fairly monopolistic funder like Open Philanthropy, the stakes are even higher: if a group messes up a pitch or conversation with Open Philanthropy, its entire staff could lose their jobs and the project could stop. Organizations’ incentives are heavily skewed toward providing distorted, inaccurate information about their projects or past work.

I think the degree to which this distorts the information funders get varies by EA cause area — I'd guess it is more pervasive among animal welfare and global health organizations (more traditional) than in the more GCR-inclined communities (more likely to be comfortable doing something negative for the project if it is better epistemically), but I suspect this dynamic is widely distorting funders’ views of projects across EA cause areas. I’m not suggesting that funders are being intentionally misled — most of the distortion comes from how things are framed, or from what information is or isn’t presented.

Additionally, funders don’t hear accurate negative impressions of projects from other projects. There are social incentives for individuals not to provide accurate pictures of other groups — you don’t want to seem uncooperative, or like you’re trying to negatively portray a competitor for funding to improve your own position. My impression is that in multiple instances this has led to organizations that are better at “playing politics” with funders getting a lot more funding relative to their impact/effectiveness as perceived by others in their ecosystem.

My own experience doing animal advocacy has consistently been that achieving things like corporate campaign victories isn’t the biggest barrier to success — coordinating with other groups is. Often, a group will be involved that has a reputation among its peers for being disruptive and unproductive, or for taking credit for things it didn’t achieve. But because of these dynamics, that reputation doesn’t reach funders. Or, ecosystem strategy will be dictated by the people funders trust: their relationship with a funder gives them leverage over other groups, so even when they make bad calls, they get their way across the ecosystem.

 

Funders don’t hear negative things about people

Similarly, funders get far less negative information about individuals than people in in-group gossip circles do, for reasons similar to the above. This is probably good in some respects — funders are insulated from false or out-of-context rumors. But they also don’t seem to hear things that are widely known and useful. For example, my understanding is that pervasive sexual harassment by some individuals was widely known among employees at many animal welfare organizations, especially senior staff, but that information didn’t reach major funders for several years.

Funders often don’t share your values

Funders don’t share your values and often don’t hold the values they publicly state.

The most prominent example of this in EA is Open Philanthropy. There broadly seems to be an impression among EAs that Open Philanthropy is a highly rational, welfare-optimizing foundation. But their revealed actions consistently demonstrate that, while they might be highly aligned with EA in many regards, they regularly depart from what seems to be EA consensus.

Examples of their imperfect alignment with the EA community as a whole include their relatively minimal focus on animal welfare, their past focus on criminal justice reform, and more. This is fine! Open Philanthropy can have the values it wants to have as a private grantmaking entity and make whatever grants it wants! However, there seems to be a widespread impression that Open Philanthropy is directly responsive to the EA community and EA values — this doesn’t seem true (see past discussion on this here).

This shouldn’t be a major issue, but because Open Philanthropy’s actions are often taken as guidance for the community, there is some level of risk for the community as a whole. Open Philanthropy seems much more like an adjacent, but only somewhat-aligned, organization — they aren’t focused directly on impact from a pure EA lens, but are instead doing a broad suite of things, some of which might be considered “EA” while others might not be.

Funders have experience in grantmaking. That is different from experience doing the work.

Funders have skills in grantmaking. This is likely very different from experience in direct work. A given employee at an EA-aligned organization will often have significantly more experience in doing the work they are doing than an employee of a funding organization.

This again seems like a fine division of labor. However, given that funders often give active feedback on the projects they fund or even directly shape the strategies of groups, experience with the work itself could be valuable.

For example, the EA Animal Welfare Fund has only ~1.5 of its 6 fund managers with any experience in direct animal welfare work (sorry in advance if my career judgments are wrong). While I think the EA AWF does great work, I also think it makes a lot of wasted grants that it wouldn’t make if more of its grantmakers had some level of direct experience in animal advocacy.

Grantmaking is a different skill than doing direct work, and just because a funder has the opportunity to evaluate many proposals for direct work doesn’t mean that they will do a better job, especially if they aren’t drawing on their own expertise.

There are also obviously ways in which having experience in direct work is itself biasing. I’m not sure exactly how to balance this. But my general experience is that, at least within animal welfare funding, the grantmakers are currently worse at evaluating projects than many people I know doing direct work. This might vary a lot by cause area, and it’s unclear to me whether the average grantmaker is worse than the average person doing direct work in animal welfare. But it is the case (in my opinion) within animal welfare that the best grantmakers seem worse at evaluating projects for impact than the best direct workers. Edit to add: But it doesn't seem ideal for the best direct workers to give up their work to become grantmakers either.

What can we do to make this better?

March 2025 note: I never finished anything past here.

  • More funders
  • More regranting and projects that distribute grantmaking decision-making across more people
  • Less deference to grantmakers on strategy
  • More deference to (some) organizations
    • This has lots of issues though, like relying on information from ineffective organizations
  • Better and more evaluators
  • More pipelines to grantmaking for more talented people

There are lots of issues with over-updating on this!

  • Lots of direct work organizations might also have bad ideas about how to do the work.
  • People involved in direct work might be biased toward their own friends, interventions they've tried, or their own values.
  • Organizations are thinking about different goals (e.g. long-term survival) than grantmakers, and might not focus on pure impact.
Comments

I think this is a significant issue, though I imagine a lot of this can be explained more by the fact that OP is powerful than that it is respected. 

If your organization is highly reliant on one funder, then doing things that funder regards as good is a major factor that will determine if you will continue to get funding, even if you might disagree. So it could make a lot of sense to update your actions towards that funder, more than would be the case if you had all the power.

I think that decentralizing funding is good insofar as the nonprofit gets either more power (to the extent that this is good) or better incentives. There are definitely options where one could get more funding, but that funding could come from worse funders, and then incentives decline.

Ultimately, I'd hope that OP and other existing funders can improve, and/or we get other really high-quality funders. 

So it could make a lot of sense to update your actions towards that funder, more than would be the case if you had all the power.

 

That makes a lot of sense. However, updating actions toward a funder because of their power is one thing; updating beliefs is another. 

So there are several questions lurking for me here -- you mentioned one, whether deference to OP is "explained more by the fact that OP is powerful than that it is respected" (the true cause of deference). But the other question is what people tell themselves (and others) about why they defer to OP's views, and that could even be the more important question from an epistemic standpoint.

If Org A chooses to do X, Y, and Z in significant part because OP is powerful (and it would not have done so otherwise), it's important for Org A to be eagle-eyed about its reasoning (at least internally). Cognitive dissonance reduction is a fairly powerful force, and it's tempting to come around to the view that X, Y, and Z are really important when you're doing them for reasons other than an unbiased evaluation of their merits.

One could argue that we should give ~0 deference to OP's opinions when updating our viewpoints, even if we alter our actions. These opinions already get great weight in terms of what gets done for obvious practical reasons, so updating our own opinions in that direction may (over?)weight them even more. 

Moreover, OP's views probably influence other people's views even if they are not consciously given any weight. As noted above, there's the cognitive dissonance reduction effect. There's also the likelihood that X, Y, and Z are getting extra buzz due to OP's support of those ideas (e.g., they are discussed more, people are influenced by seeing organizations that follow X, Y, and Z achieve results due to their favorable funding posture, etc.). Filtering out these kinds of effects on one's nominally independent thinking is difficult. If people defer to what OP thinks on top of experiencing these indirect effects, then it's reasonable to think they are functionally double-counting OP's opinion.

That roughly sounds right to me. 

I think that power/incentives often come first, then organizations and ecosystems shape their epistemics to some degree in order to be convenient. This makes it quite difficult to tell what causally led to what. 

At the same time, I'm similarly suspicious of a lot of epistemics. It's obviously not just beliefs that OP likes that will be biased to favor convenience. Arguably a lot of these beliefs just replace other bad beliefs that were biased to favor other potential stakeholders or other bad incentives. 

Generally I'm quite happy for people and institutions to be quite suspicious of their worldviews and beliefs, especially ones that are incentivized by their surroundings. 

(I previously wrote about some of this in my conveniences post here, though that post didn't get much attention.)

Draft amnesty equivalent of a comment (i.e. I haven't put much thought into it.) I really enjoyed this post and agreed with a lot of what is in there. 

For context, I manage a modest-sized grant portfolio in the animal welfare space and think a lot about these (that is, my) shortcomings. 

As an example, I had a meeting with a grant recipient last week where they explained that one of their programs wasn't working out as planned. It was refreshing to hear this kind of honesty from a grant recipient because it is so rare. 

Re experience outside of grant making, I have tended to volunteer as much as possible, but that is a limited substitute for all that you learn in paid campaigning. 

I'd be interested to hear if anyone else has ideas for how to address the problems Abraham describes here. 

As someone who runs an org that didn't start work in the "EA world" and only has a minority of funding through EA circles, this is hugely eye-opening and even a bit bizarre to me. I can scarcely believe those stories above. I can hardly think of a situation where I would "defer to a funder". After running an org for 7 years, and working in my field for 11, why on earth would I "defer" to a group that knows far less about what we do than our team? 

The funding situation among philanthropies outside of EA is very different. Funders assess you and your org through what is usually a long and tortuous process, then see if you fit. Usually we don't get much feedback. That said, I've had lots of great conversations and suggestions from funders, many of which we have implemented, but I've never had one funder ever try and suggest a major change of course, nor make funding contingent on changing anything that's not really minor. 

Interestingly, outside of EA in the not-for-profit world, "deferring to funders" might be expressed another way: "chasing the money", and it is often the sign of a weak and ineffective NGO. Big NGOs like Save the Children, World Vision, and perhaps even CHAI shift their focus like the wind to access new pots of money. They then often roll out programs they don't have expertise in, because they just aren't set up to do that kind of work, spending a lot of money setting up new programs rather than scaling and refining what they do best. 

On a related note, I've got big concerns about orgs "pivoting" and expanding their programs outside their usual core work to access GiveWell projects and money. There's a lot of danger here, not only of lower-quality or less efficient work, but also of mission creep and losing focus on core work. I might write more about this at some point.

Hi Nick, thanks for your interesting comment. I'm not sure how to read this particular part though: 

[...] I've never had one funder even try and suggest a major change of course, nor make funding contingent on changing anything.

For clarification, are you saying there is a difference between these two scenarios below, or are they just different ways of phrasing the same thing?:

  • a funder makes funding contingent on changing something
  • a funder decides not to fund your org, and informs you what made them make that decision (with the implication that you could change that thing, and get funding next time)

My best guess is that you do see a difference: in the former case, the funder is more explicitly requesting a change, and perhaps they are also your main/only funder, so you have no choice about whether to make the change if you want to continue operating. Is that right? (Edit: or perhaps your emphasis is on whether the suggestion is to make a major change, one too big for the org to competently undertake.)

I ask because it is counterintuitive to me to think that the below scenario is preferable, because it seems to involve withholding useful information -- but perhaps it could be considered worth it, in order for the funder to avoid creating the incentive to just "chase the money":

  • a funder deciding not to fund your org, and not telling why they made that decision.

Outside of EA, having one major funder is pretty rare. And if you did have no choice but to change or die, I would suggest the best option is often to die, or at the very least go back to the drawing board and consider your assumptions and advantages. The question isn't "could" your org make the change, but more does it really make sense to do something completely different?

When we get rejected by non-EA funders, it's usually for things like:

- Not much information so we don't really know (most common)
- You don't fit the kind of things we fund
- You are too early or too late stage
- We don't believe what you're doing is as good as community health workers/systematic change/xxxxx
- You're not using enough tech

Funders don't explicitly request large changes because that would be seen as massive overreach.

A funder suggesting an org make a major pivot doesn't make much sense to me. Then you're not even really the same org anymore. I think you'd usually be better off shutting the org down and starting again if you want to wildly change what you do. If you do change hugely, you're something new, with no track record or experience. Outside the EA world, funders mostly fund us for reasons like:

a) Think our model makes high impact 
b) Trust our org's team and track record doing what we do
c) Think our finances make sense
d) Talk to other funders or assessors who recommend us
e) It fits their (often quite narrow) funding criteria

That's a bit scattergun but hope it helps.
 

This is a fascinating topic, and I truly appreciate you having the courage to bring it up, Abraham. More people in this forum should be open to discussions like this.
As someone who has worked in fundraising for nearly a decade, I share many of your perspectives and wanted to contribute my thoughts as well.

First, I completely agree that a disproportionate level of deference is given to a handful of major funders. In my view, the primary reason for this is the lack of funding diversity. When 30%–50% (or more) of an organization’s revenue comes from just a few key funders, it's almost inevitable that their opinions will heavily influence strategic decisions. In many cases, this isn't just a preference—it's a financial necessity. However, I also believe that funders' recommendations should be seen as valuable guidance rather than directives that must be followed unquestioningly.

Second, why do organizations give these funders so much weight? It’s not just about financial power. Many organizations trust that these funders, given their experience and broad oversight, are well-positioned to provide informed opinions, whether or not their team has direct experience in the field. Ideally, these insights should be grounded in objective data rather than personal biases or professional relationships.

Third, I do think there’s a kind of "inner circle" of influencers who shape the broader conversation—especially in fields like animal welfare. This influence is likely exacerbated by the limited number of evaluators and the lack of diverse methodologies for assessing interventions, new organizations, and meta-level work. Without a variety of evaluative perspectives, the same voices tend to dominate.

That said, I’m really encouraged to see more diverse perspectives emerging in this forum. I look forward to more thought-provoking discussions like this in the future!
(Disclaimer: The views I express here are mine alone and do not necessarily reflect the views of my employer).

Executive summary: The EA community exhibits an unusual degree of deference to funders, leading to strategic shifts based on minimal feedback, distorted information flows, and misaligned incentives, which could be mitigated by diversifying grantmaking structures and reducing automatic deference to funders' opinions.

Key points:

  1. Unusual deference to funders – Unlike other charitable communities, EA organizations often treat funders’ opinions as highly authoritative, even when they lack direct expertise in the work being funded.
  2. Funders lack critical information – They often receive incomplete or distorted data, particularly regarding negative aspects of projects, due to incentives for grantees to present overly positive narratives.
  3. Misalignment of values – Major EA funders, such as Open Philanthropy, do not always align with EA consensus, yet their funding choices often set de facto strategic priorities for the movement.
  4. Grantmaking differs from direct work – Funders typically specialize in evaluating grants rather than executing projects, leading to potential misjudgments in funding decisions.
  5. Potential solutions – Reducing deference to funders, increasing the number of funders and evaluators, and distributing grantmaking decisions more widely could improve funding quality and ecosystem resilience.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
