
I’ve been a part-time grantmaker since January 15th or so, first (and currently) as a guest manager on the Long-Term Future Fund; more recently, I’ve done non-trivial grantmaking elsewhere as well.

In doing so, I learned a number of useful but not fun lessons, some of which may be useful to others on this forum. I mostly learned these lessons by consulting more senior grantmakers and other advisors, and secondarily by thinking through these problems abstractly, rather than learning them the hard way through contact with reality on my own grants.

Standard disclaimers apply: all views are my own. I do not speak for any of the people who advised me on grantmaking. I do not speak for the Long-Term Future Fund, other grantmakers, or my full-time employer (Rethink Priorities). 

I believe my points are more true for the longtermist and meta spaces (I have zero experience with animal or global health grantmaking), and more true for longtermism than for meta (in particular, the points about feedback are directionally true for meta, but less so than in longtermist grantmaking). Most of my actual grants have been in forecasting, but I do not believe my points below are more true for forecasting grants than for other subfields of longtermist grantmaking.

In this article, I tried my best to convey knowledge that is both true and at least somewhat novel to me/this Forum. In doing so, I may say things that are systematically not-yet-said because others believe these messages are difficult to hear and to convey publicly with the appropriate level of care and nuance. I tried my best to be both honest and kind, but I likely still screwed up the balance in various places. I ask for your patience.

Summary

Important lessons

  1. (Refresher) The top grants are much, much better than the marginal grants, even ex ante
  2. All of the grantmakers are extremely busy
  3. Most (within-scope) projects are rejected for personal factors or person-project fit reasons
  4. Some of your most impactful work in grantmaking won’t look like evaluating grants
  5. Most grantmakers don’t (and shouldn’t) spend a lot of time evaluating any given grant
  6. It’s rarely worth your time to give detailed feedback

Less Important Lessons

  1. The value of time for people in EA is incredibly high
  2. The direct counterfactual impact of grant evaluation is lower than I initially would have guessed
  3. People are often grateful to you for granting them money. This is a mistake.
  4. Conflicts of interest are unavoidable if you want to maximize impact as a grantmaker in EA.
  5. Acceptance/Rejection rates can easily be extremely misleading

(Refresher) The top grants are much, much better than the marginal grants, even ex ante.

I think it’s well understood in EA that for most important values of X in the EA space (charitable interventions, organizations, careers, etc.), the best X are much more impactful than the marginal X.

I think I have internalized this more since I started grantmaking than before. In particular, I now understand that (from the perspective of many grantmakers) their top grants are ex ante much more impactful than their marginal grants. This lesson has been publicly discussed by other grantmakers; see, e.g.:

I think the top decile is at least an order of magnitude more impactful (per dollar) than the bottom decile grantmaking that we’re doing

Claire

I think that more than half of the impact I have via my EAIF grantmaking is through the top 25% of the grants I make. And I am able to spend more time on making those best grants go better, by working on active grantmaking or by advising grantees in various ways.

Buck

In practice, this often means that your highest-impact action as a grantmaker is to figure out ways to make more excellent high-impact grants, and to make sure those grants go well (rather than, e.g., making more grants overall, marginally improving marginal rejects into marginal acceptances, or turning marginal acceptances into median grants).

Internalizing this dynamic well drives many of the choices that might be surprising to readers of the Forum, some of which I’ll cover below.

All of the grantmakers are extremely busy

As a refresher, all grantmakers at the Long-Term Future Fund (and I think all the other EA Funds grantmakers as well) are part-time. What may not be obvious is that they’re usually very part-time. I believe my colleagues at LTFF work many hours in their day jobs (counting meetings and other shallow work) as the norm rather than the exception, and the LTFF grantmaking is on top of that.

(I personally do not, for personal psychological reasons. Unfortunately, this still does not leave me with ample productive high-energy time for grantmaking.)

I don’t have a good sense of how much full-time grantmakers at places like Open Phil and the Future Fund work, but I would not be surprised if they have similarly packed schedules, due to the sheer volume of grants.

Understanding and internalizing this well drives some of my other lessons here, including the extremely high value-of-time for people doing EA work.

Most (within-scope) projects are rejected for personal factors or person-project fit reasons

“Within-scope” is an important disclaimer here: I would not be surprised if more than half of LTFF grants are desk-rejected because they have no plausible story for tractably improving the long-term future. 

But of the projects you are seriously considering at all, most of the time, if you reject a project, it is because you don’t think the person(s) applying are a good fit for the role. This could be due to concerns about general competence, poor project-specific competence and fit, or poor vetting on our end.

Unfortunately, even if the problem is solely due to project-specific competence and fit, there aren’t many tractable and high-EV levers for grantmakers to pull. If somebody applies with a poorly scoped research project, they may well be an amazing operations hire or entrepreneur but unfortunately a) the grantmaking process is not set up to evaluate this well, and b) we are specialized in evaluating grants, not in giving open-ended career advice.

(This makes feedback on projects pretty hard as well as awkward, which I’ll discuss later as well).  

I think this is a major misconception in the EA community. Many times, I hear through the grapevine that the LTFF doesn’t do “grants in X cause area or Y location.” This is rarely correct unless “X” means “traditional neartermist charities” or “something covered very well by another EA grantmaker.” Instead, it’s much more likely that we did not think any of the grantseekers (so far) in X cause area were a good fit for this work. (Additionally, sometimes nobody applies to us with grants in that area, and sometimes we make grants in a specific area but none of the grantees want to be public.)

The flip side here is that from a grantseeker perspective, you shouldn’t hesitate to apply for grants from EA funders just because other grantseekers in your cause area were rejected!

Some of your most impactful work in grantmaking won’t look like evaluating grants

One naive way to model the opportunity cost of evaluating grant X is to ask whether the time spent there is wiser than spending more time evaluating grant Y. But this narrows your options too far. Recall that your job as a grantmaker is ultimately to make the world as much better as possible through your actions, and more proximally to make more extremely impactful projects exist and be really good.

Much of your impact can come from finding impactful new people and projects, or from making them happen in the first place. For example, you can do this by being a relative expert in a space and spotting new opportunities (in my case, in forecasting), or by knowing talented EA-adjacent folks and convincing them to explore or start new EA projects.

More broadly, instead of evaluating other grants, your opportunity costs could instead look like:

  1. Designing structures to make it easier for the best grants to be funded
    1. If the most impactful projects are much more impactful than other projects, reducing time costs, attentional costs, and stress of your top grantees is just extremely important
  2. Advising the very best applicants and grantees
  3. Coming up with novel funding mechanisms, advertising, broader solicitations, etc., such that top grantseekers who otherwise wouldn’t apply to you would now do so.
    1. Example 1: Future Fund’s attempts at various different solicitations (open call, regranting program, whatever their next thing is, etc.)
    2. Example 2: Open Phil’s various targeted Requests for Proposals to get a specific slice of novel proposals that they believe can be high impact
  4. Active grantmaking
    1. That is, personally coming up with the projects that you want to see in the world, and then persuading other people (through a combination of your judgment and your pool of cash) to implement your projects
  5. Building a “brand” as a grantmaker that’s a) generally positive and b) helps your best grantseekers self-select in.
    1. (My personal hot takes, here and below: I didn’t check with others about what they think the brands should be, or what others think they are; I’m just saying what I think they are.) For example, LTFF’s brand (built before I joined) is roughly “be very willing to fund weird things that the grantmakers’ inside views believe are really impactful for the long-term future.”
      1. LTFF specifically, and EA Funds in general, also have had a brand of extreme transparency and willingness to publicly document as much of their decisions as possible, but due in part to intentional prioritization choices, I think this brand is going away.
    2. Open Phil’s brand is roughly “classic EA: we are careful thinkers who thought really hard about what the best opportunities are, and we genuinely believe after careful consideration that these are the best giving opportunities. We are not afraid of taking risks with hits-based giving, but these risks are careful and measured.”
    3. Future Fund’s brand is roughly “be bold, be daring. Be willing to throw lots of money on really big and high-impact things.”
    4. (I’m less sure about the brands of places like Longview and SFF).
      1. Though SFF has some cool innovations going on in decision-making etc., like the S-Process.
  6. Understanding the world better and building a deep worldview so you can spot stellar grant opportunities that otherwise would not be on your radar
    1. I think of worldview investigations, and a lot of Holden’s writings, as examples of this.
  7. Other creative ways to get the highest impact grants out
    1. ??? Ideas welcome: the sky’s the limit
  8. Spending less of your time on grantmaking and more on direct work

Most grantmakers don’t (and shouldn’t) spend a lot of time evaluating any given grant.

A common mistake most junior grantmakers make (myself included) is to spend a lot of time agonizing over specific grants. Because of the previously noted lessons, this is rarely the correct course of action. 

To spell it out in detail: there are good grants, bad grants, and marginal grants. You shouldn’t spend too much time on good grants, because they likely should be funded anyway. You shouldn’t spend too much time on bad grants, because they likely shouldn’t be funded anyway. So this leaves you with marginal grants. Marginal grants are marginal for one of two reasons:

  1. They are either really high positive impact or really high negative impact, and you’re not sure which.
  2. They are just not very high impact overall, and after further investigation, you’d come to the conclusion that they’re either just above or just below your threshold.

#1 is a good reason to spend a lot of time investigating in detail. #2 is not. Unfortunately, most marginal grants look more like #2 than #1.

Actually, it’s worse than this: an additional assumption behind the value of investigating marginal grants in the #1 bucket is that further investigation can actually improve decision clarity. And the vast majority of grants just don’t look like this.
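
To make this concrete, here’s a minimal back-of-the-envelope sketch of the value-of-information logic above. Every number is a made-up assumption for illustration, not a real figure from any fund:

```python
# Back-of-the-envelope value of investigating a marginal grant further.
# Every number below is an illustrative assumption, not real fund data.

def net_value_of_investigation(p_flip, stakes, hours, value_per_hour):
    """Expected value of extra investigation, net of the time it costs.

    p_flip:         probability the investigation flips the funding decision
    stakes:         expected impact difference ($) between deciding right vs. wrong
    hours:          grantmaker hours the investigation takes
    value_per_hour: counterfactual value ($) of a grantmaker hour spent elsewhere
    """
    return p_flip * stakes - hours * value_per_hour

# Bucket #1: possibly very high positive or very high negative impact,
# and investigation can plausibly resolve which.
print(net_value_of_investigation(0.3, 500_000, 20, 1_000))  # 130000.0: dig in

# Bucket #2: just above or just below the funding bar either way.
print(net_value_of_investigation(0.3, 5_000, 20, 1_000))    # -18500.0: don't
```

The sketch shows that a decision plausibly flipping is not enough; the stakes of the flip also have to be large relative to the value of the grantmaker’s time, which is why #2-style grants rarely justify deep dives.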

Note that at least 3 of the above 4 lessons are necessary for the strength of this conclusion:

  1. If the top grants aren’t very far away (ex ante) from the marginal grant, then there’s more value in separating out “worth funding” vs “not worth funding” grants
  2. If the grantmakers aren’t extremely busy, then the marginal cost of further investigation time is lower
  3. If your opportunity cost of grant evaluation were just evaluating other grants, then your grant evaluation time could stretch to fill your (limited) grantmaking time. But unfortunately, it trades off indirectly against other things you can do to solicit the best grants. For example, the indirect opportunity cost of not spending time being creative with (e.g.) grant solicitations can be quite large.

I’m not happy about this set of choices. I consider this one of the most “unfun” lessons here. My own org has had trouble getting funding in the past, though my guess is that more consideration would not have helped its funding case much. At some level, I feel quite sad that there are all those people trying hard to improve the long-term future, and we’re not giving them the careful and serious due consideration that they in some sense rightfully deserve. However, my best guess is that more time delving into specific grants will only rarely actually change the final funding decision in practice.

And ultimately, nobody said that (consequentialist) morality had to be easy, or fair. It’s the moral patients that ultimately matter, not the feelings of the grantseekers or grantmakers. And if I forgo opportunities to make highly impactful grants for the sake of a vague sense of procedural justice, or fairness, then I would be acting wrongly. (A caveat here is that procedural justice or fairness can of course globally make sense if some applicants strongly prefer grantmakers who care immensely about these things, so it may make sense for some grantmakers or grantmaking agencies to honestly and publicly specialize in this.)

It’s rarely worth your time to give detailed feedback

I’m personally very pro-feedback, even compared to most people in EA. I try my best to give lots of feedback as a manager. In the past, I’ve been pretty outspoken about the value to the movement of giving feedback to near-rejects for EA org jobs.

But from a grantmaking perspective, detailed feedback is rarely worthwhile, especially to rejected applicants. The basic argument goes like this: it’s very hard to accurately change someone’s plans based on quick feedback (and it’s also quite easy to do harm if people overupdate on your takes just because you’re a source of funding). Often, changing someone’s plans enough requires careful attention and understanding, multiple followup calls, etc. And this time investment is rarely enough to turn a rejected (or even marginal) grant into a future top grant. Meanwhile, the opportunity cost is again massive.

Similarly, giving feedback on accepted grants can often be valuable, but it just isn’t high-impact enough compared to a) making more grants, b) making grants more quickly, and c) finding creative ways to get more of the highest-impact grants out. (An exception here may be giving advice and feedback to the top accepted grantees.)

I’m pretty sad about this. I really enjoy giving (especially positive) feedback to people. I don’t love the power dynamic inherent in the grantmaking relationship, but otherwise giving feedback is fun, I learn a lot in doing so, and also it feels like a basic sign of respect to acknowledge someone’s work and provide some opinions on how I subjectively think things can go better.

But ultimately, I have to be honest with myself: for any given grantee, most of the value I provide as a grantmaker is by recommending grants. As much as both they and I like to imagine that my feedback is valuable, ultimately they’re here for the money, and it would be a dereliction of duty to sacrifice making more grants in order to provide more feedback.

Caveat 1: This assumes that you don’t suck as a grantmaker

A major complication with the narrative above is that it assumes you don’t suck as a grantmaker. Unfortunately, even if you do suck as a grantmaker, the actions you ought to take don’t diverge much from the above. For example, if you suck as a grantmaker, you don’t suck much less by spending a lot more time per grant, nor do you get much information about how much you suck. Instead, you get updates on the quality of your grantmaking via:

  1. Feedback from more senior grantmakers
  2. Feedback from others in the community
  3. Careful retrospective analysis about the impact of past grants

Similarly, there is not much value in providing feedback to (would-be) grantees, since having poor judgment about which grants are good likely correlates with having poor judgment about how to advise grantees.

Caveat 2: This assumes that your organization, movement, etc., isn’t awfully wrong at a high level

Unfortunately, this is again not something you can improve by looking at specific grants. Instead, it is best improved by understanding the world better, conducting worldview investigations, doing your own (or contracting out) cause prioritization research, careful empirical retrospective analysis (again), etc.

Some less important/salient lessons: 

  1. The value of time for people in EA is incredibly high
    1. Oftentimes in EA grantmaking, there are decisions where an hour of extra work can get you better decision quality on the allocation of thousands of dollars, but it’s still (rationally) not worth it to investigate further (see the arithmetic sketch after this list).
    2. I think the proper update is not that the grants aren’t valuable, but that your other actions are even more valuable, whether in grantmaking or other EA work.
  2. The direct counterfactual impact of grant evaluation is lower than I initially would have naively guessed
    1. This is because if you don’t approve the best grants, some other grantmaker likely will.
    2. But this is (mostly) salvaged by the impact of saving the time of more impactful people (including both grantmakers and grantseekers).
      1. So in the end, the naive calculation is closer to the true “impact equity” than measuring impact by direct counterfactualness.
    3. However, you can potentially have direct counterfactual impact with active grantmaking, for example by giving license to unusually competent people with the relevant skillsets on the periphery of the EA community to do EA work.
  3. People are often grateful to you for granting them money. This is a mistake.
    1. Most saliently, this is not my money.
    2. More subtly (but more importantly), we really don’t want to move to a culture of patronage and favors. Grantmakers should be understood as people who try their best to use all of the information resources at their disposal (including networks) to make the correct allocation of limited resources for the greater good, not people who reciprocate (selfishly) other actions with money.
      1. I think even if I were to be donating my own money, the attitude I’d like to have (and in fact the one I did have when e.g. doing my own relatively small Giving What We Can donations) is something like “I should view myself as one of the stewards of this capital. It's pooled up around us right now, but it belongs to the world.” But I feel this is even more true given that I was not the person to earn this money to begin with.
  4. Conflicts of interest are unavoidable if you want to maximize impact as a grantmaker in EA.
    1. In tension with the above point, the EA community is just really small, and the subcommunities of fields within it (AI safety, or forecasting, or local community building) are even smaller.
    2. Having a CoI often correlates strongly with having enough local knowledge to make an informed decision, so (e.g.) if the grantmakers with the most CoIs on a committee always recuse themselves, a) you make worse decisions, and b) the other grantmakers have to burn more time getting up to speed with what you know.
    3. I started off with a policy of recusing myself from even small CoIs. But these days, I mostly accord with (what I think is) the equilibrium: a) definite recusal for romantic relationships, b) very likely recusal for employment or housing relationships, c) probable recusal for close friends, d) disclosure but no self-recusal by default for other relationships.
    4. To avoid issues with #3, I’ve also made a conscious effort to do recusals in reverse: that is, to avoid becoming too friendly with grantees in the first place.
  5. Acceptance/Rejection rates can easily be extremely misleading
    1. The denominator of who applies to different funding agencies, and for different cause areas, and from different localities, can just vary massively in terms of person-specific and person-project fit factors.
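
To illustrate the value-of-time point in lesson 1 above with concrete arithmetic, here is a minimal sketch. The dollar figures are made-up assumptions, not real grant data:

```python
# Opportunity-cost comparison for one marginal grantmaker hour.
# All dollar figures are made-up assumptions for illustration.

grant_size = 20_000          # $ at stake in a marginal funding decision
improvement_per_hour = 0.02  # expected fractional gain in decision quality
value_of_extra_evaluation = grant_size * improvement_per_hour
print(value_of_extra_evaluation)  # 400.0, real money by everyday standards

# ...but if the same hour, spent advising a top grantee or soliciting new
# top grants, gives a 1% boost to a $200,000 grant going well:
value_of_alternative_hour = 200_000 * 0.01
print(value_of_alternative_hour)  # 2000.0, so the extra evaluation hour loses
```

Under these (made-up) numbers, skipping the extra evaluation hour is rational even though $400 of expected decision quality is left on the table.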

Conclusion

The above is a list of unfun lessons I’ve learned as a junior grantmaker, which might help plug a few gaps in the EA community’s public understanding of what the current funding landscape looks like. It’s meant to be an epistemic contribution rather than a motivational one. However, if I were to guess at reasonable calls-to-action that some readers ought to draw from the above, potential moves include:

  • All the grantmakers are extremely busy, and there is a lot of value to making better grants. Some people here may wish to skill up in grantmaking, and test their fit for this work, for example by a) identifying great projects in the world that can be done, b) finding and convincing great people to do such projects, and c) linking the projects and founders with the relevant EA funders.
  • Many people believe that the best grants (projects) are much better than marginal grants (projects), even within effective altruism or longtermism. So from the perspective of a would-be grantee, your goal should not be to “do a good enough job to get funding,” but instead to “aim really high and do a really fantastic job.” Maximize expected impact, don’t satisfice it.

Acknowledgements 

Thanks to EA Funds et al. for building grantmaking infrastructure and allowing me to test my fit and interests in grantmaking. Thanks also to the senior grantmakers and other advisors for helping me think through these questions. Very few of the ideas here are original to me. Thanks also to Asya Bergal, Caleb Parikh, Max Daniel, and other reviewers of this piece. I do not speak for anyone other than myself. Most mistakes are my own. Thanks to the funders who made this possible. And of course, thanks especially to all of the applicants for EA funding more broadly, who took a chance in pursuit of building something altruistic and great.

Comments

This got me wondering: how much agreement is there between grantmakers (assuming they already share some broad philosophical assumptions)?

Because, if the top grants are much better than the marginal grants, and grantmakers would agree over what those are, then you could replace the 'extremely busy' grantmakers with less busy ones. The less busy ones would award approximately the same grants but be able to spend more time investigating marginal grants and giving feedback.

I'm concerned about the scenario where (nearly) all grantmakers are too busy to give feedback and applicants don't improve their projects.

Doing random draws of the highest-impact grants and having a few grantmakers evaluate them without interacting seems like an easy (but expensive) test. I expect grantmakers to talk enthusiastically with their colleagues about their top grants, so grantmakers might already know if this idea is worthwhile or not.
But yes, if agreement is low, grantmakers should try to know each other's tastes, and forward grants they think another grantmaker would consider high impact. If grantmakers spend a lot of time developing specific worldviews, this seems like a more important thing to explore.
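
A cheap version of this test: have two grantmakers independently score the same random sample of applications, then compute a rank correlation. A minimal sketch with hypothetical scores (using scipy's spearmanr):

```python
# Hypothetical, made-up 0-10 "expected impact" scores from two grantmakers
# who independently evaluated the same random sample of eight applications.
from scipy.stats import spearmanr

scores_grantmaker_a = [9, 7, 3, 8, 2, 5, 6, 4]
scores_grantmaker_b = [8, 6, 4, 9, 1, 6, 5, 2]

rho, p_value = spearmanr(scores_grantmaker_a, scores_grantmaker_b)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
# High agreement, especially on the top-scored grants, would suggest busy
# grantmakers could delegate more evaluation without changing outcomes.
```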

It might be worth checking if people sometimes mistake high-impact grants for marginal grants. Another small random draw would probably be decent here too. (Sharing experiences of high-impact grants with your colleagues is a mechanism prone to a lot of selection effects, so it might be worth checking whether this causes considerable oversights.)

I expect none of this to be new/interesting for LTFF


(Feedback about writing style and content is much appreciated, as this is my first real comment on the EA forum)

I liked the comment. Welcome!

I think determining the correlation between the ex-ante assessment of the grants and their ex-post impact could be worth it. This could be restricted to e.g. the top 25% of the grants which were made, as these are supposedly the most impactful.

Randomization is definitely bold! 

This got me wondering: how much agreement is there between grantmakers (assuming they already share some broad philosophical assumptions)?

I wonder if "grantmakers" is the wrong level of abstraction here. I think LTFF grantmakers usually agree much more often than they disagree about top grants, and there is usually agreement about other grants too. I think (having not been involved in the selection process) the agreement is partly due to sharing similar opinions because we're (we think) correct, and partly because similar judgement/reasoning processes/etc. are somewhat selected for.

I suspect there are similar (but probably lower?) correlations with other "good" longtermist grantmakers who do generalist grant evaluation work. I think some longtermist grantmakers (e.g. a subset of Open Phil Program Officers) specialize really deeply in a subfield, such that they have (relatively) deep specialist knowledge of some fields and are able to do a lot more active grantmaking/steering of that subfield. So they'll be able to spot top grant opportunities that we normally can't.

Because, if the top grants are much better than the marginal grants, and grantmakers would agree over what those are, then you could replace the 'extremely busy' grantmakers with less busy ones.

I suspect we have different empirical views on how busy (or more precisely, how high the counterfactual value of time is for) the "less busy" good grantmakers. But in broad strokes, I think what you say is correct and is a direction that many funders are moving towards; I think the bar for becoming a grantmaker in EA has gone down a bunch in the last few years. E.g. 1) I don't think I would have qualified as a grantmaker 2 years ago, 2) Open Phil appears to be increasing hiring significantly, 3) there are many new part-time grantmakers with the Future Fund regranting program, etc.

The less busy ones would award approximately the same grants but be able to spend more time investigating marginal grants and giving feedback.

I agree that this would probably be better than the status quo. However, naively, if you set up the infrastructure to do this well, you'd also have set up the infrastructure to do more counterfactually valuable activities (give more grants, give more advice to the top grantees, do more active grantmaking, etc.).

Some feedback please, esp. if it's about the content, ToC, methods, etc

Maybe not detailed feedback, but I think you should give some feedback, especially to applicants who are particularly EA-aligned and have EA Forum-type epistemic and discussion norms.

I think you should also encourage applicants to briefly respond.

And ideally this should be added to the public conversation too.

Why? Because we are in a low-information space, EA is a question not an answer, and my impression is that we are very uncertain about approaches and theories of change, especially in the X-risk & LT space.

I don't think we make progress by 'just charging forward on big stuff' (not that you are advocating that). A big part of many of these projects, and of their impact, is 'figuring out and explaining what we should be doing and why'. So if the grant-making process can add substantial value to that, it's worth doing (if it has a high benefit/cost ratio, which I argue it does).

You may know and appreciate something the applicants do not, and vice versa. (Public) conversation moves us forward.

"But this is better done in more systematic ways/contexts"

Maybe, but ~often these careful assessments don't happen. I highly rate public feedback in academia as part of the review process where possible, because in academia people actually ~under-read other people's work carefully. (Everyone is trying to 'publish their own'.) In EA this is probably less of a problem (we have better norms), but still.

You, the grantmaker, have read at least some parts of their project and taken it very seriously in ways that others will not. And you have some expertise and knowledge that may be transferable, particularly given our low-information space.

But "It’s very hard to accurately change someone’s plans based on quick feedback"

That is OK. You don't need to change their plans dramatically. The information you are giving them will still feed into their world model, and if they respond, vice versa. And even better if it can be made public in some way.

But "It's about personal factors" (OK, that's ~different)

"No because of personal factors or person-project fit reasons" is probably the most common situation in a lot of cases.

I agree that this case and this type of feedback is a bit different, and does not pertain to my arguments above so much. Still, I think people would really appreciate some feedback here. What skills are they missing? What value have they failed to demonstrate? (Here an ounce of personalized feedback could be supplemented by a more substantial body of 'generalized feedback'.)

What level of feedback detail do applicants currently receive? I would expect that giving a few more bits beyond a simple yes/no would have a good ROI, e.g. at least having the grantmaker tick some boxes on a dropdown menu. 

"No because we think your approach has a substantial chance of doing harm", "no because your application was confusing and we didn't have the time to figure out what it was saying", and "no because we think another funder is better able to evaluate this proposal, so if they didn't fund it we'll defer to their judgment" seem like useful distinctions to applicants without requiring much time from grantmakers.

Buck

The problem with saying things like this isn't that they're time-consuming to say, but that they open you up to some risk of the applicant getting really mad at you, and have various other risks like this. These costs can be mitigated by being careful (e.g. picking phrasings very intentionally, running your proposed feedback by other people), but being careful is time-consuming.

One solution could be to have a document with something like "The 10 most common reasons for rejections" and send it to people with a disclaimer like "We are wary of giving specific feedback because we worry about [insert reasons]. The reason why I rejected this proposal is well covered among the 10 reasons in this list, and it should be fairly clear which ones apply to your proposal, especially if you go through the list with another person who has read your proposal."

Also 'no because my intuitions say this is likely to be low impact', and 'other'

But I agree that those four options would be useful -- maybe even despite the risk that the person immediately decides to try arguing with the grant maker about how his proposal really is in fact likely to be high impact, beneficial rather than harmful, and totally not confusing, and that the proposal definitely shouldn't be othered.

As mentioned in my post, "No because of personal factors or person-project fit reasons" is probably the most common situation in a lot of cases.

Still, you might be able to turn that into a few easy sentences. E.g. "We don't think you have enough expertise on the topic", "We don't think you have the right skills for the project; we think there are likely better candidates out there", "Your research project is poorly scoped", ...

These are just quick examples off the top of my head; they could likely be massively improved.

Even just sharing something in the rejection letter like

"Unfortunately, even if the problem is solely due to project-specific competence and fit, there aren’t many tractable and high-EV levers for grantmakers to pull. If somebody applies with a poorly scoped research project, they may well be an amazing operations hire or entrepreneur but unfortunately a) the grantmaking process is not set up to evaluate this well, and b) we are specialized in evaluating grants, not in giving open-ended career advice."

might be very helpful.

It's hard to say some of those without coming off as insulting.

I think it’s okay to come off as a bit insulting in the name of better feedback, especially when you’re unlikely to be working with them long-term.

Buck

If you come across as insulting, someone might say you're an asshole to everyone they talk to for the next five years, which might make it harder for you to do other things you'd hoped to do.

Not giving feedback on proposals is sometimes seen as insulting as well. We got rejected about 4 times by EA grants without feedback and we probably spent 50 hours writing and honing the proposals. Getting a "No" is harder to swallow than "No, because...". I wasn't insulted because I get all of the reasons for no feedback, but it doesn't leave you feeling happy about all the work that went into it. 

I also agree with various comments here that the ROI of very short feedback is likely very high, and I don't think it's a big time burden to phrase it in a non-insulting way. I'm going to reapply again in the upcoming months and it's likely we get rejected again. If I knew the reason why, I might not reapply, or reapply better, both of which would save the grantmaker considerable time (perhaps X more than writing one-minute feedback).

Hi, this is a ruthless comment because you said you spent a lot of time on this.

Note that no one likes me and I am not in any EA org or anything.

 

Feedback

Note the feedback you got here:

  • "Startups are pretty competitive. For me to put money into a business venture, I'd want quite a bit of faith that the team is very strong. This would be pretty high bar. From looking at this, it's not clear to me promising the team is at this point." source here
  • "I don't think the EA community is uniquely suited to answer your question. Whether this is a great startup idea or not is difficult for me to figure out and I think speaking to people in the startup and venture-capital community will get you better answers." source here

Frankly, these people are being nice. My remix/take on the above is:

  • There's a high bar for average startups trying to get money in the real world. We're not sure if EAs should fund average startups (that don't have some theory of change or direct externality), maybe even if they are led by aligned EAs. In this case, what you're presenting doesn't indicate it meets the bar of an average startup.

    EAs should definitely not fund below-average startups that are for-profit, because this would just fund lemons and things would run away with unaligned behavior.

 

Pitch isn't good and suggests underlying business is unpromising

Your "pitch" is really bad, even for a for-profit. It literally consists of saying that if you're very very successful, you will have lots of money to donate to EA.

 

This seems like an argument someone would say about any endeavor, profit or non-profit, so there's no content.

It is a form of the 1% fallacy, but even more extreme, as you are literally going through 0.01% scenarios, in addition to relying on network effects (which are both hit-and-miss, and can rationalize almost any behavior).

This is bad because it suggests the absence of any advisor reviewing this, or of experience presenting. Which, while not substantive, frankly is a large part of a startup.

In my opinion, when pitching or under high incentives to demonstrate latent traits that you could just list off, "absence of evidence is also evidence of absence". For example: if you had connections with previous online marketplaces, or even just a reasonable exit, or even reasonable online marketplace experience, you could have mentioned it. 

Like, every business, non-profit or project, at least has one or more "hypotheses" or "angles" or "unfair advantages" they try to argue they have.

 

More:

  • There are more subtle/ideological/opinionated points: of all the things that might be influenced by "motivated reasoning", becoming a powerful CEO "for altruism" seems like the canonical pattern.
    • So someone coming in and asking to be made CEO of a powerful online business empire, without a strong past or maybe a contract or limit on their profit, can make many people roll their eyes.
  • As a meta point, it's unclear if this comment reflects the same reasoning as what EA Funds believes (but maybe some of the above reasoning applies?). My guess is that a difference is that they focus on non-profit, directly (or meta-directly) impactful projects. Making a for-profit is possible, but they have a tighter focus.

Hi Charles,

Thanks for responding! I'm sure people like you for being ruthless (at least I do); that honestly helps a lot, so thank you!

I should start by clarifying that the EA Forum post is not the proposal that we put many hours into; that was probably written in about 30 minutes and checked by one EA, and it's not meant to be a full pitch, just an intro to what we do. We did get feedback on that (and I got a lot more later at EAG, even from some of the grantmakers that rejected us), but when I refer to getting no feedback I mean the rejections from grants with no feedback.

About your remix/take on the EA Forum post: I agree that EA shouldn't fund average startups and/or for-profits (I don't consider our venture a for-profit, though). I can't be an unbiased judge of whether we're average or below/above average, but I can send you the pitch deck and answer any questions/concerns you might have, and then you can be the judge. You also mention previous experience, advisor reviews, hypotheses, and unfair advantages, and those are all in the pitch. I'm now thinking it might have been a mistake to do a quick write-up for feedback and thoughts; maybe a thorough one would have been better, because I think I can address most of the concerns that you and others have, but I wanted to avoid a 10-page post outlining all of the questions and criticisms and how we're addressing them.

Can I send you the deck and relay your questions in a quick call? I learn the most from the harshest critics and you seem to be one. 

I think it's worth mentioning the recent paper that Brad West wrote on this subject. It explains very well why we exist and what we're trying to do, much better than my quick and dirty writeup. Happy to hear any feedback on that! 

I agree, and like I said, I'm sure those sentences can be massively improved.

I prefer to have my feelings a little hurt than remain in the dark as to why a grant didn't get accepted.

I agree

[anonymous]

I think the elephant in the room is: "Why are they part-time?"

If making more grants is so important, either hire more people or work full-time, no? This is something I do not understand about the current status quo.

Thanks for writing this! 

I appreciate your points about how EA grantmakers are 1. part-time, 2. extremely busy, and 3. best off spending more time getting grants out the door instead of writing feedback. I hope nobody has interpreted your lack of feedback as a personal affront! It just seems like the correct way to allocate your (and other grantmakers') time.

I think the EA community as a whole is biased too far towards spending resources on transparency at the expense of actually doing ~the thing~. Hopefully this post makes some people update! 
 

A small comment: if feedback is scarce because of a lack of time, this increases the usefulness of going to conferences where you can meet grantmakers and speaking to them.

I also think that it would be worth exploring ways to give feedback with as little time cost as possible.

A closely related idea that seems slightly more promising to me: asking other EAs, other grantmakers and other relevant experts for feedback - at conferences or via other means - rather than the actual grantmakers who rejected your application. Obviously the feedback will usually be less relevant, but it could be a way to talk to less busy people who could still offer a valuable perspective and avoid the "I don't want to be ambushed by people who are annoyed they didn't get money, or prospective applicants who are trying to network their way into a more favourable decision" problem that Larks mentions.

I had one group ask me for feedback on their rejected grant proposal at a recent EAG and I was confused why they were asking me at the time, but I now think it's not a bad idea if you can't get the time/energy of the grantmakers in question.

(Apologies if this is what you were suggesting, PabloAMC, I just thought from the thread on this comment so far you were suggesting meeting the grantmakers who rejected the proposal.)

No need to apologize! I think your idea might be even better than mine :)

[I]f feedback is scarce because of a lack of time, this increases the usefulness of going to conferences where you can meet grantmakers and speaking to them.

That sounds worse to me. Conferences are rare and hence conference-time is more valuable than non-conference time. Also, I don't want to be ambushed by people who are annoyed they didn't get money, or prospective applicants who are trying to network their way into a more favourable decision.

Mmm, that's not what I meant. There are good and bad ways of doing it. In 2019, someone reached out to me before EA Global to check whether it would be OK to get feedback on an application I had rejected (as part of some team). And I was happy to meet and give feedback. But I think there is no damage in asking.

Also, it's not about networking your way in; it's about learning, for example, why people did or didn't like a proposal, or how to improve it. So I think there are good ways of doing this.

Conference time is valuable precisely because it allows people to do things like "get feedback from an EA experienced in the thing they're trying to do". If "insiders" think their time is too valuable for "outsiders", that's a bad sign.

Getting feedback from someone because they have expertise feels structurally different to me than getting feedback from someone because they have money.

As you noted, it's not you who "has money" as a grantmaker. On the other hand, it is you who knows what parameters make projects valuable in the eyes of EA funders. Which is exactly the needed expertise.

I'm not implying how this should compare to any individual grantmaker's other priorities at a conference. But it seems wrong to me to strike it down as not being a valuable use of conference time.

Grantmakers aren't just people with money - they are people with a bird's eye view of the space of grant proposals. This may not be the same as topic expertise, but it's still quite important for a person with a project trying to make it fit into the broader goals of the EA community. 

My intuition is that grantmakers often have access to better experts, but you could always reach out to the latter directly at conferences if you know who they are.

A solution that I'm more excited about is one-to-many channels of feedback where people can try to generalize from the feedback that others receive. 

I think this post by Nuño is a good example in this genre, as are the EAIF and LTFF payout reports. Perhaps some grantmakers can also prioritize public comms even more than they already do (e.g. public posts on this Forum), but of course this is also very costly.

Ofer

Hi Linch, thank you for writing this!

I started off with a policy of recusing myself from even small CoIs. But these days, I mostly accord with (what I think is) the equilibrium: a) definite recusal for romantic relationships, b) very likely recusal for employment or housing relationships, c) probable recusal for close friends, d) disclosure but no self-recusal by default for other relationships.

In January, Jonas Vollmer published a beta version of the EA Funds' internal Conflict of Interest policy. Here are some excerpts from it:

Any relationship that could cause significantly biased judgment (or the perception of that) constitutes a potential conflict of interest, e.g. romantic/sexual relationships, close work relationships, close friendships, or living together.

[...]

The default suggestion is that you recuse yourself from discussing the grant and voting on it.

[...]

If the above means we can’t evaluate a grant, we will consider forwarding the application to another high-quality grantmaker if possible. If delegating to such a grantmaker is difficult, and this policy would hamper the EA community’s ability to make a good decision, we prefer an evaluation with conflict of interest over none (or one that’s significantly worse). However, the chair and the EA Funds ED should carefully discuss such a case and consider taking additional measures before moving ahead.

Is this consistent with the current CoI policy of the EA Funds?

In general, what do you think of the level of conflicts of interest within EA grantmaking? I’m a bit of an outsider to the meta / AI safety folks located in Berkeley, but I’ve been surprised by the frequency of close relationships between grantmakers and grant receivers. (For example, Anthropic raised a big Series A from grantmakers closely related to their president Daniela Amodei’s husband, Holden Karnofsky!)

Do you think COIs pose a significant threat to EA’s epistemic standards? How should grantmakers navigate potential COIs? How should this be publicly communicated?

(Responses from Linch or anybody else welcome)

Ofer

In general, what do you think of the level of conflicts of interest within EA grantmaking?

My best guess, based on public information, is that CoIs within longtermism grantmaking are being handled with less-than-ideal strictness. For example, generally speaking, if a project related to anthropogenic x-risks would not get funding without the vote of a grantmaker who is a close friend of the applicant, it seems better to not fund the project.

(For example, Anthropic raised a big Series A from grantmakers closely related to their president Daniela Amodei’s husband, Holden Karnofsky!)

My understanding is that Anthropic is not a nonprofit and it received funding from investors rather than grantmakers. Though Anthropic can cause CoI issues related to Holden's decision-making about longtermism funding. Holden said in an interview:

Anthropic is a new AI lab, and I am excited about it, but I have to temper that or not mislead people because Daniela, my wife, is the president of Anthropic. And that means that we have equity, and so [...] I’m as conflict-of-interest-y as I can be with this organization.


Do you think COIs pose a significant threat to EA’s epistemic standards?

I think CoIs can easily influence decision making (in general, not specifically in EA). In the realm of anthropogenic x-risks, judging whether a high-impact intervention is net-positive or net-negative is often very hard due to complex cluelessness. Therefore, CoI-driven biases and self-deception can easily influence decision making and cause harm.

How should grantmakers navigate potential COIs? How should this be publicly communicated?

I think grantmakers should not be placed in a position where they need to decide how to navigate potential CoIs. Rather, the way grantmakers handle CoIs should be dictated by a detailed CoI policy (that should probably be made public).

Here's my general stance on integrity, which I think is a superset of issues with CoI. 

As noted by Ofer, I also think investments are structurally different from grants.

This is a great set of guidelines for integrity. Hopefully more grantmakers and other key individuals will take this point of view.

I’d still be interested in hearing how the existing level of COIs affects your judgement of EA epistemics. I think your motivated reasoning critique of EA is the strongest argument that current EA priorities do not accurately represent the most impactful causes available. I still think EA is the best bet available for maximizing my expected impact, but I have baseline uncertainty that many EA beliefs might be incorrect because they’re the result of imperfect processes with plenty of biases and failure modes. It’s a very hard topic to discuss, but I think it’s worth exploring (a) how to limit our epistemic risks and (b) how to discount our reasoning in light of those risks.

I’d still be interested in hearing how the existing level of COIs affects your judgement of EA epistemics.

I'm confused by this. My inside view guess is that this is just pretty small relative to other factors that can distort epistemics. And for this particular problem, I don't have a strong coherent outside view because it's hard to construct a reasonable reference class for what communities like us with similar levels of CoIs might look like.

My impression is that Linch's description of their actions above is consistent with our current COI policy. The fund chairs and I have some visibility over COI matters, and fund managers often flag cases when they are unsure what the policy should be; then I or the fund chairs can weigh in with our suggestion.

Often we suggest proceeding as usual or a partial but not full recusal (e.g. the fund manager should participate in discussion but not vote on the grant themselves).

Thank you for the info!

I understand that you recently replaced Jonas as the head of the EA Funds. In January, Jonas indicated that the EA Funds intends to publish a polished CoI policy. Is there still such an intention?

The policy that you referenced is the most up-to-date policy that we have, but I do intend to publish a polished version of the COI policy on our site at some point. I am not sure when I will have the capacity for this, but thank you for the nudge.

Yitz

my best guess is that more time delving into specific grants will only rarely actually change the final funding decision in practice

Has anyone actually tested this? It might be worthwhile to record your initial impressions on a set number of grants, then deliberately spend x amount of time researching them further, and calculate the ratio of how often further research makes you change your mind.

“People are often grateful to you for granting them money. This is a mistake.”

How would you recommend people react when they receive a grant? Saying thank you simply seems polite and standard etiquette, but I agree that it misportrays the motives of the grantmaker and invites concerns of patronage and favoritism.

[half-baked idea]

It seems reasonable to thank someone for the time they spent evaluating a grant, especially if you also do it when the grant is rejected (though this may be harder). I think it is reasonable to thank people for doing their job even (maybe especially?) when you are not the primary beneficiary of that job, and that their reason for doing it is not thanks.

Something in this direction seems generally right. I think it's reasonable to be grateful to people doing good work in EA (including in grantmaking), and it's unreasonable to expect rejected grantees to feel grateful (or happy in general). However, a relevant litmus test is whether you'd also thank people for doing evaluations that are entirely unrelated to your work, just because you think grantmaking is valuable work.

I also think saying a polite "thank you, this money will help us do impactful work in XYZ" when receiving a grant seems reasonable; the problem I'm alluding to is more when it feels excessive or repeated (like when I was at an event attended by many employees of an org that I had recommended a grant to, and I think all ~7 of them thanked me at some point during the event).

A potential alternative is to thank the grantmaking agency and infrastructure rather than the specific investigators of your grant. Another alternative is to express your gratitude much more broadly for the consequentialist value of donating to valuable EA work, rather than making it seem to be about reciprocity or relationships. 

The weirdness Linch points at makes sense to me. Other kinds of reactions that channel enthusiasm also seem good to me:

"This is very cool, I'm excited other people also see promise in this work, and I can't wait to get started" 

"I'm honored by the trust that's been placed in me, I take it seriously and will strive to live up to it" 

And/or you could just generally thank everyone in EA who seems to be doing important jobs well.

This is how I've responded to positive funding news before, seems right.

Thanks, I like your suggestions!

'It’s rarely worth your time to give detailed feedback'

This seems at odds with the EA Funds' philosophy that you should make a quick and dirty application that should be 'the start of a conversation'.

Two things. 

First, there is a big difference between "detailed feedback" and "conversation": if something is worth funding, working out how to improve it is worth time and effort, and delaying until it's perfect is a bad idea. Whereas if it's fundamentally off base, it isn't worth more feedback than "in general terms, this is why"; and if it's a marginal grant, making it 10% better is worth 10% of a small amount.

Second, different grantmakers work differently, and EA funds often engages more on details to help build the idea and improve implementation. But junior grantmakers often aren't qualified to do so!

That makes sense, though I don't think the dividing line is as clear as you make out. If you're submitting a research project, for example, you could spend a lot of time thinking about parameters vs. talking about the general thing you want to research, and the former could make the project sound significantly better, but it also runs the risk that you get rejected because those aren't the parameters the grant manager is interested in.

A wonderful post and thank you for sharing this inside view, Linch! 

"I think that more than half of the impact I have via my EAIF grantmaking is through the top 25% of the grants I make.  And I am able to spend more time on making those best grants go better, by working on active grantmaking or by advising grantees in various ways." - Buck

It's very interesting to see the parallels between VC funding and grants from EA organizations (and both can probably learn from each other). As Reid Hoffman mentions in Blitzscaling (I cannot find the quote a.t.m.), a VC wants organizations to blitzscale because they know most will fail (in EA terms, ~0.01x impact) and that nearly all their earnings will come from very few of their investments (in EA terms, ~100x impact).

Thanks for writing this, Linch! I’m starting a job in grantmaking and found this interesting and helpful.

Just out of curiosity: how much time is wasted on evaluating half-assed proposals?

"People are often grateful to you for granting them money. This is a mistake."

Sometimes they're resentful if you reject them (though this depends on community and is probably highly asymmetric).

Just out of curiosity: how much time is wasted on evaluating half-assed proposals?

My guess is not much at current margins; usually, clearly bad things are rejected quickly. If someone who we'd otherwise fund sends a half-assed proposal, usually we can figure that out pretty quickly and ask them to elaborate.

Sometimes they're resentful if you reject them (though this depends on community and is probably highly asymmetric).

Yeah that makes sense. I think this is mostly fine. 

Great post, thank you.