
Epistemic status: speculative

This is a response to recent posts including Doing EA Better and The EA community does not own its donors' money. To make better funding decisions, some EAs have called for democratizing EA's funding systems. Others have pushed back, raising questions such as (paraphrasing heavily): "how do we decide who gets a vote?" and "would funders still give if they were forced to follow community preferences?" Meanwhile, the same EAs have argued that EA decision-making is “highly centralised, opaque, and unaccountable”, and that to improve our impact on the world, the effective altruism movement should be more decentralized, with greater transparency amongst EA institutions.

To meet both families of concerns expressed over the last week, I propose a grant-assessment system that improves transparency, decentralizes decision-making, and could better inform grant allocation by drawing information from a wider section of the community, whilst maintaining funders' prerogative to select the areas they wish to donate to. The proposal is to adopt a peer-review process like that used by grant-making public bodies in the United States, such as the National Institutes of Health and the National Science Foundation.

In this model, the funder’s program manager makes decisions about grant awards based on reviews and numerical scores allocated by peer reviewers coordinating in expert panels to evaluate grant applications. This would be a positive-sum change that benefits both funders and the community: the community has more input into the grant-making process, and funders benefit from expertise in the community to better achieve their objectives. 

In the rest of this post, I will describe the National Institutes of Health grant evaluation process, describe why I think now is the right time for the effective altruism movement to consider peer review as part of a more mature grant evaluation process, give some notes on implementation in EA specifically, and describe how this approach can both maintain funders’ prerogative to spend their own money as they wish, while giving the community a greater level of decision-making.

The grant peer review process at the NIH and NSF

National Institutes of Health

The National Institutes of Health (NIH) uses a peer review process to evaluate grant applications. This process involves the formation of ‘study sections’, which are groups of experts in the relevant field who review and evaluate grant applications.

When an application is received, it is assigned to a study section based on its scientific area of focus. Each study section is composed of scientists, physicians, and other experts who have experience in the field related to the research proposed in the application. These would be drawn from the scientific community at large. Study section members are typically compensated for participation, but participation isn’t a full time job–it’s generally a small additional duty researchers can choose to take on, as part of their broader set of research activities.

The study section members more-or-less independently review the applications and provide written critiques that are used to evaluate the strengths and weaknesses of each application. The study section then meets to discuss the applications, and each member provides a priority score and written summary of the application. These scores and summaries are used to determine which applications will be funded.
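To make the scoring step concrete: NIH reviewers assign overall impact scores on a 1–9 scale (1 is best), and the final priority score is, roughly, the panel mean scaled by ten. The sketch below is my own simplified illustration of that aggregation, not NIH code, and it omits the panel-discussion step that precedes final scoring.

```python
from statistics import mean

def priority_score(reviewer_scores):
    """Aggregate per-reviewer impact scores (1 = best, 9 = worst)
    into an NIH-style priority score (10-90, lower is better).

    Simplified illustration: in the real process, reviewers
    discuss the application before assigning final scores.
    """
    if not all(1 <= s <= 9 for s in reviewer_scores):
        raise ValueError("impact scores must be between 1 and 9")
    return round(mean(reviewer_scores) * 10)

# A hypothetical five-person panel: applications with lower
# priority scores are funded first.
print(priority_score([2, 3, 2, 4, 3]))  # → 28
```

The useful property for the EA context is that the output is a single comparable number per application, while the written critiques preserve the qualitative reasoning behind it.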

In summary, the NIH uses study sections composed of experts in the relevant field to review and evaluate grant applications through a peer review process. The study section members provide written critiques, scores, and summaries of the applications, which are used to determine which applications will be funded.

National Science Foundation

Relative to the NIH, the National Science Foundation (NSF) has a remit to fund broader, basic scientific research that does not necessarily have immediate applications.

The NSF uses a peer-review process similar to the NIH's to evaluate grant applications, with some key differences. The NSF review process generally requires a broader scope of expertise and allows for a multidisciplinary approach to the review of proposals. Additionally, the NSF review process is typically done in two rounds and includes a "broader impacts" criterion, which evaluates the potential impact of the proposed research on society as a whole.

Analogies to the EA context

Very roughly speaking, in an EA context, we can imagine the NIH to have a remit more similar to EA Global Health and Development and Animal Welfare causes. Their outcomes can often be measured quantitatively, even if they cannot always be quantitatively compared against one another. In contrast, the NSF might have a remit more similar to existential risk causes, where interventions that target important goals, such as improving democratic decision-making, are less quantifiably related to the outcome of reducing existential risk.

Why the time is right to adopt peer-review in grant-making processes

A short history of the effective altruism funding environment

The following is an outsider’s account, and it may well be wrong, but it seems likely to track some of the dynamics of the funding situation in EA, in my opinion.

From around 2015 to the establishment of the FTX Foundation around 2021, EA Funds had access to a substantial amount of funding, but the movement overall was still “funding constrained”, i.e., there was more work to do and more people to do it than there was funding available. At the same time, the community was fairly small, and aside from a few well-developed areas, such as AI Safety, there wasn’t a large community of people with more credible knowledge of various topics than existed within grantmaking organizations.

When the FTX Foundation opened for business, SBF said something like he felt morally obligated to spend at least 5% of his net worth per year; otherwise, are you really a credible billionaire effective altruist? With his net worth at one time valued well over $10b, that would mean spending at least $500m per year. Effective altruism was now “talent constrained”, i.e., a lot of money had to be spent very quickly. That meant it really could have seemed suboptimal to create extra systems that could slow down decision-making.

In late 2022, funders including Open Phil said that the bar for funding must be raised; it once again seems like funding is constrained. However, unlike in our previous funding-constrained environment, we now have many more effective altruist organizations, affiliated independent and academic researchers, and other experts who sit outside of funding organizations but could provide relevant expertise.

People in the community who could participate in peer review

EA Funds could benefit from adopting a peer review process similar to that used by the NIH or NSF. In the past, a peer review process may have been difficult due to a lack of established experts in the field of effective altruism. However, as the community has grown, there are now enough established experts who could be called upon to review applications. 

The effective altruism community has grown significantly in recent years. That growth has increased the number of independent researchers and academics who have developed expertise in specific areas relevant to effective altruism, and it has led to the formation of established effective altruism organizations, which employ experts who work on specific causes or topics. So there are a variety of people who might all have important perspectives to share:

  • Specialists within established EA research organizations
  • Academics within the EA community
  • Academics outside the EA community with expertise in areas of interest to EA funders
  • Independent researchers in the EA community who funders recognize as particularly knowledgeable in a topic

Experts in all of these areas can serve as potential peer reviewers and provide valuable evaluations of grant applications.

Implementation

While there are already existing funding models, and this post has pointed to the NIH and NSF models in particular, there are undoubtedly differences in the EA landscape which need to be considered. 

Funders could decide for themselves what point along a spectrum from “fast” to “thorough” they’d like to be. No peer review at all is at the “fast” end of the spectrum. Perhaps slightly past that is a grant-maker calling a couple of friends who know something more about the topic for their opinions. A Google Form with a 10-minute completion time per grant, sent to three carefully chosen experts with a request for comments and a rating, would be a little more thorough still. At the other end of the scale, a study section of experts could sit down over a video call or in person to review a set of grants, after having independently read and scored them, and arrive at a collective recommendation about which grants to fund.

There are two issues I think need particularly careful thought.

First, while selecting from a set of existing, established researchers leverages existing expertise, it does run the risk of allowing cliques of experts to capture funding interests. A funder might build in institutional counterbalances to this epistemic risk by regularly seeking contributors from outside existing groups of experts. 

Second, while the EA community is significantly bigger than it was a few years ago, it remains small enough for significant forms of corruption and gamesmanship in grant-making (“I’ll support your application if you support mine!”). It would make sense to impose strict institutional safeguards to maintain degrees of separation between reviewers and recipients wherever possible, for any potential conflicts of interest to be disclosed, and for violations of these safeguards to be met with appropriate penalties, such as exclusion from reviewing future funding rounds.

Will it help?

I’m not sure. In theory, I expect that the average grant application carries some strengths and weaknesses, and that the average grant reviewer will miss some of them. Reviewers with independent perspectives will tend to catch different strengths and weaknesses. By increasing the number of well-informed reviewers, the review process will, on average, identify more strengths and weaknesses.
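The intuition can be put in toy-model form (my own illustration, not drawn from any grant-making literature): if each reviewer independently notices a given weakness with probability p, the chance that at least one of k reviewers catches it is 1 − (1 − p)^k, which rises quickly with panel size.

```python
def prob_caught(p: float, k: int) -> float:
    """Probability that at least one of k independent reviewers
    notices a flaw that each spots with probability p."""
    return 1 - (1 - p) ** k

# With p = 0.4, one reviewer catches a flaw 40% of the time;
# a five-person panel catches it about 92% of the time.
for k in (1, 3, 5):
    print(k, round(prob_caught(0.4, k), 2))
```

Real reviewers' judgments are correlated (shared training, shared blind spots), so the true gain is smaller than independence implies, but the direction of the effect holds.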


In practice, I don’t have the evidence to demonstrate this will work. If you nodded along with the Doing EA Better authors when they said

We need to value expertise and rigour more

or

EA institutions should see EA ideas as things to be co-created with the membership and the wider world, rather than transmitted and controlled from the top down

or

EAs should be more open to institutions and community groups being run democratically or non-hierarchically

then you might like the proposed funding model because it more highly values expertise, or because it facilitates EA grant-making institutions co-creating EA ideas (grant application choices) with more community members, or because it is a less hierarchical model than the status quo.

Why would we want to emulate government peer review?

We shouldn’t. We should create our own model!

If you’re worried that a review process could be too slow and cumbersome, perhaps you’d agree that an initial implementation by a small regranter, or an implementation with a very lightweight review process (perhaps a quick Google form filled out by a small group of experts and reviewed by a grant-maker), would not carry a substantial cost, while at the same time allowing our movement to learn whether such a system would be helpful to adopt more widely.

If you’re worried that governments are the last group of organizations we should seek to emulate, I’d suggest that perhaps, to the extent government grant-making institutions are inefficient, it’s because of the constrained policy environment they exist within, rather than something inherent in the process of peer-reviewed grant-making.

If you’re worried that peer review on the whole is a broken model, I sympathize. On the other hand, consider that (analogous to government grant-making) the problem isn’t inherent in the practice of peer review; it’s the specific form academic peer review has evolved into, given the incentives typical academic peer review operates under.

If you’re a critic of government, or of academic peer review, ask yourself whether your objections come from the practice of asking knowledgeable people their opinions about things, or from something else inherent in the way that governments and academia work.

How funders can maintain the prerogative to donate money as they choose

On this forum, various people have argued that funders' money is not owned by the community, and that the community doesn't have the right to tell funders how to spend their money. On the other hand, one comment ‘bit the bullet’ and said that while the community may not have the right to dictate to funders, funders do have a responsibility to spend their money in a way that does the most good. This is arguably a foundational tenet of effective altruism.

By asking expert reviewers to rate and select grant applications, funders can leverage the community's expertise to better achieve their own priorities. While they would be giving up some control, the reality is that many funders are looking for good, reliable, and trustworthy advice about how to achieve their objectives. 

There is always a trade-off with decision-making, and it may turn out that the cost in time and money for establishing a peer-reviewed grant process does not improve granting enough to justify the cost.

But peer review doesn’t have to be a laborious process; it can be almost as brief or extensive as you like. There’s a trade-off between transparency, better-informed decision-making, and decision-making efficiency, and I suspect the optimal point, from an impact perspective, lies somewhere between the two extremes of “no outside feedback” and “NIH-level study sections”.

There are a couple of ways funding organizations would maintain control over their donations. First, while grant review panels rate grants and make recommendations, there's no reason funding organizations couldn't make the final decision on funding (in fact, they probably should). Second, funding organizations set the scope of grant review panels and choose when to use them.

Funders might decide on a set amount of money allocated to each of existential risk and global health and development, perhaps along "worldview diversification" lines, while allowing review panels to set priorities within each cause area. This could be quite granular, for instance, asking an existential risk review panel to evaluate the best grants for improving democratic decision-making, or for attracting talent to prosaic AI alignment. At the other end of decision-making, it would also be possible to set up a study section of prioritization researchers to determine how much funding each worldview or cause area would be given. Either way, it remains in the funders' hands to decide how to spend their money.

Conclusion

I won't pretend this solution would fully address the concerns from the Doing EA Better post. This is not a democratizing solution in the sense of allowing community members to vote. But I hope it might be a useful solution for some of the problems outlined in that post and address some of the priorities expressed. Specifically, this proposal would decentralize decision-making and more highly value expertise and rigor.


Acknowledgements

Thanks to Justis from the EA Forum team and Bruce Tsai for helpful comments. All errors and bad takes are solely my own, and this post represents no one's views except my own lightly held ideas.


Comments

I think this post could have been better researched, as it relates to how EA funders already work, and how they attempt to address problems.

Note that many funders do already consult multiple outside experts, especially for large grants, and do something like gather light-weight input. I don't know if they have uniform systems for this, or have this written up publicly, but I have been asked for this type of input by 2 different funders, and know a third, Survival and Flourishing Fund, has said quite a bit about their more complex model.

Also, a critical problem with peer-reviewed funding, as has been widely discussed, is that it doesn't promote long-shot bets or allow unpopular ideas to ever get explored, since you need a set of people to all rate the idea highly, instead of just one reviewer. This is a key reason that the Survival and Flourishing Fund, which, as I mentioned, does use a group of expert reviewers, is structured to use their complex S-process rather than ratings.

Thank you, David! From what you've said here it seems clear my post was missing critical information.

I'm not sure this post literally could have been much better researched, conditional on me writing it. I don't feel entitled to contact funders to ask them about their process (perhaps I should feel free to? I'm not sure). The EA Funds website briefly mentions that they "engage expert-led teams of subject matter experts" in their decision-making, and that's something I should have researched first and mentioned; but also, I think that gives away so little information that I learned more from your reply here than I would have from reading that.

Perhaps other funders describe their process in more detail, I don't know, and if so, I concede that's something I could have identified before writing the post.

So the only other way I can see this post could have had more information is that I could have asked more widely among people more familiar with the process than I am. But I'm not personally acquainted with anyone who I knew would know more about the process.

Or, finally, I could have left it to someone else to write, but then, if they didn't, I wouldn't have learned from you that grant-makers already engage in expert consultation.

It probably is obvious to you, considering your experiences. But the way grant-making was described in the Doing EA Better post last week (something to the effect of "it helps to move to the Bay and make friends with grant-makers") suggests to me the process is pretty opaque to a lot of other people, not just to me, and so I suppose I'm glad I opened a conversation even if I don't have a lot of insight to share on the process.

Edit: There is a point I'm trying to make, other than defending my own process: the process in general is fairly opaque, and if the information you're talking about is publicly available, I'm not aware of it! That validates some of the transparency critiques from the Doing EA Better post last week.

I don't feel entitled to contact funders to ask them about their process (perhaps I should feel free to? I'm not sure)

I do think you should feel free to do this! Open Phil, EA Funds, GiveWell, and ACE all have contact pages, and "I'm thinking about how EA funding could be better and I wanted to understand more how you work," followed by specific questions that aren't too much work to answer, is something I'd expect to be well received.

On the other hand, I don't think you have to do this. Your post, as is, is still helpful in describing how the NIH and NSF make funding decisions. Personally, though, I would want to at least talk to existing funders and learn a bit about their current process (or learn that they don't want to talk about it) before including proposals about what I think they should do differently ;)

I agree that this is tricky to do, because the processes aren't well documented publicly. (Not that they should be: funders providing information about their processes makes them more gameable, as most government funding is!)

I do think that you could have asked more people with knowledge of the process to review the post, and also think that the Survival and Flourishing Fund documents what they do pretty clearly, including both their writeup, and at least one forum post by a reviewer documenting it pretty extensively.

Thanks for writing this, Ben!

Do you (or anyone else) have any views about the circumstances under which a peer-review process is most likely to come back as cost-effective? If this is trialed, it would be worthwhile to trial it in a set of circumstances where it has its best chance of proving its value.

For instance, you mentioned EA Funds at one point as a grantmaker who might benefit, but I do not think that would be the right place to run a trial, due to their relatively small grant sizes. I don't think seeing what peer review accomplishes on grants that commonly run in the five to perhaps low-six figure range would give it the best chance to prove itself. But others might disagree!

At a pinch, I would say review might be more worthwhile for topics where the work builds on a well-developed but pre-existing body of research. So, funding a graduate to take time to learn about AI Safety full-time as a bridge to developing a project probably wouldn't benefit from a review, but an application to develop a very specific project based on a specific idea probably would.

I don't have a sense of how often five-to-low-six-figure grants involve very specific ideas. If you told me they usually don't, I would definitely update against thinking peer review would be useful in those circumstances.

I have no idea, to be honest. My belief that smaller grants might not be the best trial run for cost-effectiveness is based more on assumptions that (1) highly qualified reviewers might not think reviewing grants in that range is an effective use of their time; and (2) very quick reviews are likely to identify only clearly erroneous exercises of grantmaking discretion. Either assumption could be wrong!

But I think at that grant size, the cost-effectiveness profile might be more favorable for a system of peer review under specified circumstances rather than as an automatic practice. Knowing that they were only being asked when there was a greater chance their assistance might be outcome-determinative might help with attracting quality reviewers, too.