This was originally posted as a comment on an old thread. However, I think the topic is important enough to deserve a discussion of its own. I would be very interested in hearing your opinion on this matter. I am an academic working in the field of philosophy of science, and I am interested in the criteria used by funding institutions to allocate their funds to research projects.
A recent trend of awarding relatively large research grants to projects on AI risks and safety (large relative to some of the most prestigious research grants across the EU, such as ERC Starting Grants, ~1.5 million EUR) made me curious, so I looked a bit more into this topic. What struck me as especially curious is the lack of transparency concerning the criteria used to evaluate projects and to decide how to allocate the funds.
Now, for the sake of this article, I will assume that the research topic of AI risks and safety is important and should be funded (to what extent it actually is, is beside the point and deserves a discussion of its own; so let's just say it is among the most pursuit-worthy problems in view of both epistemic and non-epistemic criteria).
Particularly surprising was a sudden grant of 3.75 million USD by the Open Philanthropy Project (OPP) to MIRI. Note that this funding is more than double the amount given to ERC Starting Grantees. Previously, OPP awarded MIRI 500,000 USD and provided an extensive explanation of that decision. So one would expect that for a grant more than seven times larger, we'd find at least as much. But what we actually find is an extremely brief explanation saying that an anonymous expert reviewer has evaluated MIRI's work as highly promising in view of their paper "Logical Induction".
Note that in the two years since I first saw this paper online, it has not been published in any peer-reviewed journal. Moreover, if you check MIRI's publications, you find not a single journal article since 2015 (or an article published in prestigious AI conference proceedings, for that matter -- *correction:* there are five papers published as conference proceedings in 2016, some of which seem to be technical reports rather than actual publications, so I am not sure how their quality should be assessed; I see no such proceedings publications in 2017). Suffice it to say that I was surprised. So I decided to contact both MIRI, asking if perhaps the publications on their website hadn't been updated, and OPP, asking for the evaluative criteria used when awarding this grant.
MIRI has never replied (email sent on February 8). OPP took a while to reply, and last week I received the following email:
"Hi Dunja,
Thanks for your patience. Our assessment of this grant was based largely on the expert reviewer's reasoning in reviewing MIRI's work. Unfortunately, we don't have permission to share the reviewer's identity or reasoning. I'm sorry not to be more helpful with this, and do wish you the best of luck with your research.
Best,
[name blinded in this public post; I explained in my email that my question was motivated by my research topic]"
All this is very surprising given that OPP prides itself on transparency. As stated on their website:
"We work hard to make it easy for new philanthropists and outsiders to learn about our work. We do that by:
- Blogging about major decisions and the reasoning behind them, as well as what we’re learning about how to be an effective funder.
- Creating detailed reports on the causes we’re investigating.
- Sharing notes from our information-gathering conversations.
- Publishing writeups and updates on a number of our grants, including our reasoning and reservations before making a grant, and any setbacks and challenges we encounter." (emphasis added)
However, the main problem here is not the mere lack of transparency, but the lack of effective and efficient funding policy.
The question of how to decide which projects to fund in order to achieve effective and efficient knowledge acquisition has been researched within philosophy of science and science policy for decades. Yet some of the basic criteria seem absent from cases such as the one mentioned above. For instance, establishing that a given research project is worthy of pursuit cannot be done merely in view of the pursuit-worthiness of the research topic. Instead, the project has to present a viable methodology and objectives, which have been assessed as apt for the given task by a panel of experts in the given domain (rather than by a single expert reviewer). Next, the project initiator has to show expertise in the given domain (where one's publication record is an important criterion). Finally, if the funding agency has a certain topic in mind, it is much more effective to make an open call for project submissions, from which an expert panel selects the most promising one(s).
This is not to say that young scholars, or simply scholars without an impressive track record wouldn't be able to pursue the given project. However, the important question here is not "Who could pursue this project?" but "Who could pursue this project in the most effective and efficient way?".
To sum up: transparent markers of reliability, over the course of research, are extremely important if we want to advance effective and efficient research. The panel of experts (rather than a single expert) is extremely important in assuring procedural objectivity of the given assessment.
Altogether, this is not just surprising, but disturbing. Perhaps the biggest danger is that this falls into the hands of the press and ends up as an argument that organizations close to effective altruism are not effective at all.
Gotcha. I'll probably wrap up with this comment; here are my last few thoughts (all on the topic of building a research field):
(I’m commenting on phone, sorry if paragraphs are unusually long, if they are I’ll try to add more breaks later.)
Final note: of your initial list of three things, the open call for research is the one I think is least useful for OpenPhil. When you're funding at this scale in any field, the thought is not "What current ideas do people have that I should fund?" but "What new incentives can I add to this field?" And when you're adding new incentives that are not those that already exist, it's useful to spend time initially talking a lot with the grantees to make sure they truly understand your models (and you theirs), so that the correct models and incentives are propagated.
For example, I think if OpenPhil had announced a $100 grant scheme for Alignment research, many existing teams would've explained why their research already is this, and started using these terms, and it would've impeded the ability to build the intended field. I think this is why, even in cause areas like criminal justice and farm animal welfare, OpenPhil has chosen to advertise less and instead open 1-1 lines of communication with orgs they think are promising.
Letting e.g. a criminal justice org truly understand what you care about, and what sorts of projects you are and aren't willing to fund, helps them plan accordingly for the future (as opposed to going along as usual and then suddenly finding out you aren't interested in funding them any more). I think the notion that they'd be able to succeed by announcing a call for grants to solve a problem X is too simplistic a view of how models propagate; in general, to cross significant inferential gaps you need (on the short end) several extensive 1-1 conversations, and (on the longer end) textbooks with exercises.
Added: More generally, how many people you can fund quickly to do work is a function of how inferentially far you are away from the work that the people you hope to fund are already doing.
(On the other hand, you want to fund them well to signal to the rest of a field that there is real funding here if they provide what you're looking for. I'm not sure exactly how to make that tradeoff.)
Re: pre-paradigmatic science: see the above example of Wegener. If you want to discuss pre-paradigmatic research, let's discuss it seriously. Let's go into historical examples (or contemporary ones, all the same to me) and analyze the relevant evaluative criteria. You haven't given me a single reason why my proposed criteria wouldn't work in the case of such research. Just because there is scientific disagreement in the given field doesn't imply that no experts can be consulted (except for a single one) to evaluate the promise of the given innovative i...