This was originally posted as a comment on an old thread. However, I think the topic is important enough to deserve a discussion of its own. I would be very interested in hearing your opinion on this matter. I am an academic working in the field of philosophy of science, and I am interested in the criteria used by funding institutions to allocate their funds to research projects.
A recent trend of awarding relatively large research grants (relative to some of the most prestigious research grants in the EU, such as ERC Starting Grants of ~1.5 million EUR) to projects on AI risks and safety made me curious, so I looked a bit more into this topic. What struck me as especially curious is the lack of transparency about the criteria used to evaluate the projects and to decide how to allocate the funds.
Now, for the sake of this article, I will assume that the research topic of AI risks and safety is important and should be funded (to what extent it actually is, is beside the point and deserves a discussion of its own; so let's just say it is among the most pursuit-worthy problems in view of both epistemic and non-epistemic criteria).
Particularly surprising was a sudden grant of 3.75 million USD by the Open Philanthropy Project (OPP) to MIRI. Note that this funding is more than double the amount given to ERC Starting Grant recipients. Previously, OPP awarded MIRI 500,000 USD and provided an extensive explanation of that decision. So one would expect that for a grant more than seven times larger, we'd find at least as much. But what we do find is an extremely brief explanation saying that an anonymous expert reviewer has evaluated MIRI's work as highly promising in view of their paper "Logical Induction".
Note that in the two years since I first saw this paper online, it has not been published in any peer-reviewed journal. Moreover, if you check MIRI's publications, you find not a single journal article since 2015 (or an article published in prestigious AI conference proceedings, for that matter -- *correction:* there are five papers published as conference proceedings in 2016, some of which seem to be technical reports rather than actual publications, so I am not sure how their quality should be assessed; I see no such proceedings publications in 2017). Suffice it to say that I was surprised. So I decided to contact both MIRI, asking whether perhaps their publications hadn't been updated on their website, and OPP, asking for the evaluative criteria used when awarding this grant.
MIRI has never replied (email sent on February 8). OPP took a while to reply, and last week I received the following email:
"Hi Dunja,
Thanks for your patience. Our assessment of this grant was based largely on the expert reviewer's reasoning in reviewing MIRI's work. Unfortunately, we don't have permission to share the reviewer's identity or reasoning. I'm sorry not to be more helpful with this, and do wish you the best of luck with your research.
Best,
[name blinded in this public post; I explained in my email that my question was motivated by my research topic]"
All this is very surprising given that OPP prides itself on transparency. As stated on their website:
"We work hard to make it easy for new philanthropists and outsiders to learn about our work. We do that by:
- Blogging about major decisions and the reasoning behind them, as well as what we’re learning about how to be an effective funder.
- Creating detailed reports on the causes we’re investigating.
- Sharing notes from our information-gathering conversations.
- Publishing writeups and updates on a number of our grants, including our reasoning and reservations before making a grant, and any setbacks and challenges we encounter." (emphasis added)
However, the main problem here is not the mere lack of transparency, but the lack of an effective and efficient funding policy.
The question of how to decide which projects to fund in order to achieve effective and efficient knowledge acquisition has been researched within philosophy of science and science policy for decades now. Yet some of the basic criteria seem absent from cases such as the one mentioned above. For instance, establishing that a given research project is worthy of pursuit cannot be done merely in view of the pursuit-worthiness of the research topic. Instead, the project has to show a viable methodology and objectives, which have been assessed as apt for the given task by a panel of experts in the given domain (rather than by a single expert reviewer). Next, the project initiator has to show expertise in the given domain (where one's publication record is an important criterion). Finally, if the funding agency has a certain topic in mind, it is much more effective to make an open call for project submissions, where the expert panel selects the most promising one(s).
This is not to say that young scholars, or simply scholars without an impressive track record, wouldn't be able to pursue the given project. However, the important question here is not "Who could pursue this project?" but "Who could pursue this project in the most effective and efficient way?".
To sum up: transparent markers of reliability, over the course of research, are extremely important if we want to advance effective and efficient research. A panel of experts (rather than a single expert) is extremely important in assuring the procedural objectivity of the given assessment.
Altogether, this is not just surprising, but disturbing. Perhaps the biggest danger is that this falls into the hands of the press and ends up being an argument for the claim that organizations close to effective altruism are not effective at all.
Thanks for the comment! I think, however, your comment doesn't address my main concerns: the effectiveness and efficiency of research within the OpenPhil funding policy. Before I explain why, and reply to each of your points, let me clarify what I mean by effectiveness and efficiency.
By effective I mean research that achieves intended goals and makes an impact in the given domain, thus serving as the basis for (communal) knowledge acquisition. The idea that knowledge is essentially social is well known from the literature in social epistemology, and I think it'd be pretty hard to defend the opposite, at least with respect to scientific research.
By efficient I mean producing as much knowledge with as few resources (including time) as possible (i.e. epistemic success relative to the time and costs of research).
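To put this schematically (my own gloss on the above, not a formal measure from the literature), efficiency can be thought of as a simple ratio:

$$\text{efficiency} \approx \frac{\text{epistemic success (knowledge produced)}}{\text{time and other resources spent}}$$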
Now, understanding how OpenPhil works doesn't necessarily show that such a policy results in effective and efficient research output:
- Not justifying their decisions in writing: this indeed doesn't suggest their policy is ineffective or inefficient, though it goes against the idea of transparency and contributes to the difficulty of assessing the effectiveness and efficiency of their projects;
- Not avoiding the "superficial appearance of being overconfident and uninformed": again, this hardly shows why we should consider them effective and efficient; their decision may very well be effective/efficient, but all that is stated here is that we may never know why.
Compare this with the assessment of effective charities: while a certain charity may state the very same principles on its website, we may agree that we understand how it works; but this will in no way help us to assess whether it should count as an effective charity or not.
In the same vein, all I am asking is: should we, and if so why, consider the funding policy of OpenPhil effective and efficient? Why is this important? Well, I take it to be important insofar as we value effective and efficient research as an important ingredient of funding allocation within EA. If effective altruism is supposed to be compatible with ineffectiveness and inefficiency in philanthropic research, the burden of proof is on the side that would hold this stance (similarly to the idea that EA would be compatible with ineffective and inefficient charity work).
Now to your points on the grant algorithm:
1. Effectiveness and efficiency
In the particular case I discuss above, it may have been likely, but unfortunately, it is entirely unclear why it was so. That's all I am saying. I see no argument except for "trust a single anonymous reviewer". Note that the reviewer's reasoning could easily have been blinded for public presentation to preserve their anonymity. However, none of it is accessible. As a result, it is impossible to judge why the funding policy should be considered effective or efficient, which is precisely my point.
2. A panel of expert reviewers
I beg to differ: a board of reviewers may very well consist of individuals who do precisely what you assign to a single reviewer: "a lot of grant-making judgment". As is well known from journal publication procedures, a single reviewer may easily be biased in a certain way, or have a blind spot concerning some points of the research. Introducing at least two reviewers is done in order to keep biases in check and avoid blind spots. Defending the opposite goes against basic standards of social epistemology (from Millian views on scientific inquiry, through the critical rationalists' stance, to the points raised by contemporary feminist epistemologists). Finally, even if this is how OpenPhil works, that doesn't tell us anything about the effectiveness/efficiency of such a policy.
3. One's track record (including one's publication record)
But why should we take that to be an effective and efficient funding policy? That the grant-maker felt so is hardly an argument. I am sure many ineffective charities feel they are doing the right thing, yet we wouldn't call them effective for that, would we?
4. The applicability of the above methodology to philanthropic funding
Again, they may have done so up to now, but my question really is: why is this effective or efficient? Philanthropic research that falls within the scope of a scientific domain is essentially scientific research. The basic ideas behind the notion of pursuit worthiness have been discussed e.g. by Anne-Whitt and Nickles, but see also the work by Kitcher, Longino, Douglas, and Lacey - to name just a few authors who have emphasized the importance of the social aspects of scientific knowledge and the danger of biases. Now if you wish to argue that philanthropic funding of scientific research does not and (more importantly) should not fall under the scope of criteria that cover the effectiveness and efficiency of scientific research in general, the burden of proof will again be on you (I honestly can't imagine why this would be the case, especially since all of the above-mentioned scholars pay close attention to the role of non-epistemic (ethical, social, political, etc.) values in the assessment of scientific research).
Ah, I see. Thanks for responding.
I notice that until now I’ve been conflating whether the OpenPhil grant-makers themselves should be a committee, versus whether they should bring in a committee to assess the researchers they fund. I realise you’re talking about the latter, while I was talking about the former. Regarding the latter (in this situation), here is what my model of a senior staff member at OpenPhil thinks in this particular case of AI.
If they were attempting to make grants in a fairly mainstream area of research (e.g. transfer learning on racing games...