In March 2017, OpenPhil published the reasoning behind their three-year $30M grant to OpenAI. In that post, they state:
We plan to do informal reviews each year. We currently plan to do a more in-depth review to consider further renewal at the end of this three-year term. The key questions for renewal will be whether OpenAI appears to be a significant positive force for reducing potential risks from advanced AI, and/or whether our involvement is tangibly helping OpenAI move towards becoming a positive force for AI safety.
Does anyone know whether such a review was ever published publicly? I've searched their website, the forum, etc., but have not found anything of the sort. (I have found a lot of speculation on "what happened" between OpenPhil and OpenAI, though.) This December 2020 post contains no relevant mention of such review work either.
Judging by the OpenPhil grants database, they have made no subsequent grants to OpenAI, which leads me to infer that they did not find OpenAI "to be a significant positive force" per the quote above. But this is just more speculation, and it would be very interesting to learn first-hand where OpenPhil stands on OpenAI.