This is also broadly representative of how I think about evaluating opportunities.
We generally won’t have access to work that isn’t shared publicly, though we may incidentally learn of such work through individual fund members’ private conversations with researchers. Thus far, we’ve evaluated organizations based on the quality of their past research and the quality of their team.
We may also evaluate private research by assessing the quality of its general direction and the quality of the team pursuing it. For example, I think the discourse around AI safety could use a lot of deconfusion. I recognize that such deconfusion could be an infohazard, but I nevertheless want the research carried out, and I think MIRI is one of the most competent organizations around to do it.
In the event that our decision about whether to fund an organization hinges on the content of their private research, we’ll probably reach out and ask whether they’re willing to disclose it.
I don’t think any of us have any particular expertise on this question. You could try sending an application on their jobs page.
I personally fund some things that are too small, and generally much too weird, for anyone else to fund; beyond that, I don’t control any alternative pots of discretionary funding.