[This is my best attempt at summarizing a reasonable outsider's view of the current state of affairs. Before publication, I had this sanity checked (though not necessarily endorsed) by an EA researcher with more context. Apologies in advance if it misrepresents the actual state of affairs, but that's precisely the thing I'm trying to clarify for myself and others.]
At GiveWell, the standard of evidence is relatively well understood. We can all see the cost-effectiveness analysis spreadsheet (even if it isn't taken 100% literally), compare QALYs, and see that some charities are likely much more effective than others.
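As a toy illustration of the kind of comparison that spreadsheet makes legible (every number below is invented for the sketch, not taken from any actual GiveWell CEA):

```python
# Toy GiveWell-style comparison: cost per QALY for two hypothetical charities.
# Every figure here is made up; real CEAs have far more structure and caveats.
charities = {
    "Charity A": {"cost_per_person": 5.0,  "qalys_per_person": 0.02},
    "Charity B": {"cost_per_person": 50.0, "qalys_per_person": 0.01},
}

for name, c in charities.items():
    cost_per_qaly = c["cost_per_person"] / c["qalys_per_person"]
    print(f"{name}: ${cost_per_qaly:,.0f} per QALY")
```

Even if nobody takes the point estimates literally, an order-of-magnitude gap in cost per QALY is the kind of signal everyone can see and argue about.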
In contrast, Open Philanthropy is purposefully opaque. As Holden describes in the "Anti-principles" section of his post on hits-based giving:
We don't: require a strong evidence base before funding something. Quality evidence is hard to come by, and usually requires a sustained and well-resourced effort. Requiring quality evidence would therefore be at odds with our interest in neglectedness.
And:
We don't: expect to be able to fully justify ourselves in writing... Process-wise, we've been trying to separate our decision-making process from our public writeup process. Typically, staffers recommend grants via internal writeups. Late in our process, after decision-makers have approved the basic ideas behind the grant, other staff take over and "translate" the internal writeups into writeups that are suitable to post publicly. One reason I've been eager to set up our process this way is that I believe it allows people to focus on making the best grants possible, without worrying at the same time about how the grants will be explained.
These are reasonable anti-principles. I'm not here to bemoan obfuscation or question the quality of evidence.
(Also note this recent post, which clarifies a distinction within Open Phil between "causes focused on maximizing verifiable impact within our lifetimes" and "causes directly aimed at affecting the very long-run future". I'm primarily asking about the latter, which could be thought of as "Holden Open Phil", in contrast to the former, "Alex Open Phil".)
My question is really: Given that so much of the decision making process for these causes is private, what are we actually debating when we talk about them on the EA Forum?
Of course there are specific points that could be made. Someone could, in relative isolation, estimate the cost of an intervention, or do some work towards estimating its impact.
But when it comes to actually arguing that X is a high priority cause, or even suggesting that it might be, it's totally unclear to me both:
- What level of evidence is required.
- What level of estimated impact is required.
To give some more specific examples, it's unclear to me how someone outside of Open Philanthropy could go about advocating for the importance of an organization like New Science or Qualia Research Institute.
Or, in the more recent back-and-forth between Rethink Priorities and Mark Lutter on charter cities, Linch (of RP) wrote:
I don't get why analyses from all sides [keep] skipping over detailed analysis of indirect effects.* To me by far the strongest argument for charter cities is the experimentation value/"laboratories of governance" angle, such that even if individual charter cities are in expectation negative, we'd still see outsized returns from studying and partially generalizing from the outsized successful charter cities that can be replicated elsewhere, host country or otherwise (I mean that's the whole selling point of the Shenzhen stylized example after all!).
At least, I think this is the best/strongest argument. Informally, I feel like this argument is practically received wisdom among EAs who think about growth. Yet it's pretty suspicious that nobody (to the best of my knowledge) has made this argument concrete and formal in a numeric way and thus exposed it to stress-testing.
I agree that this is a strong argument for charter cities. My (loose) impression is that it's been neglected precisely because it's harder to express in a formal and numeric way than the existing debate (from both sides) over economic growth rates and subsequent increases to time-discounted log consumption.
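To make the contrast concrete, here is the sort of toy expected-value sketch one might write down for the experimentation argument. I want to stress that every number below is invented purely to show the shape of the calculation; the hard (and so far undone) part is defending any of these inputs.

```python
# Toy sketch of the "laboratories of governance" argument, with made-up numbers.
# The claim: even if a typical charter city is negative in expectation, a small
# chance of an outlier success whose lessons get replicated elsewhere could
# dominate the calculation. Units are arbitrary "value points".
p_outlier = 0.05            # assumed chance a city becomes a Shenzhen-like outlier
value_typical = -1.0        # assumed direct value of a typical (non-outlier) city
value_outlier = 5.0         # assumed direct value of an outlier city
replication_value = 40.0    # assumed value of lessons replicated in the host country and beyond

ev_direct = (1 - p_outlier) * value_typical + p_outlier * value_outlier
ev_total = ev_direct + p_outlier * replication_value

print(f"Direct EV per city:           {ev_direct:+.2f}")   # negative
print(f"EV including experimentation: {ev_total:+.2f}")    # positive
```

The numbers aren't the point; the point is that the replication term is exactly the part nobody has pinned down, which is presumably why the formal debate keeps gravitating back to growth rates and consumption instead.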
Again, I'm not here to complain about streetlight effects or express worry that EA tries too hard to quantify things. I understand the value of that approach. I'm specifically asking: as far as it concerns the Holden Open Phil world, which is expressly (as I understand it) more speculative, risk-neutral, and non-transparent than some other EA grantmakers, what is the role of public EA Forum discussion?
Some possibilities:
- Public discussion is meant to settle specific questions, but not address broader questions of grant-worthiness.
- Even in public, discussions can be productive as long as participants have sufficient context on Holden Open Phil priorities, gained either through informal channels or by interfacing with HOP directly (perhaps as a consultancy).
- Absent that context, the EA Forum serves more as a training ground on the path to working within a formal EA organization.
I was really confused by your post because it seemed to ask for normative rules against talking about philanthropy and grants to EA causes, which doesn't seem reasonable.
Now, after reading your comments, I think what you meant is closer to:
“It seems unworkably hard to talk about grants in the new cause areas. What do we do?”
I’m still not sure if this is what you want, but since no one has really answered, I want to try to give thoughts that might serve your purposes.
From your comment:
I don't understand the statement that "these are not the kinds of issues we are (or should be) discussing".
To be specific: this is a cause area question, and it seems totally up for discussion.
For example, someone could criticize a cause area by pointing to a substantial period of time, say 3 or 5 years, in which progress has been low or stagnant, or by noting that experts say as much, or that the area is plausibly already funded or solved.
(This seems possible but very difficult, both because of the moral and epistemic uncertainty and because cause areas are not non-zero-sum games.)
On the positive side, people can post new cause areas and discuss why they are important.
This seems much more productive, and there may even be strong demand for this.
It seems unlikely that an EA Forum discussion alone will establish a new cause area, but such a discussion seems like an extremely valuable use of the forum.
It seems reasonable to say that existing advisors are low in value or that new advisors can be added. This can be done diplomatically.
It seems easy to unduly pick holes in new orgs, but there are situations where things are very defective and the outlook is bad, and it's very reasonable to point this out, again diplomatically.
(Note that I think I have examples of most of the above that actually occurred. I don't think it's that productive or becoming to link them all.)
In the above, I tried to focus on criticism, because that is harder.
I think your post might be asking for more positive ways to communicate meta issues; this seems sort of easy (?).
To be clear, regarding what you say: I think a red herring is that the wording in the "Case for the grant" sections is very terse. But I don't think this terseness is a norm outside of grant descriptions, or necessarily the only way to talk about or signal the value of organizations.
For example, a post a few pages long, offering a perspective on New Science that points out things that are useful and interesting, would certainly be well received (the org does seem extremely interesting!). It could mention tangible projects and researchers, and otherwise present truthful narratives suggesting that the org is attracting and influencing talent or otherwise improving the life sciences ecosystem.
I might have more to say but I am worried I still "don't get" your question.
But isn't GiveWell-style philanthropy exactly what's not applicable to your example of charter cities?
My sense is that the case for charter cities rests on some macro/systems process that is hard to measure (and that is why it is only now becoming a cause area and why the debate exists).
I specifically didn't want ...