
Note from Aaron: We have a huge collection of older content on the Forum, some of which we uploaded with archival dates (bypassing the front page).

To get these classic articles in front of more people, we'll occasionally edit their posting dates and move them to the front page. Even if you've read them before, consider doing so again — the summary in your mind may not cover all the insight from the original!

You can also:

  • Add more tags, so that people find these posts more often.
  • Vote, so that posts and their authors get the karma they deserve.
  • Comment, while the post is in front of the community — this is the best time to share your thoughts.

One of our core values is our tolerance for philanthropic “risk.” Our overarching goal is to do as much good as we can, and as part of that, we’re open to supporting work that has a high risk of failing to accomplish its goals. We’re even open to supporting work that is more than 90% likely to fail, as long as the overall expected value is high enough.

And we suspect that, in fact, much of the best philanthropy is likely to fail. We suspect that high-risk, high-reward philanthropy could be described as a “hits business,” where a small number of enormous successes account for a large share of the total impact — and compensate for a large number of failed projects.

If this is true, I believe it calls for approaching our giving with some counterintuitive principles — principles that are very different from those underlying our work on GiveWell. In particular, if we pursue a “hits-based” approach, we will sometimes bet on ideas that contradict conventional wisdom, contradict some expert opinion, and have little in the way of clear evidential support. In supporting such work, we’d run the risk of appearing to some as having formed overconfident views based on insufficient investigation and reflection.

In fact, there is reason to think that some of the best philanthropy is systematically likely to appear to have these properties. With that said, we think that being truly overconfident and underinformed would be extremely detrimental to our work; being well-informed and thoughtful about the ways in which we could be wrong is at the heart of what we do, and we strongly believe that some “high-risk” philanthropic projects are much more promising than others.

This post will:

  • Outline why we think a “hits-based” approach is appropriate.
  • List some principles that we think are sound for much decision-making, but — perhaps counterintuitively — not appropriate for hits-based giving.
  • List principles that we think are helpful for making sure we focus on the best possible high-risk opportunities.

There is a natural analogy here to certain kinds of for-profit investing, and there is some overlap between our thinking and the ideas Paul Graham laid out in a 2012 essay, Black Swan Farming.

The basic case for hits-based giving

Conceptually, we’re focused on maximizing the expected value of how much good we accomplish. It’s often not possible to arrive at a precise or even quantified estimate of expected value, but the concept is helpful for illustrating what we’re trying to do. Hypothetically, and simplifying quite a bit, we would see the following opportunities as equally promising: (1) a $1 million grant that would certainly prevent exactly 500 premature deaths; (2) a $1 million grant that would have a 90% chance of accomplishing nothing and a 10% chance of preventing 5000 premature deaths. Both would have an expected value of preventing 500 premature deaths. As this example illustrates, an “expected value” focus means we do not have a fundamental preference for low-risk philanthropy or high-risk, potentially transformative philanthropy. We can opt for either, depending on the details. As a side note, most other funders we’ve met have strong opinions on whether it’s better to take big risks or fund what’s reliable and proven; we may be unusually agnostic on this question.
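To spell out the arithmetic in this example (simply restating the numbers above), the two grants have the same expected value:

$$\mathrm{EV}_{(1)} = 1.0 \times 500 = 500 \text{ deaths prevented}, \qquad \mathrm{EV}_{(2)} = 0.9 \times 0 + 0.1 \times 5{,}000 = 500 \text{ deaths prevented}.$$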

That said, I see a few basic reasons to expect that an “expected value” approach will often favor high-risk and potentially transformative giving.

1. History of philanthropy. We previously gave a high-level overview of some major claimed successes for philanthropy. Since then, we’ve investigated this topic further via our History of Philanthropy project, and we expect to publish an updated summary of what we’ve learned by the end of 2016. One of our takeaways is that there are at least a few cases in which a philanthropist took a major risk — funding something that there was no clear reason to expect to succeed — and ended up having enormous impact, enough to potentially make up for many failed projects.

Here are some particularly vivid examples (note that these focus on magnitude of impact, rather than on whether the impact was positive):

  • The Rockefeller Foundation invested in research on improving agricultural productivity in the developing world, which is now commonly believed to have been the catalyst for a “Green Revolution” that Wikipedia states is “credited with saving over a billion people from starvation.” (The Wikipedia article discusses the role of the Rockefeller Foundation, as does this post on the HistPhil blog, which is supported by the Open Philanthropy Project.)
  • In The Birth of the Pill, Jonathan Eig credits philanthropist and feminist Katharine McCormick — advised by Margaret Sanger — with being the sole funder of crucial early-stage research leading to the development of the combined oral contraceptive pill, now one of the most common and convenient birth control methods.
  • In The Rise of the Conservative Legal Movement, Prof. Steven Teles argues that conservatives put a great deal of funding into long-term, high-risk goals with no way of predicting their success. He also argues that their ultimate impact was to profoundly change the way the legal profession operates and the general intellectual stature of political conservatism.

If accurate, these stories would imply that philanthropy — and specifically, philanthropy supporting early-stage research and high-risk projects — played a major role in some of the more significant developments of the last century.[1] A philanthropic “portfolio” containing one of these projects, plus a large number of similar failed projects, would probably have a very strong overall performance, in terms of impact per dollar.

2. Comparative advantage. When trying to figure out how to give as well as possible, one heuristic to consider is, “what are philanthropists structurally better-suited (and worse-suited) to do compared with other institutions?” Even major philanthropists tend to have relatively less funding available than governments and for-profit investors, but philanthropists are far less constrained by the need to make a profit or justify their work to a wide audience. They can support work that is very “early,” such as new and unproven ideas or work that is likely to take many decades to have an impact. They can support a number of projects that fail in order to find the ones that succeed. They can support work that requires great depth of knowledge to recognize as important and is hard to justify to a wide audience. All of these things seem to suggest that when philanthropists are funding low-probability, high-upside projects, they’re doing what they do best, relative to other institutions.

3. Analogy to for-profit investing. Many forms of for-profit investing, such as venture capital investing, are “hits businesses.” For some description of this, see the article I mentioned previously. Philanthropy seems similar in some relevant ways to for-profit investing: specifically, it comes down to figuring out how to allocate a set amount of funds between projects that can have a wide variety of outcomes. And many of the differences between for-profit investing and philanthropy (as discussed above) seem to imply that hits-based giving is even more likely to be appropriate than hits-based investing.

“Anti-principles” for hits-based giving

This section discusses principles that we think are sound for much decision-making, but not appropriate for hits-based giving. For clarity, these are phrased as “We don’t _____” where _____ is the principle we see as a poor fit with this approach.

A common theme in the items below is that for a principle to be a good fit, it needs to be compatible with the best imaginable giving opportunities — the ones that might resemble cases listed in the previous section, such as the Green Revolution. Any principle that would systematically discourage the biggest hits imaginable is probably not appropriate for hits-based giving, even if it is a good principle in other contexts.

We don’t: require a strong evidence base before funding something. Quality evidence is hard to come by, and usually requires a sustained and well-resourced effort. Requiring quality evidence would therefore be at odds with our interest in neglectedness. It would mean that we were generally backing ideas that others had already explored and funded thoroughly — which would seem to decrease the likelihood of “hit”-sized impact from our participation. And some activities, such as funding work aiming to influence policy or scientific research, are inherently hard to “test” in predictively valid ways. It seems to me that most past cases of philanthropic “hits” were not evidence-backed in the sense of having strong evidence directly predicting success, though evidence probably did enter into the work in less direct ways.

We don’t: seek a high probability of success. In my view, strong evidence is usually needed in order to justifiably assign a high probability to having a reasonably large positive impact. As with venture capital, we need to be willing to back many failures per success — and the successes need to be big enough to justify this.

We don’t: defer to expert opinion or conventional wisdom, though we do seek to be informed about them. Similar to the above point, following expert opinion and conventional wisdom is likely to cut against our goal of seeking neglected causes. If we funded early groundwork for changing expert opinion and/or conventional wisdom on an important topic, this would be a strong candidate for a “hit.” We do think it would be a bad sign if no experts (using the term broadly to mean “people who have a great deal of experience engaging with a given issue”) agreed with our take on a topic, but when there is disagreement between experts, we need to be willing to side with particular ones. In my view, it’s often possible to do this productively by learning enough about the key issues to determine which arguments best fit our values and basic epistemology.

We don’t: avoid controversial positions or adversarial situations. All else equal, we would rather not end up in such situations, but making great effort to avoid them seems incompatible with a hits-based approach. We’re sympathetic to arguments of the form, “You should be less confident in your position when intelligent and well-meaning people take the opposite side” and “It’s unfortunate when two groups of people spend resources opposing each other, resulting in no net change, when they instead could have directed all of their resources to something they agree on, such as directly helping those in need.” We think these arguments give some reason to prefer GiveWell’s top charities. But we feel they need to be set aside when aiming for “hits.”

We feel many “hits” will involve getting a multiplier on our impact by changing social norms or changing key decision-makers’ opinions. And our interest in neglectedness will often point us to issues where social norms, or well-organized groups, are strongly against us. None of the “hits” listed above were without controversy. Note that the combined oral contraceptive is an example of something that was highly controversial at the time (leading, in my view, to the necessary research being neglected by government and other funders) and is now accepted much more broadly; this, to me, is a key part of why it has been such a momentous development.

We don’t: expect to be able to fully justify ourselves in writing. Explaining our opinions in writing is fundamental to the Open Philanthropy Project’s DNA, but we need to be careful to stop this from distorting our decision-making. I fear that when considering a grant, our staff are likely to think ahead to how they’ll justify the grant in our public writeup and shy away if it seems like too tall an order — in particular, when the case seems too complex and reliant on diffuse, hard-to-summarize information. This is a bias we don’t want to have. If we focused on issues that were easy to explain to outsiders with little background knowledge, we’d be focusing on issues that likely have broad appeal, and we’d have more trouble focusing on neglected areas.

A good example is our work on macroeconomic stabilization policy: the issues here are very complex, and we’ve formed our views through years of discussion and engagement with relevant experts and the large body of public argumentation. The difficulty of understanding and summarizing the issue is related, in my view, to why it is such an attractive cause from our perspective: macroeconomic stabilization policy is enormously important but quite esoteric, which I believe explains why certain approaches to it (in particular, approaches that focus on the political environment as opposed to economic research) remain neglected.

Process-wise, we’ve been trying to separate our decision-making process from our public writeup process. Typically, staffers recommend grants via internal writeups. Late in our process, after decision-makers have approved the basic ideas behind the grant, other staff take over and “translate” the internal writeups into writeups that are suitable to post publicly. One reason I’ve been eager to set up our process this way is that I believe it allows people to focus on making the best grants possible, without worrying at the same time about how the grants will be explained.

A core value of ours is to be open about our work. But “open” is distinct from “documenting everything exhaustively” or “arguing everything convincingly.” More on this below.

We don’t: put extremely high weight on avoiding conflicts of interest, intellectual “bubbles” or “echo chambers.” There will be times when we see a given issue very differently from most people in the world, and when the people we find most helpful on the issue will be (not coincidentally) those who see the issue similarly. This can lead to a risk of putting ourselves in an intellectual “bubble” or “echo chamber,” an intellectually insulated set of people who reinforce each other’s views, without bringing needed alternative perspectives and counterarguments.[2]

In some cases, this risk may be compounded by social connections. When hiring specialists in specific causes, we’ve explicitly sought people with deep experience and strong connections in a field. Sometimes, that means our program officers are friends with many of the people who are best suited to be our advisors and grantees.

Other staff, including myself, specialize in choosing between causes rather than in focusing on a specific cause. The mission of “choosing between causes to do the most good possible” is itself an intellectual space with a community around it. Specifically, many of our staff — including myself — are part of the effective altruism community, and have many social ties in that community.

As a result, it sometimes happens that it’s difficult to disentangle the case for a grant from the relationships around it. When these situations occur, there’s a greatly elevated risk that we aren’t being objective, and aren’t weighing the available evidence and arguments reasonably. If our goal were to find the giving opportunities most strongly supported by evidence, this would be a major problem. But the drawbacks for a “hits-based” approach are less clear, and the drawbacks of too strongly avoiding these situations would, in my view, be unacceptable.

To use myself as an example:

  • My strong interest in effective altruism and impact-focused giving has led me to become friends — and live in the same house — with similarly interested people.
  • I spend a lot of time with the people I have found to most strongly share my values and basic epistemology, and to be most interesting and valuable as intellectual peers.
  • If I had a policy of asking my friends to recuse themselves from advising me or seeking support from the Open Philanthropy Project, this would mean disallowing input from some of the people whose opinions I value most.
  • Under a “hits-based” approach, we can expect the very few best projects to account for much (or most) of our impact. So disallowing ideas from some of the people who most closely share our values could dramatically lower the expected value of our work.

This issue is even more pronounced for some of our other staff members, since the staffers who are responsible for investigating funding opportunities in a given area tend to be the ones with the deepest social connections in the relevant communities.

To be clear, I do not believe we should ignore the risks of intellectual “bubbles” or conflicts of interest. To mitigate these risks, we seek to (a) always disclose relevant connections to decision-makers; (b) always make a strong active effort to seek out alternative viewpoints before making decisions, including giving strong consideration to the best counterarguments we can identify; (c) aim for key staff members to understand the most important issues themselves, rather than relying on the judgment of friends and advisors, to the extent that this is practical; (d) always ask ourselves how our relationships might be distorting our perception of a situation; (e) make sure to seek input from staff who do not have relevant conflicts of interest or social relationships.

But after doing all that, there will still be situations where we want to recommend a grant that is strongly supported by many of our friends, while attracting little interest from those outside our intellectual and social circles. I think if we avoided recommending such grants, we would be passing over some of our best chances at impact — an unacceptable cost for a “hits-based” approach.

We don’t: avoid the superficial appearance — accompanied by some real risk — of being overconfident and underinformed. When I picture the ideal philanthropic “hit,” it takes the form of supporting some extremely important idea, where we see potential while most of the world does not. We would then provide support beyond what any other major funder could in order to pursue the idea and eventually find success and change minds.

In such situations, I’d expect the idea initially to be met with skepticism, perhaps even strong opposition, from most people who encounter it. I’d expect that it would not have strong, clear evidence behind it (or to the extent it did, this evidence would be extremely hard to explain and summarize), and betting on it therefore would be a low-probability play. Taking all of this into account, I’d expect outsiders looking at our work to often perceive us as making a poor decision, grounded primarily in speculation, thin evidence and self-reinforcing intellectual bubbles. I’d therefore expect us to appear to many as overconfident and underinformed. And in fact, by the nature of supporting an unpopular idea, we would be at risk of this being true, no matter how hard we tried (and we should try hard) to seek out and consider alternative perspectives.

I think that a “hits-based” approach means we need to be ready to go forward in such situations and accept the risks that come with them. But, as discussed below, I think there are better and worse ways to do this, and important differences between engaging in this sort of risk-taking and simply pursuing self-serving fantasies.

Working principles for doing hits-based giving well

The previous section argues against many principles that are important in other contexts, and that GiveWell fans might have expected us to be following. It is reasonable to ask — if one is ready to make recommendations that aren’t grounded in evidence, expert consensus, or conventional wisdom — is there any principled way to distinguish between good and bad giving? Or should we just be funding what we’re intuitively excited about?

I think it’s hard to say what sort of behavior is most likely to lead to “hits,” which by their nature are rare and probably hard to predict. I don’t know enough about the philanthropists who have been behind past “hits” to be able to say much with confidence. But I can outline some principles we’re working with to try to do “hits-based” giving as well as possible.

Assess importance, neglectedness and tractability. These are the key criteria of the Open Philanthropy Project. I think each of them, all else equal, makes “hits” more likely, and each in isolation can often be assessed fairly straightforwardly. Much of the rest of this section pertains to how to assess these criteria in difficult situations (for example, when there is no expert consensus or clear evidence).

Consider the best and worst plausible cases. Ideally, we’d assign probabilities to each imaginable outcome and focus on the overall expected value. In practice, one approximation is to consider how much impact a project would have if it fully achieved its long-term goals (best plausible case), and how much damage it could do if it were misguided (worst plausible case). The latter gives us some indication of how cautiously we should approach a project, and how much work we should put into exploring possible counterarguments before going forward. The former can serve as a proxy for importance, and we’ve largely taken this approach for assessing importance so far. For example, see the Google spreadsheets linked here, and particularly our estimates of the value of major policy change on different issues.
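The sketch below illustrates the approximation described above. The probabilities and impact figures are invented for illustration and are not Open Phil's actual estimates; it simply shows how a full expected-value calculation relates to the "best plausible case" and "worst plausible case" proxies.

```python
# Hypothetical outcome distribution for a single grant (illustrative numbers only).
# Each entry is (probability, impact), where impact is in arbitrary "units of good";
# a negative impact represents the "worst plausible case" of a misguided project.
outcomes = [
    (0.85, 0),       # most likely: the project fails and accomplishes little
    (0.10, 1_000),   # partial success
    (0.04, 10_000),  # best plausible case: long-term goals fully achieved
    (0.01, -2_000),  # worst plausible case: the project does real damage
]

# Ideal (but rarely feasible) approach: weight every outcome by its probability.
expected_value = sum(p * impact for p, impact in outcomes)

# Practical proxies: the best case stands in for "importance"; the worst case
# indicates how cautiously to proceed and how hard to look for counterarguments.
best_case = max(impact for _, impact in outcomes)
worst_case = min(impact for _, impact in outcomes)

print(f"Expected value:       {expected_value:,.0f}")
print(f"Best plausible case:  {best_case:,.0f}")
print(f"Worst plausible case: {worst_case:,.0f}")
```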

Goals can often be far more achievable than they appear early on (some examples here), so I believe it’s often worth aiming for a worthy but near-impossible-seeming goal. If successes are rare, it matters a great deal whether we choose to aim for reasonably worthy goals or maximally impactful ones. Despite the uncertainty inherent in this sort of giving, I believe that the question, “How much good could come of the best case?” will have very different answers for different giving opportunities.

Aim for deep understanding of the key issues, literatures, organizations, and people around a cause, either by putting in a great deal of work or by forming a high-trust relationship with someone else who can. If we support projects that seem exciting and high-impact based on superficial understanding, we’re at high risk of being redundant with other funders. If we support projects that seem superficially exciting and high-impact, but aren’t being supported by others, then we risk being systematically biased toward projects that others have chosen not to support for good reasons. By contrast, we generally aim to support projects based on the excitement of trusted people who are at a world-class level of being well-informed, well-connected, and thoughtful in relevant ways.

Achieving this is challenging. It means finding people who are (or can be) maximally well-informed about issues we’ll never have the time to engage with fully, and finding ways to form high-trust relationships with them. As with many other philanthropists, our basic framework for doing this is to choose focus areas and hire staff around those focus areas. In some cases, rather than hiring someone to specialize in a particular cause, we try to ensure that we have a generalist who puts a great deal of time and thought into an area. Either way, our staff aim to become well-networked and form their own high-trust relationships with the best-informed people in the field.

I believe that the payoff of all of this work is the ability to identify ideas that are exciting for reasons that require unusual amounts of thought and knowledge to truly appreciate. That, to me, is a potential recipe for being positioned to support good ideas before they are widely recognized as good, and thus to achieve “hits.”

Minimize the number of people setting strategy and making decisions. When a decision is made as a compromise between a large number of people with very different perspectives, it may have a high probability of being a defensible and reasonable decision, but it seems quite unlikely to be an extraordinarily high-upside decision. I would guess that the latter is more associated with having a distinctive perspective on an issue based on deep thought and context that would be hard to fully communicate to others. Another way to put this is that I’d be more optimistic about a world of individuals pursuing ideas that they’re excited about, with the better ideas gaining traction as more work is done and value is demonstrated, than a world of individuals reaching consensus beforehand on which ideas to pursue.

Formally, grant recommendations currently require signoff from Cari Tuna and myself before they go forward. Informally, our long-term goal is to defer to the staff who know the most about a given case, such that strategy, priorities and grants for a given cause are largely determined by the single person who is most informed about the cause. This means, for example, that we aspire for our criminal justice reform work to be determined by Chloe Cockburn, and our farm animal welfare work to be determined by Lewis Bollard. As stated above, we expect that staff will seek a lot of input from other people, particularly from field experts, but it is ultimately up to them how to consider that input.

Getting to that goal means building and maintaining trust with staff, which in turn means asking them a lot of questions, expecting them to explain a significant amount of their thinking, and hashing out key disagreements. But we never require them to explain all of their thinking; instead, we try to drill down on the arguments that seem most noteworthy or questionable to us. Over time, we aim to lower our level of engagement and scrutiny as we build trust.

I hope to write more about this basic approach in the future.

When possible, support strong leadership with no strings (or minimal strings) attached, rather than supporting unremarkable people/organizations to carry out plans that appeal to us. The case for this principle is an extension of the case for the previous principle, and fits into the same basic approach that I hope to write more about in the future. It’s largely about shifting decision-making power to the people who have the deepest context and understanding.

Understand the other funders in a space, and hesitate to fund things that seem like a fit for them. This is an aspect of “Aim for deep understanding …” that seems worth calling out explicitly. When we fund something that is a conceptual fit for another funder, there’s a good chance that we are either (a) moving only a little more quickly than the other funder, and thus having relatively little impact; or (b) funding something that another funder declined to fund for good reasons. Having a good understanding of the other funders in a space, and ideally having good relationships with them, seems quite important.

Be wary of giving opportunities that seem unlikely (from heuristics) to be neglected. This is largely an extension of the previous principle. When an idea seems to match quite well with conventional wisdom or expert consensus, or serves a particular well-resourced interest, this raises questions about why it hasn’t already attracted support from other funders, and whether it will stay under-funded for long.

Bottom line. The ideal giving opportunity, from my perspective, looks something like: “A trusted staff member with deep knowledge of cause X is very excited to support — with few or no strings attached — the work of person Y, who has an unusual perspective and approach that few others appreciate. The staff member could easily imagine this approach having a massive impact, even if it doesn’t seem likely to. When I first hear the idea, it sounds surprising, and perhaps strange, counterintuitive or unattractive, but when I question the staff member about possible failure modes, concerns, and apparent gaps in the case for the idea, it seems that they are already well-informed and thoughtful about the questions I ask.” This basic setup seems to me to maximize odds of supporting important work that others won’t, and having a chance down the line of changing minds and getting a “hit.”

Reconciling a hits-based approach with being open about our work

A core value of ours is to be open about our work. Some reasons for this:

  • We’d like others to be able to take advantage of what we’ve learned, in order to better inform themselves.
  • We’d like others to be able to understand, question and critique our thinking.
  • We’d like there to be a more sophisticated public dialogue about how to give well.

There is some tension between these goals and the fact that, as discussed above, we expect to do many things that are hard to justify in a convincing way to outsiders. We expect that our writeups will frequently not be exhaustive or highly persuasive, and will often leave readers unsure of whether we’ve made a good decision.

However, we think there is room to achieve both goals — being open and having a “hits-based” approach — to a significant degree. For a given decision, we aim to share our thinking to the point where readers can understand:

  • The major pros and cons we perceive.
  • The premises that are key to our views.
  • The process we’ve followed.
  • What sorts of things the reader might do in order to come to the point of confidently agreeing or disagreeing with our thinking, even if they aren’t sure how to feel based on a writeup alone.


We believe that this sort of openness can accomplish a lot in terms of the goals above, even though it often won’t be exhaustive or convincing on its own.

In general, this discussion might help clarify why the Open Philanthropy Project is aimed primarily at major philanthropists — people who have the time to engage deeply with the question of where to give — rather than at individual donors. Individual donors do, of course, have the option to trust us and support us even when our views seem unusual and hard to justify. But for those who don’t already trust us, our writeups (unlike, in my view, GiveWell’s writeups) will not always provide sufficient reason to take us at our word.

“Hits-based mentality” vs. “arrogance”

As discussed above, I believe “hits-based giving” will often entail the superficial appearance of — and a real risk of — having overconfident views based on insufficient investigation and reflection. I use “arrogance” as shorthand for these qualities.

However, I think there are important, and observable, differences between the two. I think a “hits-based mentality” can be a reasonable justification for some behaviors commonly associated with arrogance — in particular, putting significant resources into an idea that is controversial and unsupported by strong evidence or expert consensus — but not for other behaviors.

Some specific differences that seem important to me:

Communicating uncertainty. I associate arrogance with being certain that one is right, and communicating accordingly. I find it arrogant when people imply that their favorite causes or projects are clearly the best ones, and especially when they imply that work being done by other people, on other causes, is unimportant. A hits-based mentality, by contrast, is consistent both with being excited about an idea and being uncertain about it. We aim to clearly communicate our doubts and uncertainties about our work, and to acknowledge there could be much we’re getting wrong, even as we put resources into our ideas.

Trying hard to be well-informed. I associate arrogance with jumping to conclusions based on limited information. I believe a well-executed “hits-based mentality” involves putting significant work into achieving a solid understanding of the case both for and against one’s ideas. We aspire to think seriously about questions and objections to our work, even though we won’t be able to answer every one convincingly for all audiences.

Respecting those we interact with and avoiding deception, coercion, and other behavior that violates common-sense ethics. In my view, arrogance is at its most damaging when it involves “ends justify the means” thinking. I believe a great deal of harm has been done by people who were so convinced of their contrarian ideas that they were willing to violate common-sense ethics for them (in the worst cases, even using violence).

As stated above, I’d rather live in a world of individuals pursuing ideas that they’re excited about, with the better ideas gaining traction as more work is done and value is demonstrated, than a world of individuals reaching consensus on which ideas to pursue. That’s some justification for a hits-based approach. But with that said, I’d also rather live in a world where individuals pursue their own ideas while adhering to a baseline of good behavior and everyday ethics than a world of individuals lying to each other, coercing each other, and actively interfering with each other to the point where coordination, communication and exchange break down.

On this front, I think our commitment to being honest in our communications is important. It reflects that we don’t think we have all the answers, and we aren’t interested in being manipulative in pursuit of our views; instead, we want others to freely decide, on the merits, whether and how they want to help us in our pursuit of our mission. We aspire to simultaneously pursue bold ideas and remember how easy it would be for us to be wrong.

This work is licensed under a Creative Commons Attribution 4.0 International License.


  1. There is some possibility of survivorship bias, and a question of how many failed projects there were for each of these successes. However, I note that the examples above aren’t drawn from a huge space of possibilities. (All of the philanthropists covered above would probably — prior to their gifts — have made fairly short lists of the most prominent philanthropists interested in their issues.) ↩︎

  2. As of August 2017, we no longer write publicly about personal relationships with partner organizations. This blog post was updated to reflect this change in practice. ↩︎

Comments (2)

Holden Karnofsky says that:

We don’t: expect to be able to fully justify ourselves in writing.

We don’t: put extremely high weight on avoiding conflicts of interest, intellectual “bubbles” or “echo chambers.”

We don’t: avoid the superficial appearance — accompanied by some real risk — of being overconfident and underinformed.

Despite this, Open Phil's communications show great honesty and thoughtfulness, in this article and others from 2016. Immense attention is given to communicating their decisions, their perspective, and other meta issues, even on awkward, complex, or difficult-to-articulate topics.

For example, look at the effort given to explaining the nuances of the "hits-based" decision to hire Open Phil's first program officer:

https://www.openphilanthropy.org/blog/process-hiring-our-first-cause-specific-program-officer

In these conversations, a common pattern we saw was that a candidate would have a concrete plan for funding one broad kind of work (for example, ballot initiatives, alternative metrics for prosecutors, or research on alternatives to incarceration) but would have relatively little to say about other broad kinds of work. This was where we found the work that had gone into our landscape document particularly useful. When a candidate didn’t mention a major aspect of the criminal justice reform field, we would ask about it and see whether they were omitting it because they (a) had strong knowledge of it and were making a considered decision to de-prioritize this aspect of the field; (b) didn’t have much experience or knowledge of this area of the field.

There's a lot given to us in just this one paragraph: a peek into a gap in breadth (among probably very high-quality candidates), and a filter that helps explain how Open Phil finds someone "well-positioned to develop a good strategy". It also concretely shows the value of the "landscape document", and of having a process and adhering to it.

The article even gives us a peek into the internal decision-making process:

Early on, Alexander Berger and I expect to work closely with her, asking many questions and having many critical discussions about the funding areas and grants she proposes prioritizing. That said, we don’t expect to understand the full case for her proposals, and we will see our role more as “spot-checking reasoning” than as “verifying every aspect of the case.” Over time, we hope to build trust and reduce the intensity of (while never completely eliminating) our critical questions. Ultimately, we hope that our grants in the space of criminal justice reform will be less and less about our view of the details, and more and more about the bet we’re making on Chloe.

This clearly shows meta-awareness of 1) how decisions are made, 2) how some control and validation of the program officer occurs, and 3) the limitations of the executives themselves.

This is an impressive level of honesty and explicitness about a very sensitive process. 

It's hard to think of another organization that would write something like this.

[I'm doing a bunch of low-effort reviews of posts I read a while ago and think are important, but which I don't have time to re-read or say very nuanced things about.]

I think this piece either helped to correct some early EA biases towards legible/high-certainty work (e.g. cash transfers), or more publicly signalled that this correction was taking place (I'm not quite sure of the intellectual history, or what caused what). 

Skimming it again, I think it makes some compelling points, and does a good job adding nuance and responding to possible concerns.

I think this is probably one of the most important pieces of the last decade.