May 21, 2017
By Tom Sittler
Cross-posted to the Oxford Prioritisation Project blog. We're centralising all discussion on the Effective Altruism forum. To discuss this post, please comment here.
Summary: Congratulations to 80,000 Hours for winning the £10,000 Oxford Prioritisation Project grant! This is a summary post about our final grant decision. It links to other posts which provide more detail about each particular topic.
On March 25th we shortlisted four organisations: the Good Food Institute, the Machine Intelligence Research Institute, 80,000 Hours, and StrongMinds.
The Good Food Institute is an advocacy group for alternatives to animal products.
The Machine Intelligence Research Institute does foundational mathematical research aimed at ensuring that smarter-than-human artificial intelligence has a positive impact.
80,000 Hours provides career advice for altruistically motivated young professionals. It was founded by members of the effective altruism movement.
StrongMinds treats depression in women in Kenya and Uganda through group-based Interpersonal Psychotherapy programs.
Regrettably, we were not able to choose the shortlisted organisations as planned. My original intention was that we would choose organisations in a systematic, principled way, shortlisting those with the highest expected impact given our evidence by the shortlist deadline. This proved too difficult, however, so we instead chose the shortlist based on a mixture of our hunches about expected impact and the intellectual value of investigating an organisation further and comparing it with the others.
Later, we realised that understanding the impact of the Good Food Institute was too difficult, so we replaced it with Animal Charity Evaluators on our shortlist. Animal Charity Evaluators finds and advocates for highly effective opportunities to improve the lives of animals.
To decide between our four shortlisted organisations, we built quantitative models to estimate their impact.
A strong focus on quantitative models is part of what I hope makes the Oxford Prioritisation Project distinctive. I believe quantifying cost-effectiveness estimates, even when good numbers are extremely difficult to come by, has several advantages. I won’t go into the details here, since others have already written eloquently about this (see for example here and here). In brief, the main advantages are:
1. Using numbers helps reduce our vulnerability to scope insensitivity.
2. Using Bayesian updating, we can formalise our intuition that more robust estimates should receive greater weight.
3. When something is hard to estimate, you can break it down into easier-to-estimate components.
4. Quantitative models encourage better disagreements.
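To illustrate point (3): a quantity that is hard to estimate directly, such as an organisation's cost per person helped, can be decomposed into easier-to-estimate components whose uncertainty is then propagated by Monte Carlo sampling, much as Guesstimate does. The following is a minimal sketch of that technique; all distributions and numbers are hypothetical placeholders, not inputs from any of our actual models:

```python
import random

def sample_cost_per_person_helped(n=10_000):
    """Monte Carlo sketch: decompose a hard estimate into easier components.

    All distributions and numbers below are hypothetical illustrations.
    """
    samples = []
    for _ in range(n):
        annual_budget = random.lognormvariate(13, 0.3)   # roughly $440k, uncertain
        people_reached = random.lognormvariate(8, 0.5)   # roughly 3,000 people, uncertain
        fraction_helped = random.uniform(0.3, 0.7)       # share who actually benefit
        samples.append(annual_budget / (people_reached * fraction_helped))
    samples.sort()
    return {
        "p5": samples[int(0.05 * n)],
        "median": samples[n // 2],
        "p95": samples[int(0.95 * n)],
    }

result = sample_cost_per_person_helped()
print(result)
```

Reporting an interval (5th to 95th percentile) rather than a point estimate is what lets a later Bayesian step weigh robust and fragile estimates differently.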
I want to especially emphasise point (4). In the Oxford Prioritisation Project, and even more so in the larger effective altruism community, the average quality of our decisions probably depends more on our group epistemics than the individual epistemics of the best members. Good disagreements often cause both parties to update their views towards the truth in some way, and suggest relevant areas for further research. Bad disagreements are a waste of time because none of this information transfer is going on, and unfriendly disagreements destroy communities. When running the Oxford Prioritisation Project, meetings where we focused on analysing and improving a quantitative model were vastly better than meetings where we merely discussed qualitative considerations. Models forced us to talk with precision about the inputs that were actually crucial to the final answer, instead of posturing and getting lost in generalities. I’ll go into more detail on how quantitative models improved the epistemic atmosphere of the Oxford Prioritisation Project in a future blog post.
Below are links to the blog posts about each of the four models:
We have low confidence in most of the inputs to the models, but somewhat higher confidence that the models are structurally adequate, or at least adequate enough to be a clear improvement over using no models. We strongly encourage you to make copies of the models and substitute your own inputs, in the style of GiveWell, which publishes the personal estimates of each staff member. You can do this in Guesstimate by going to File -> Make a copy; in repl.it, you can simply edit the code and save it, which creates a new version at a new URL.
To give appropriately greater weight to more robust estimates, we used the model outputs to update a prior distribution over the impact of each grantee organisation; see my other post on that topic. We then gave the money to the shortlisted organisation with the highest posterior impact.
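The mechanics of this kind of update can be sketched with a simple normal-normal conjugate model (e.g. on a log-impact scale): the posterior is a precision-weighted average of prior and estimate, so a noisier estimate moves the prior less. This is a simplified stand-in for, not a reproduction of, the procedure described in the linked post:

```python
def posterior(prior_mean, prior_var, estimate, estimate_var):
    """Normal-normal Bayesian update: a precision-weighted average.

    A less robust estimate (larger estimate_var) receives less weight,
    so the posterior stays closer to the prior.
    """
    w = prior_var / (prior_var + estimate_var)
    post_mean = prior_mean + w * (estimate - prior_mean)
    post_var = prior_var * estimate_var / (prior_var + estimate_var)
    return post_mean, post_var

# Same point estimate, different robustness (hypothetical numbers):
robust_mean, _ = posterior(0.0, 1.0, 3.0, estimate_var=0.5)
noisy_mean, _ = posterior(0.0, 1.0, 3.0, estimate_var=5.0)
print(robust_mean, noisy_mean)  # the robust estimate pulls the posterior further
```

This formalises the intuition in point (2) above: two organisations with identical point estimates can end up with different posterior impacts if one estimate is much shakier than the other.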
From the start, I’ve thought that much of the expected impact of the Project comes from discovering whether this kind of group can work, and whether it can be replicated in local groups around the world, achieving the object-level impacts many times over.
As a result, I’m committed to providing as thorough and honest an assessment of the (object-level) impact of this project as I can.
Over the last months, I’ve been producing detailed notes on what I’ve learned from the project as well as discussing its possible impacts with friends and colleagues. My conclusions will be published in some form, but I am still deciding what would be best. Stay tuned!