In this short post I set out a model for understanding charity evaluation (and cause prioritisation) research: how and when such research is useful to people, and how it can be done better.
A MODEL OF DO-GOODERS
There are many people who want to make the world a better place by giving money to charity, and who would use (or could be persuaded to use) some amount of charity evaluation research to help guide their decisions.
Each such person has a set of moral beliefs about what it means to do good. I like to break these down into their core intrinsic values and the causes they believe in.
- Their core values are the result of moral introspection: for example, wanting a happy world or a just world. If someone is asked why they hold their core values, there is no underlying reason beyond a strong intuitive belief. These values are rarely changed by facts about the world.
- The causes they believe in stem from their beliefs about the world combined with their core values: for example, wanting to end poverty, fight crime, or fight capitalism.
The more such individuals are willing to step back and make decisions based on their core values, rather than on a cause or charity area they stumbled upon at some point, the better their decisions will be and the more applicable any charity evaluation research will be.
However, the difficulty with producing comprehensive charity evaluation or cause prioritisation advice is that all of these people have different core values and different moral intuitions. The differences may be subtle: for example, I may want to maximise the happiness of everyone in the world (classical utilitarianism) while my friend Sally may want to maximise the fulfilment of everyone's preferences (preference utilitarianism). Or they may be extreme: I may not care at all about preventing the suffering of animals, while my friend Sammy might believe that animals are of equal moral importance to humans.
Here is how I have seen charity evaluation research happening:
1. Have an audience: Find a group of people who care about doing good, would use charity evaluation research, and whose core values are at most subtly different. For example, Giving What We Can (GWWC) started with utilitarian philosophers.
2. Find a consensus: Make sure your audience agree on the change they want to see, and compromise where necessary. For example, all the utilitarian philosophers founding GWWC wanted a world with more happiness and less suffering.
3. Narrow the scope: The utilitarian founders of GWWC realised that their £ could go further in the developing world than in the developed world, which let them narrow the scope of the charities they were considering.
4. Choose a metric: Ideally one that different charities can be ranked on. E.g. QALYs for early GWWC research, years of animal suffering spared for early Animal Charity Evaluators research, or reduction in suicide rates for people who care about preventing extreme suffering.
5. Rank charities: Use your metric to put charities or intervention types in order of apparent effectiveness, e.g. by QALYs: http://dcp-3.org/sites/default/files/dcp2/DCP02.pdf.
6. In-depth investigation: Look in detail at the charities that come top of the list and check they are actually any good. For example, the best intervention at reducing suicide rates does not tackle depression but makes suicide harder, by reducing the amount of harmful chemicals in fertiliser. This step requires understanding issues like room for more funding, fungibility and so on. See: http://www.givewell.org/charity-evaluation-questions
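Steps 4 and 5 can be sketched in a few lines of code. This is only an illustration of the ranking idea: the charity names and cost-effectiveness figures below are hypothetical placeholders, not real estimates, and the metric (QALYs per £) stands in for whatever metric a group chooses.

```python
# Minimal sketch of steps 4-5: choose a metric, then rank charities by it.
# All names and figures are made up for illustration.

def rank_by_cost_effectiveness(charities):
    """Rank charities by units of the chosen metric (here QALYs) per pound."""
    return sorted(charities, key=lambda c: c["qalys"] / c["cost_gbp"], reverse=True)

charities = [
    {"name": "Charity A", "qalys": 500, "cost_gbp": 10_000},   # 0.050 QALYs/pound
    {"name": "Charity B", "qalys": 1200, "cost_gbp": 15_000},  # 0.080 QALYs/pound
    {"name": "Charity C", "qalys": 300, "cost_gbp": 12_000},   # 0.025 QALYs/pound
]

for c in rank_by_cost_effectiveness(charities):
    print(f"{c['name']}: {c['qalys'] / c['cost_gbp']:.3f} QALYs per pound")
```

The ranking only orders charities by apparent effectiveness; step 6 (in-depth investigation of room for more funding, fungibility, evidence quality) still has to happen before the top of the list can be trusted.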
THE LIMITS OF THIS PROCESS
Firstly, even among the people who are going to use the results of such research, no one is going to be perfectly happy with them.
Some of the utilitarian philosophers involved in the early days of GWWC would care more about the long-run economic effects of the interventions (and would think it better to give to a charity with stronger evidence of leading to long-run economic growth, e.g. SCI over AMF); others may worry about the knock-on effects on animals (and would think it better to give to a charity that saves lives in largely vegetarian regions like India, e.g. Deworm the World over SCI).
Secondly, for anyone whose motivating core values are different, the charity evaluation research is going to be unpersuasive and of limited use.
If I care about making sure that society is just, if I care about the long run more than the short term, if I care about helping the worst off the most, or if I care about protecting freedom, then research done by a group of people with utilitarian values is going to be of little use to me.
AGAINST TRYING TO CHANGE OTHERS' VALUES
One response to this might be to assume that anyone whose moral intuitions differ from yours clearly has incorrect moral intuitions. I think there is a time and a place for challenging another's moral intuitions and values; however, in almost all cases I think it is a poor idea. These beliefs are deeply held and hard to change, and trying to change them can come across as unaccepting, argumentative, and unwelcoming. I am not going to defend this position in this post, but for a discussion see: http://effective-altruism.com/ea/18u/intuition_jousting_what_it_is_and_why_it_should/
CONCLUSIONS
• Apply or improve this methodology. I hope that having a written-out account of how charity evaluation happens is useful for analysing and improving how the EA community evaluates charities. I think the early stages have happened each time the EA community has evaluated charities, but I have not seen them written up like this. This is at a higher level than I have seen discussed previously, such as by GiveWell or on this forum.
• Be aware of the limits of existing charity evaluation research, and recognise the differences in others' values. If you are trying to convince someone who cares primarily about creating a just society of the value of GiveWell's research, this may well be a waste of time (or at least require a much softer approach). Much of the research will not be that relevant to helping them do good.
• Spot the gaps in existing charity evaluation research. See the section below on the London equality and justice cause prioritisation project.
• I have written this about charity evaluation, but I think the model, process, and conclusions above roughly apply to all cause prioritisation research.
THE LONDON EQUALITY AND JUSTICE CAUSE PRIORITISATION PROJECT
I think effective altruism is too utilitarian / too welfarist. In particular, it feels to me that the effective altruism community has not done research that speaks to people who say they care about, and are motivated by, above all else, a desire for a just and equal society.
So I wanted to spark some research to address this. I have put £1000 of my donations at the whim of people who care about equality and justice, if they can do some research to find the best charity, by their values, for that money to go to. So far the plan is to roughly follow the process set out above.
To follow this group, go to: https://www.facebook.com/groups/699926826860322/, where you can read the write-up of the first session and see the map of our values. If you are in London, please come to our next event on 18th May.
I do not yet know how this will go:
- Perhaps everyone is secretly a utilitarian at heart, and if they think about their values for long enough they will realise that.
- Maybe at the end of extensive research the group would conclude that the charities currently recommended in the EA community are the best places to give.
- Maybe this task is too difficult and no progress will be made. Maybe the process set out above will not work to deliver conclusions.
Comments, criticisms of the above, pointing out of spelling mistakes, and so on would be hugely appreciated.
Is sharing models like this useful? I have a model of how humans think about doing good in the world, and of how useful charity evaluation work is, and I have tried to put this model into words. However, I am uncertain how useful sharing models like this actually is.
I have already received one earnest anonymous criticism: that a smaller, homogeneous EA community is better, as it has much less risk of collapse, so trying to expand EA research to people with different values is a bad thing. Do others agree with this?