(Cross-posted from my substack The Ethical Economist: a blog covering Economics, Ethics and Effective Altruism.)
Don’t give to just one charity, there are so many good charities to give to! Also, what if the charity turns out to be ineffective? Then you just wasted all your money and did no good. Don’t worry, there’s a simple solution. Just give to multiple charities and spread risk!
Thinking along these lines is natural. Whether it’s risk aversion, or just an inherent desire to support multiple charities or causes, most of us diversify our philanthropic giving. If your goal is to do the most good, however, you should fight this urge with all you’ve got.
Diversification does make some sense, some of the time. If you’re going to fill a charity’s budget with your giving then any further giving should probably go elsewhere.
But most of us are small donors. Our giving usually won't fill a budget, or hit diminishing returns. It certainly won’t hit diminishing returns at the level of an entire cause area, unless perhaps you’re a billionaire philanthropist, in which case well done you.
When you’re deciding where to give you likely have some idea of what the best option is. Maybe you want to help animals and are quite uncertain about how best to do so, but you lean towards thinking that giving to The Humane League (THL) to support their corporate campaigns is slightly better on the margin than giving to Faunalytics to support their research, even though you think there’s a possibility either option is ineffective. In this case, you should give your full philanthropic budget to THL. Fight that urge to give to both charities to cover your back if you make the wrong choice.
Giving to both charities reduces the risk of you doing no good. But, because you subjectively think that THL is slightly better than Faunalytics, it also reduces the amount of good you will actually do in expectation. If you think THL is the best, then why give to anything else? Giving to both means trading away expected good done to get more certainty that you yourself will have done some good. It’s putting your own satisfaction ahead of the expected good of the world. Don’t be that person.
At this point you might push back and say that I haven’t convincingly shown that there’s anything wrong with being risk averse in this way. That is, risk averse with respect to the amount of good a particular individual does. Fair enough, so let me try something a bit more formal.
A recent academic paper by Hilary Greaves, William MacAskill, Andreas Mogensen and Teruji Thomas explores the tension between “difference-making risk aversion” and benevolence. Consider the table below.
| Outcome goodness | Heads | Tails |
|---|---|---|
| Do nothing | 10 | 0 |
| Give to Charity A | 20 | 10 |
| Give to Charity B | 10+x | 20+x |
A fair coin will be flipped, and the result determines the payoffs if we do nothing, give to Charity A, or give to Charity B. The coin essentially represents our current uncertainty.
We do have a hunch that giving to Charity B is better. Charity B differs from Charity A in that, instead of a ½ probability of getting 20, Charity B involves a ½ probability of getting 20+x, and instead of a ½ probability of getting 10, Charity B involves a ½ probability of getting 10+x. Given this, for any x > 0 it’s clearly better to give to Charity B. In technical language we say that giving to Charity B stochastically dominates giving to Charity A.
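The dominance claim is easy to check mechanically. Here is a minimal sketch (my own illustrative code, not from the paper) that represents each option as a lottery and tests first-order stochastic dominance: B dominates A if, for every threshold, B is at least as likely as A to deliver at least that much good, and strictly more likely for some threshold.

```python
from fractions import Fraction

def survival(lottery, t):
    """P(payoff >= t) for a lottery given as {payoff: probability}."""
    return sum(p for v, p in lottery.items() if v >= t)

def stochastically_dominates(b, a):
    """First-order stochastic dominance of lottery b over lottery a."""
    thresholds = sorted(set(a) | set(b))
    at_least = all(survival(b, t) >= survival(a, t) for t in thresholds)
    strictly = any(survival(b, t) > survival(a, t) for t in thresholds)
    return at_least and strictly

half = Fraction(1, 2)
x = 1  # any x > 0 works; 1 is just an example value

charity_a = {20: half, 10: half}          # heads: 20, tails: 10
charity_b = {10 + x: half, 20 + x: half}  # heads: 10+x, tails: 20+x

print(stochastically_dominates(charity_b, charity_a))  # True
```

Note that at x = 0 the two lotteries are identical as distributions, so the dominance is only strict when x > 0, matching the argument above.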
Now instead of ‘outcome goodness’ let’s consider the ‘difference made’ of giving to either charity, relative to doing nothing (this is just some simple subtraction using the table above).
| Difference made | Heads | Tails |
|---|---|---|
| Do nothing | 0 | 0 |
| Give to Charity A | 10 | 10 |
| Give to Charity B | x | 20+x |
A key thing to notice is that an individual with ‘difference-making risk aversion’ might prefer to give to Charity A. Giving to Charity A means you will do 10 units of good for sure. But if x is small, giving to Charity B would mean doing little good if the coin lands heads. A risk averse individual will have a tendency to want to avoid this bad outcome.
So being risk averse in this case might mean wanting to give to Charity A. But we already concluded above that giving to Charity A is silly, because giving to Charity B stochastically dominates giving to Charity A!
What we see here is that ‘difference-making risk aversion’ can lead one to go astray. In one’s effort to avoid doing little good, one makes a very poor decision under uncertainty. The key takeaway is that we shouldn’t respect our ‘difference-making risk aversion’. If we truly care about ensuring the most good is done, we should avoid tendencies to diversify whether it be across charities or cause areas.
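The preference reversal can be made concrete with a small sketch. This is my own illustrative code (not from the paper), modelling a difference-making risk-averse evaluator as one who applies a concave utility function to the difference made — square root is just one convenient choice of concave function:

```python
import math

def expected_utility(lottery, u):
    """Expected utility of a lottery given as {payoff: probability}."""
    return sum(p * u(v) for v, p in lottery.items())

x = 1  # small x: Charity B does little good if the coin lands heads

# 'Difference made' relative to doing nothing (second table above)
diff_a = {10: 1.0}               # Charity A: 10 units of good for sure
diff_b = {x: 0.5, 20 + x: 0.5}   # Charity B: x on heads, 20+x on tails

# Risk-neutral evaluation: plain expected value. B wins, since 10 + x > 10.
ev_a = expected_utility(diff_a, lambda v: v)
ev_b = expected_utility(diff_b, lambda v: v)

# Difference-making risk-averse evaluation: concave utility over the
# difference made. For small x, the sure 10 from A wins.
ra_a = expected_utility(diff_a, math.sqrt)
ra_b = expected_utility(diff_b, math.sqrt)

print(ev_b > ev_a)  # True: B is better in expectation
print(ra_a > ra_b)  # True: the risk-averse evaluator prefers A anyway
```

So the same agent who agrees B stochastically dominates A in outcome terms ends up preferring A once risk aversion is applied to the difference they personally make — which is exactly the inconsistency the paper highlights.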
To you, reader, I say this: fight that urge to diversify.

Ah right. Yeah, I'm not really sure I should have worded it that way. I meant it as a sort of heuristic one can use to choose a preferred option under normative uncertainty using a maximise-expected-choiceworthiness (MEC) approach.
For example, I tend to like AI alignment work because it seems very robust to moral views I have some non-negligible credence in (totalism, person-affecting views, symmetric views, suffering-focused views and more). So using an MEC approach, AI alignment work will score very well indeed for me. Something like reducing extinction risk from engineered pathogens scores less well for me under MEC because it (arguably) only scores very well on one of those moral views (totalism). So I'd rather give my full philanthropic budget to AI alignment than give any to risks from engineered pathogens. (EDIT: I realise this means there may be better giving opportunities for me than giving to LTFF, which gives across different longtermist approaches.)
So "pick a single option that works somewhat well under multiple moral views that I have credence in" is a heuristic, and admittedly not a good one, given that one can think up a large number of counterexamples, e.g. when things get a bit fanatical.