[Crossposted at Data Secrets Lox]
Since 2015 I have been a member of Giving What We Can, which means I have signed the following pledge:
I recognise that I can use part of my income to do a significant amount of good. Since I can live well enough on a smaller income, I pledge that for the rest of my life or until the day I retire, I shall give at least ten percent of what I earn to whichever organisations can most effectively use it to improve the lives of others, now and in the years to come. I make this pledge freely, openly, and sincerely.
Until now, though, I never paid much attention to what the pledge actually says. I gave to GiveWell's top recommended charities and to the EA Funds Global Development Fund. But now I am paying attention, and I notice that the pledge doesn't say I should donate to whichever organizations others recommend as effective, but to whichever organizations actually are (or can be) effective. In other words, it asks me to use my own judgement.
Using my own judgement, I see a whole array of options I never considered before. I have always thought that Eliezer Yudkowsky's idea of an AI that takes over the world was misguided, because taking over the world is a political problem, not a technical one, so I predict it cannot be solved by purely technical means. That is why I never donated to the EA Funds Long Term Future Fund: it seemed to me they placed too much emphasis on technical AI safety concerns, which I don't expect to be that important, especially in their current formulation.
David Friedman introduced me to a new way of thinking. He said the charity he donates to is the Institute for Justice, and when I asked him to quantify their output in EA terms, he said that the number of successful Supreme Court cases would be a good metric of their success. I imagine standard EA philosophy would look down on this metric for some reason I can't exactly articulate, but it does seem like the best attempt to make sense of a confusing situation.
A standard argument against political donations is that it is counterproductive for one person to spend money supporting one side of a political argument while another spends money supporting the other side: both sums (which represent real resources) cancel each other out and produce nothing. This is equivalent to the observation that, from the Outside View, you can never really be sure you are on the right side of a political argument. I know Scott (Alexander) takes such symmetry considerations seriously.
But to steelman the case for political donations: what if the big problem facing the world is that it is not healthy, and needs to be healed before it can continue on its journey to the stars and beyond? Politics is an attempt to cure that ill health by bringing parts (or at least the important parts) of the world into agreement, and what's more, agreement on something that is true. This brings us to a distinction between politics that works by force and politics that works by persuasion. The latter is what's needed to bring people to true agreement, but the former is needed to tell people to "fuck off" while we are trying to build our own utopia. Note that the people our utopia would need to protect may include the Jews during the Holocaust, and the Uyghurs and others persecuted in China today.
Besides the Institute for Justice, the main political organization I was thinking of donating to was the Committee on the Present Danger: China. To be honest, I do not know a lot about them right now, except that they are trying to move the Olympics away from Beijing, or at least to have the US boycott the Olympics if they stay in Beijing. Their reasoning seems sound to me, though again I don't know how to evaluate it from an EA perspective.
I think I've done enough talking for now; I'll open it up to the floor.