[Crossposted at Data Secrets Lox]

Since 2015 I have been a member of Giving What We Can, which means that I have signed the following pledge:

I recognise that I can use part of my income to do a significant amount of good. Since I can live well enough on a smaller income, I pledge that for the rest of my life or until the day I retire, I shall give at least ten percent of what I earn to whichever organisations can most effectively use it to improve the lives of others, now and in the years to come. I make this pledge freely, openly, and sincerely.

Until now, though, I never paid that much attention to what the pledge actually said. I gave to GiveWell's top recommended charities, and to the EA Funds Global Development Fund. But now I am paying attention, and I notice that the pledge doesn't say I should donate to whichever organizations others recommend as being effective, but rather to the organizations which are (or can be) actually effective. In other words, it is asking me to use my own judgement.

Using my own judgement, I see a whole array of options I never considered before. I have always thought that Eliezer Yudkowsky's idea of an AI taking over the world was misguided: taking over the world is a political problem, not a technical one, so I predict it cannot be solved by purely technical means. That is why I never donated to the EA Funds Long-Term Future Fund; it seemed to me they were placing too much emphasis on technical AI safety concerns, which I don't expect to be that important, especially in their current formulation.

David Friedman introduced me to a new way of thinking. He said the charity he donates to is the Institute for Justice, and when I asked him to quantify their output in EA terms he stated that the number of successful Supreme Court cases would be a good metric for measuring their success. I imagine standard EA philosophy would look down on this metric for some reason that I can't exactly quantify. But it does seem like the best attempt to make sense of a confusing situation.

A standard argument against political donations is that they are counterproductive: one person spends money to support one side of a political argument, another spends money to support the other side, and the two sums (which represent real resources) cancel each other out, producing nothing. This is equivalent to the observation that, from the Outside View, you can never really be sure that you are on the right side of a political argument. I know Scott (Alexander) takes such symmetry considerations seriously.

But to steelman the case for political donations: what if the big problem facing the world is that it is not healthy, and needs to be healed before it can continue on its journey to the stars and beyond? Politics is an attempt to solve that ill health, by bringing parts (or at least the important parts) of the world into agreement, and what's more, agreement on something that is true. This brings us to a distinction between politics that works by force and politics that works by persuasion. The latter is what's needed to bring people to true agreement, but the former is needed in order to tell people to "fuck off" while we are trying to build our own utopia. Note that the people trying to build their own utopia may include the Jews during the Holocaust, and the Uyghurs and others persecuted in China today.

Besides the Institute for Justice, the main political organization I was thinking of donating to was the Committee on the Present Danger: China. To be honest, I do not know a lot about them right now, except that they are trying to get the Olympics moved away from Beijing, or at least to have the US boycott the Olympics if they are held in Beijing. Their reasoning seems sound to me, though again I don't know how to evaluate it from an EA perspective.

I think I've done enough talking for now; I'll open it up to the floor.

Comments

He said the charity he donates to is the Institute for Justice, and when I asked him to quantify their output in EA terms he stated that the number of successful Supreme Court cases would be a good metric for measuring their success. I imagine standard EA philosophy would look down on this metric for some reason that I can't exactly quantify. But it does seem like the best attempt to make sense of a confusing situation.


I spontaneously would've thought that something broadly in the direction of positively influencing the US political system on EA causes would be met with broad approval. I think the metric itself is of limited use, though, since evaluating projects like this seems to be a more qualitative affair.

E.g., this case of the German constitutional court deciding in favor of the flourishing of young and future generations was rated positively, including by members of the Legal Priorities Project.

Politics is an attempt to solve that ill health, by bringing parts (or at least the important parts) of the world into agreement, and what's more, agreement on something that is true.

Also, just in case you weren't aware, I think increasing cooperation among governments/institutions/societies is part of the motivation for the cause area of improving institutional decision-making.

EA cause areas now include quite a bit of policy advocacy:

In each case, I think EA emphasizes estimating the impact in terms of human outcomes like lives saved. Successful Supreme Court cases could be a useful intermediate outcome, but ultimately I'd want to know something like the impact of the average case on well-being, as well as the likelihood of cases going the other way in the absence of funding the Institute for Justice. Similarly, an EA perspective on the Committee on the Present Danger: China could try to estimate the impact of each dollar donated on the likelihood that the US boycotts the Olympics, then the impact of that boycott on China's human rights policies, and then the impact of those policies on human well-being; there could also be an existential risk angle if it affects the likelihood of war. This quantification is inherently uncertain, but Bayesian reasoning as a starting point, plus help from forums like this one, can typically uncover the order of magnitude, which we can use to compare it against other interventions.
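To make that concrete, here is a minimal sketch of such a chained estimate in Python. Every number and variable name in it is an illustrative placeholder invented for the example, not a real estimate, and the QALY framing is just one possible choice of outcome unit:

```python
# A Fermi-style sketch of the chained estimate described above.
# Every number below is an illustrative placeholder, not a real estimate.

donation = 1_000  # hypothetical donation in dollars

# Step 1: change in the probability that the US boycotts the Olympics, per dollar donated.
delta_p_boycott_per_dollar = 1e-9

# Step 2: probability that a boycott meaningfully improves China's human rights policies.
p_policy_change_given_boycott = 0.01

# Step 3: well-being gain if the policies improve, in quality-adjusted life years (QALYs).
qalys_if_policy_change = 1e6

expected_qalys = (
    donation
    * delta_p_boycott_per_dollar
    * p_policy_change_given_boycott
    * qalys_if_policy_change
)
print(f"Expected impact: {expected_qalys:.4f} QALYs per ${donation} donated")

# For comparison, a GiveWell-style benchmark (also a placeholder):
# suppose a top global health charity averts one QALY per $100.
benchmark_dollars_per_qaly = 100
print(f"Benchmark: {donation / benchmark_dollars_per_qaly:.1f} QALYs per ${donation}")
```

Even with wide error bars on every factor, writing the chain out like this exposes which link the conclusion is most sensitive to, which is usually where further research effort should go.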

Open Phil supports Criminal Justice Reform ...

Historically yes, but not any more:

[W]e think the top global aid charities recommended by GiveWell (which we used to be part of and remain closely affiliated with) present an opportunity to give away large amounts of money at higher cost-effectiveness than we can achieve in many programs, including CJR, that seek to benefit citizens of wealthy countries.

-"In each case, I think EA emphasizes estimating the impact in terms of human outcomes like lives saved. Successful Supreme Court cases could be a useful intermediate outcome, but ultimately I'd want to know something like the impact of the average case on well-being, as well as the likelihood of cases going the other way in the absence of funding the Institute for Justice."

But a Supreme Court case could have potentially infinite effects in the future, since it will be used as precedent for further cases, which are themselves used as precedent, and so on. Is it really possible to model this? If it is not, then is it possible that IJ is the most effective charity, even though it cannot be analyzed under an EA framework?
