Hi all!
I have just published an article on EA called 'Charity vs Revolution: Effective Altruism and the Systemic Change Objection'.
I restate the systemic change objection in more charitable terms than one often sees, and offer an epistemic critique of EA as well as a somewhat more speculative critique of charity in general.
Some of you might find it interesting!
A pre-print is here: https://goo.gl/51AUDe
And the final, pay-walled version is here: https://link.springer.com/arti…/10.1007%2Fs10677-019-09979-5
Comments, critiques and complaints very welcome!
Only a minority of EA's total impact comes from immediate poverty relief.
Sure. So now we are really talking about donations to movement building rather than bed nets. But it's not prima facie obvious that these considerations will point against EA rather than in favor of it. We start with a basic presumption that people who aim at making the world better will, on average, make the world better overall, compared to those who don't. Then, if the historical and qualitative arguments tell us otherwise about EA, we can change our opinion. We may update to think EA is worse than we thought before, or we may update to think that it's even better.
However, critics only seem to care about the dimensions along which it would be worse. Picking out the one or two particular dimensions where you can make a provocative enough point to get published in a humanities journal is not a reliable way to approach these questions. It is easy to come up with a long list of positive effects, but "EA charity creates long-run norms of more EA charity" is banal, and nobody is going to write a paper making a thesis out of it. A balanced overview of different effects along multiple dimensions and plausible worldviews is the right way to approach it.
You still don't get it. You think that if we stop at the first step - "our basic presumption that people who aim at making the world better will on average make the world better overall" - then that's some sort of big assumption or commitment. It's not. It's a prior. It is based on simple decision theory and thin social models which are largely independent of whether you accept liberalism or capitalism or whatever. It doesn't mean they are telling you that you're wrong and have nothing to say; it means they are telling you that they haven't yet identified an overall reason to favor what you're saying over some countervailing possibilities.
You are welcome to talk about the importance of deeper investigation, but the idea that EAs are making some thick assumption about society here is baseless. Probably they don't have the time or background that you do to justify everything in terms of lengthy reflectivist theory. Expecting everyone else to spend years reading the same philosophy that you read is inappropriate; if you have a talent, then just start applying it - don't attack people just because they don't know it already. (Or, worse, attack people for not simply assuming that you're right and all the other academics are wrong.)