Hi all, I'm currently working on a contribution to a special issue of Public Affairs Quarterly on the topic of "philosophical issues in effective altruism". I'm hoping that my contribution can provide a helpful survey of common philosophical objections to EA (and why I think those objections fail)—the sort of thing that might be useful to assign in an undergraduate philosophy class discussing EA.
The abstract:
Effective altruism sounds so innocuous—who could possibly be opposed to doing good, more effectively? Yet it has inspired significant backlash in recent years. This paper addresses some common misconceptions, and argues that the core ideas of effective altruism are both excellent and widely neglected. Reasonable people may disagree on details of implementation, but every decent person should share the basic goals or values underlying effective altruism.
I cover:
- Five objections to moral prioritization (including the systems critique)
- Earning to give
- Billionaire philanthropy
- Longtermism
- Political critique
Given the broad (survey-style) scope of the paper, each argument is addressed pretty briefly. But I hope it nonetheless contains some useful insights. For example, I suggest the following "simple dilemma for those who claim that EA is incapable of recognizing the need for 'systemic change'":
Either their total evidence supports the idea that attempting to promote systemic change would be a better bet (in expectation) than safer alternatives, or it does not. If it does, then EA principles straightforwardly endorse attempting to promote systemic change. If it does not, then by their own lights they have no basis for thinking it a better option. In neither case does it constitute a coherent objection to EA principles.
On earning to give:
Rare exceptions aside, most careers are presumably permissible. The basic idea of earning to give is just that we have good moral reasons to prefer better-paying careers, from among our permissible options, if we would donate the excess earnings. There can thus be excellent altruistic reasons to pursue higher pay. This claim is both true and widely neglected. The same may be said of the comparative claim that one could easily have more moral reason to pursue "earning to give" than to pursue a conventionally "altruistic" career that more directly helps people. This comparative claim, too, is both true and widely neglected. Neither of these important truths is threatened by the deontologist's claim that one should not pursue an impermissible career. The relevant moral claim is just that the directness of our moral aid is not intrinsically morally significant, so a wider range of possible actions are potentially worth considering, for altruistic reasons, than people commonly recognize.
On billionaire philanthropy:
EA explicitly acknowledges the fact that billionaire philanthropists are capable of doing immense good, not just immense harm. Some find this an inconvenient truth, and may dislike EA for highlighting it. But I do not think it is objectionable to acknowledge relevant facts, even when politically inconvenient... Unless critics seriously want billionaires to deliberately try to do less good rather than more, it's hard to make sense of their opposing EA principles on the basis of how they apply to billionaires.
I still have time to make revisions, and space to expand the paper if needed, so if anyone has time to read the whole draft and offer any feedback (either in comments below, or privately via DM/email/whatever), that would be most welcome!
You come across as arrogant for a few reasons, all of which are in principle fixable.
1: You seem to believe people who don't share your values are simply ignorant of them, and not in a deep "looking for a black cat in an unlit room through a mirror darkly" sort of way. If you think your beliefs are prima facie correct, fine, most people do - but you still have to argue for them.
2: You mischaracterize utilitarianism in ways that are frankly incomprehensible, and become evasive when those characterizations are challenged. At the risk of reproducing exactly that pattern, here's an example:
As you have been more politely told many times in this comment section already: claiming that utilitarians assign intrinsic value to cost-effectiveness is absurd. Utilitarians value total well-being (though what exactly that means is a point of contention) and nothing else. I would happily incinerate all the luxury goods humanity has ever produced if it meant no one ever went hungry again. Others would go much further.
What I suspect you're actually objecting to is aggregation of utility across persons - since that, plus the grossly insufficient resources available to us, is what makes cost-effectiveness a key instrumental concern in almost all situations - but if so the objection is not articulated clearly enough to engage with.
3: Bafflingly, given (1), you also don't seem to feel the need to explain what your values are! You name them (or at least it seems these are yours) and move on, as if we all understood them in precisely the same way. But we don't. For example: utilitarianism is clearly "impartial" and "neutral" as I understand those terms (i.e. agent-neutral and impartial with respect to different moral patients), whereas folk-morality is clearly not.
I'm guessing, having just googled that quote, that you mean something like this, in which case there's a further complication: you're almost certainly using "intrinsic value" and "instrumental value" in a very different sense from the people you're talking to. The above versions of "independence" and "neutrality" are, by my lights, obviously instrumental - these are cultural norms for one particular sort of organization at one particular moment in human history, not universal moral law.