Hi all, I'm currently working on a contribution to a special issue of Public Affairs Quarterly on the topic of "philosophical issues in effective altruism". I'm hoping that my contribution can provide a helpful survey of common philosophical objections to EA (and why I think those objections fail)—the sort of thing that might be useful to assign in an undergraduate philosophy class discussing EA.
The abstract:
Effective altruism sounds so innocuous—who could possibly be opposed to doing good, more effectively? Yet it has inspired significant backlash in recent years. This paper addresses some common misconceptions, and argues that the core ideas of effective altruism are both excellent and widely neglected. Reasonable people may disagree on details of implementation, but every decent person should share the basic goals or values underlying effective altruism.
I cover:
- Five objections to moral prioritization (including the systems critique)
- Earning to give
- Billionaire philanthropy
- Longtermism
- Political critique
Given the broad (survey-style) scope of the paper, each argument is addressed pretty briefly. But I hope it nonetheless contains some useful insights. For example, I suggest the following "simple dilemma for those who claim that EA is incapable of recognizing the need for 'systemic change'":
Either their total evidence supports the idea that attempting to promote systemic change would be a better bet (in expectation) than safer alternatives, or it does not. If it does, then EA principles straightforwardly endorse attempting to promote systemic change. If it does not, then by their own lights they have no basis for thinking it a better option. In neither case does it constitute a coherent objection to EA principles.
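To make the "better bet (in expectation)" comparison concrete, here's a toy sketch in Python. Every number below is invented purely for illustration; nothing hangs on the particular values:

```python
# Toy expected-value comparison; all numbers are hypothetical.

def expected_value(p_success: float, payoff: float) -> float:
    """Expected value = probability of success times the payoff if it succeeds."""
    return p_success * payoff

# A safe, well-evidenced intervention: near-certain, modest payoff.
safe = expected_value(p_success=0.95, payoff=1_000)        # 950 units of good

# A systemic-change campaign: a long shot with a huge payoff if it works.
systemic = expected_value(p_success=0.02, payoff=100_000)  # 2,000 units of good

print(f"safe: {safe:.0f}, systemic: {systemic:.0f}")
```

If your total evidence supports numbers like these, EA principles endorse the systemic bet; if your evidence supports a much lower probability of success, the safe option wins, and by your own lights should be preferred.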
On earning to give:
Rare exceptions aside, most careers are presumably permissible. The basic idea of earning to give is just that we have good moral reasons to prefer better-paying careers, from among our permissible options, if we would donate the excess earnings. There can thus be excellent altruistic reasons to pursue higher pay. This claim is both true and widely neglected. The same may be said of the comparative claim that one could easily have more moral reason to pursue "earning to give" than to pursue a conventionally "altruistic" career that more directly helps people. Neither of these important truths is threatened by the deontologist's claim that one should not pursue an impermissible career. The relevant moral claim is just that the directness of our moral aid is not intrinsically morally significant, so a wider range of possible actions is potentially worth considering, for altruistic reasons, than people commonly recognize.
On billionaire philanthropy:
EA explicitly acknowledges that billionaire philanthropists are capable of doing immense good, not just immense harm. Some find this an inconvenient truth, and may dislike EA for highlighting it. But I do not think it is objectionable to acknowledge relevant facts, even when politically inconvenient... Unless critics seriously want billionaires to deliberately try to do less good rather than more, it's hard to make sense of their opposing EA principles on the basis of how they apply to billionaires.
I still have time to make revisions (and space to expand the paper if needed), so if anyone has time to read the whole draft and offer feedback, either in comments below or privately via DM/email/whatever, that would be most welcome!
Thanks for writing this! My sense from talking to non-EAs about longtermism is that most buy into asymmetric views of population ethics. I'm not sure what you say here will be very reassuring to them:
If you only care about S-risks, not X-risks, and still want to get longtermism, you need to think that the amount of suffering in the future could be much greater than the amount of suffering at present, such that our diminished ability to prevent future suffering is offset by the scale of that suffering.

In other words, if you think there is already astronomical suffering in the world, due to, e.g., the tens of billions of factory-farmed animals living lives full of suffering, then you have to think there is a "non-trivial" chance of a far more dystopian future in order to be a longtermist. It's pretty understandable to me why these people would think we should work on fixing the dystopia we're already in rather than working to prevent a theoretically worse dystopia. I would probably tweak the language in the above paragraph to acknowledge that.
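To put that trade-off in rough quantitative terms, here's a toy sketch; every number below is invented purely for illustration:

```python
# Toy model of the trade-off; all numbers are hypothetical.
# Expected suffering averted ~= chance the problem occurs
#                               * our ability to influence it
#                               * its scale.

present_scale = 1.0          # suffering today (normalized), e.g. factory farming
present_tractability = 0.10  # we can act fairly directly on present problems

future_scale = 1_000.0       # a far more dystopian future, if it occurs
future_tractability = 0.001  # much weaker ability to influence the far future
p_dystopia = 0.01            # the "non-trivial" chance of that future

present_ev = present_tractability * present_scale            # 0.1
future_ev = p_dystopia * future_tractability * future_scale  # ~0.01

print(f"present: {present_ev:.3f}, future: {future_ev:.3f}")
```

On these numbers, present-focused work wins by an order of magnitude; the longtermist bet only wins if p_dystopia or future_scale is high enough to offset the much lower tractability, which is exactly the judgment call asymmetric views will be skeptical of.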
Separately, I didn't read the whole paper, so maybe you say this somewhere, but it might be worth mentioning that you don't need longtermism to think that many of the "longtermist" things EAs work on (e.g., preventing pandemics, reducing AI risk) are worthwhile.
Thanks again for writing this!