In the 2019 EA Survey, 70% of EAs "identified with" utilitarianism (it isn't clear exactly what the question was). The ideology is closely tied to EA: I can't find the source, but I was once told that some big shots (Toby Ord?) wanted to call the movement "effective utilitarianism," and the shortlist for naming the organization that would become the Centre for Effective Altruism apparently included the "Effective Utilitarian Community."

But I argue that utilitarianism is wrong and often motivated by a desire for elegant or mathematical qualities at the expense of obvious intuitions about particular cases.

At present this is unimportant since common EA conclusions are compatible with most normal moral views, but it could become a problem in the future if dogmatic utilitarianism conflicts in an important way with common-sense or rights-based ethics. 

Full post: https://arjunpanickssery.substack.com/p/just-say-no-to-utilitarianism


Meta comment: I'd prefer if you cross-posted the whole post because I'm unlikely to go to a new link.

There's no totally satisfying answer, I think, to the question of how ethical intuitions should affect what we think is good/right/best/etc. Among other reasons, those of us who prefer principles over myopic intuition-consultation point to the fact that human intuitions seem to turn on irrelevant facts (e.g., different framings of the trolley problem provoke different responses from many humans, even when the differences seem like they shouldn't be morally relevant). I recommend Peter Singer's Ethics and Intuitions. (And if it seems too long, I would prioritize section 3.)

What do you think is the source of ethical knowledge, if not intuitions?

Clearly we can't just throw intuitions out, since then we have nothing left! But I--and Peter Singer in the linked piece--think intuitions should be a guide that helps us generate broader principles (and I happen to find utilitarianism appealing as such a principle), and I think intuitions shouldn't be seen as data that give us direct access to moral truth and that ethical theories must explain, since, considering the source of our intuitions, they're not directly generated by moral truth.

intuitions shouldn't be seen as data that give us direct access to moral truth,

I think that our initial moral intuitions about particular situations are—along with immediate judgments like "Courage is better than cowardice"—the lowest-level moral information, and these intuitions give us reason to believe that the moral facts are the way they appear.

I'll read the Singer paper.

Sometimes intuitions conflict, like if someone intuits:

  • I should pull the lever to save five in a trolley problem
  • I shouldn't kill someone and use their organs to save five others
  • The two scenarios have no morally relevant differences

So we need principles, and I find the principle "consult your intuition about the specific case" unappealing because I feel more strongly about more abstract intuitions like the third one above (or maybe like universalizability or linearity) than intuitions about specific cases.

When your intuitions conflict, you can weigh your relative credences in each and then maximize the expected quality of your choice.
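
To make that concrete, here's a minimal sketch of credence-weighted choice under conflicting intuitions; the scenario, credences, and value assignments are all invented for illustration, not taken from anyone in this thread:

```python
# Hypothetical illustration: deciding whether to pull the lever while unsure
# which of two conflicting intuitions is correct ("pulling is required" vs.
# "pulling is an impermissible killing"). All numbers are made up.

credence = {"pulling_required": 0.7, "pulling_forbidden": 0.3}

# Invented moral value of each act under each hypothesis.
value = {
    "pull":      {"pulling_required": +5, "pulling_forbidden": -10},
    "dont_pull": {"pulling_required": -5, "pulling_forbidden": 0},
}

def expected_value(act):
    """Credence-weighted moral value of an act across the hypotheses."""
    return sum(credence[h] * value[act][h] for h in credence)

scores = {act: round(expected_value(act), 2) for act in value}
print(scores, max(scores, key=scores.get))
# {'pull': 0.5, 'dont_pull': -3.5} pull
```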

Your chosen method - refuting a rule with a counterexample - throws out all moral rules, since every moral theory has counterexamples. This includes common-sense ethics - recall the protracted cross-cultural justification of slavery, one instance among thousands. (Here I'm construing "go with your gut, dude" as a rule.)

If we were nihilists, we could sigh in relief and stop here. But we're not - so what next? Clearly something not so rigid as rules.

You're also underselling the mathematical results: as a nonconsequentialist, you will act incoherently unless you make sure that Harsanyi's theorem doesn't bite your ethics. You're free to deny one of the assumptions, but there ends the conversation.

(All that said, I'm not a utilitarian.)

Your chosen method - refuting a rule with a counterexample - throws out all moral rules, since every moral theory has counterexamples.

This sounds a lot like "every hypothesis can eventually be falsified by evidence; therefore, trying to falsify hypotheses rules out every hypothesis, so we shouldn't try to falsify hypotheses."

But we are Bayesians, are we not? If we are, we should update away from ethical principles when novel counterexamples are brought to our attention, with the magnitude of the update proportional to the unpleasantness of the counterexample.
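
As a toy illustration of that kind of update (a single application of Bayes' rule, where the likelihood ratio stands in for how damning the counterexample feels), here's a sketch with entirely invented numbers:

```python
# Hypothetical: how much should an unpleasant counterexample lower my
# credence in a moral principle? All numbers are invented for illustration.

prior = 0.70                    # P(principle is correct)
p_example_if_correct = 0.20     # even a correct principle faces some apparent counterexamples
p_example_if_incorrect = 0.80   # an incorrect principle generates them readily

# Bayes' rule: P(correct | counterexample observed)
posterior = (p_example_if_correct * prior) / (
    p_example_if_correct * prior + p_example_if_incorrect * (1 - prior)
)
print(round(posterior, 3))  # 0.368 -- a sizeable update away, but not a refutation
```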

Agreed

Your chosen method - refuting a rule with a counterexample - throws out all moral rules, since every moral theory has counterexamples.

I'm not sure exactly what you mean by a moral rule; e.g., "Courage is better than cowardice, all else equal" doesn't have any counterexamples. But for certain definitions of "moral rule," you should reject all moral rules as incorrect.

You're free to deny one of the assumptions, but there ends the conversation.

Looking at the post, I'll deny "My choices shouldn't be focused on ... how to pay down imagined debts I have to particular people, to society." You have real debts to particular people. I don't see how this makes ethics inappropriately "about my own self-actualization or self-image."

I argue that utilitarianism is wrong and often motivated by a desire for elegant or mathematical qualities at the expense of obvious intuitions about particular cases. 

 

I don't think this is an accurate representation of the arguments for utilitarianism.

On intuitions and utilitarianism, see my debate with Michael Huemer. I argue that deontology violates deeper intuitions in a more irresolvable way.

Here are features of EA that are justifiable from a utilitarian perspective, but not from other moral frameworks: 

1. DALY-based evaluations of policies. The idea of a DALY assumes that quality of life is interchangeable between people and aggregatable across people, which is not common sense and not true in a rights-based framework, because rights are not interchangeable between people.

2. Longtermism. Most arguments for longtermism are of the form "there will be far more people in the future, so the future is more important to preserve," which is a utilitarian argument. Maybe you could make a "future people have rights" argument, but that doesn't answer why their rights are potentially more important than neartermist concerns - only a population-weighting view does that.
 
3. (Relatedly) Population ethics. Almost every non-utilitarian view entails person-affecting views: an act is only bad if it's bad for someone. Creating happy lives is not a moral good in other philosophies, whereas (many though not all) EAs are motivated by that. 

4. Animal welfare. Animal welfare concerns as we imagine them stem from trying to reduce animal pain. You could bend over backward to explain why animals have rights, but most rights-based frameworks derive rights from some egalitarian source (making them ill-suited to saying that animals have "fewer rights than people but still some rights," which is the intuition most of us have). Moreover, even if you could derive animal rights, it would be super unclear which actions best support them (how do you operationalize animal dignity?), whereas a utilitarian view lets you say "the best actions are the ones that minimize the pain animals experience," leading to solutions like eliminating battery cages.

I don't think you can reject utilitarianism without rejecting these features of EA. Utilitarianism could be "wrong" in an abstract sense, but I think 70% of EAs see it as the best practical guide to making the world better. It often does conflict with common-sense ethics - most people's common sense would suggest that animal suffering doesn't matter and that future people matter significantly less than people alive today! Utilitarianism is not an unwanted appendage to EA that could hamper it in the future. It's the foundation of EA's best qualities: an expanding moral circle and the optimization of altruistic resources.

The use of DALYs and QALYs is not specifically utilitarian; they can be used in other frameworks. The difference is in how they are weighted. For example, a utilitarian may only care about the net gain across the whole population, whereas someone motivated by (say) a Rawlsian perspective would place more moral weight on gains for the worst off.
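
As a rough sketch of that difference, here are hypothetical numbers with a simple priority-weighting function standing in for the Rawlsian idea (a strict Rawlsian maximin rule would go further); everything here is invented for illustration:

```python
# Hypothetical comparison of two interventions, each giving QALY gains to
# three people with different baseline health-related quality of life.
# All numbers, and the weighting function, are invented for illustration.

baseline = [0.3, 0.6, 0.9]   # current quality of life per person (0-1 scale)
gains_a  = [0.0, 0.1, 0.3]   # intervention A mostly helps the best off
gains_b  = [0.2, 0.1, 0.0]   # intervention B mostly helps the worst off

def utilitarian_score(gains):
    """Plain sum of QALY gains: everyone's gain counts the same."""
    return sum(gains)

def priority_weighted_score(gains, baseline):
    """Weight each person's gain by how badly off they currently are."""
    return sum(g * (1 - b) for g, b in zip(gains, baseline))

print(round(utilitarian_score(gains_a), 2),
      round(utilitarian_score(gains_b), 2))                    # 0.4 0.3 -> A wins
print(round(priority_weighted_score(gains_a, baseline), 2),
      round(priority_weighted_score(gains_b, baseline), 2))    # 0.07 0.18 -> B wins
```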

I don't think you can easily dismiss the argument that acting virtuously and honestly produces more utility in the long run. If everyone abandons this, then society falls apart and becomes a lot more miserable. 

It seems like a tax system helps solve the problems in all of these hypotheticals by balancing people's needs and desires instead of going to one extreme or the other. Of course, people have different ideas about what the exact numbers should be, and we can have an open and democratic debate about that. And maybe different adjustments make more sense at different times. 

We shouldn't force people to starve because one person owns all the food, and we shouldn't take all of someone's money just because other people are in need. But it's still good to help others and it's even better to help more. 
