
Note: The idea to investigate applying the Nash bargaining solution to the problem of moral uncertainty, and that the result might be distinctive in particular in cases like Example 7 below, originated with OCB. HG led the investigation of the remainder of the issues discussed herein, and the writing of the paper.

Abstract

This paper explores a new approach to the problem of decision under relevant moral uncertainty. We treat the case of an agent making decisions in the face of moral uncertainty on the model of bargaining theory, as if the decision-making process were one of bargaining among different internal parts of the agent, with different parts committed to different moral theories. The resulting approach contrasts interestingly with the extant “maximise expected choiceworthiness” and “my favourite theory” approaches, in several key respects. In particular, it seems somewhat less prone than the MEC approach to ‘fanaticism’: allowing decisions to be dictated by a theory in which the agent has extremely low credence, if the relative stakes are high enough. Overall, however, we tentatively conclude that the MEC approach is superior to a bargaining-theoretic approach.

1 The problem of moral uncertainty

We often have to act under conditions of relevant uncertainty. Sometimes the uncertainty in question is purely empirical. When one decides whether or not to pack waterproofs, for instance, one is uncertain whether or not it will rain. Each action one might choose is a gamble: the outcome of one’s action depends, in ways that affect how highly one values the outcome, on factors of which one is ignorant and over which one has no control.

Suppose Alice packs the waterproofs but, as the day turns out, it does not rain. Does it follow that Alice made the wrong decision? In one (objective) sense of “wrong”, yes: thanks to that decision, she experienced the mild but unnecessary inconvenience of carrying bulky raingear around all day. But in a second (more subjective) sense, clearly it need not follow that the decision was wrong: if the probability of rain was sufficiently high and Alice sufficiently dislikes getting wet, her decision could easily be the appropriate one to make given her state of ignorance about how the weather would in fact turn out. Normative theories of decision-making under uncertainty aim to capture this second, more subjective, type of evaluation; the standard such account is expected utility theory.
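For concreteness, here is a minimal sketch of the expected-utility reasoning behind Alice's decision, with probabilities and utilities that are simply made up for illustration: packing can maximise expected utility ex ante even though, ex post, it turns out not to rain.

```python
# Illustrative sketch only: the credence and utility figures below are
# assumptions for the sake of the example, not values from the paper.

P_RAIN = 0.6  # Alice's credence that it will rain (assumed)

# Utility of each outcome under each action (assumed):
utilities = {
    "pack":       {"rain": -1,  "no rain": -1},  # mild inconvenience of carrying raingear
    "don't pack": {"rain": -10, "no rain":  0},  # getting soaked is much worse
}

def expected_utility(action):
    u = utilities[action]
    return P_RAIN * u["rain"] + (1 - P_RAIN) * u["no rain"]

for action in utilities:
    print(action, expected_utility(action))

# With these numbers, "pack" has expected utility -1 versus -6 for "don't pack",
# so packing is the subjectively appropriate choice even if it does not rain.
```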

We also have to act under conditions of relevant moral uncertainty. When one decides whether or not to eat meat, for instance, one is (or should be) uncertain whether or not eating meat is morally permissible.

How should one choose, when facing relevant moral uncertainty? In one (objective) sense, of course, what one should do is simply what the true moral hypothesis says one should do. But it seems there is also a second sense of “should”, analogous to the subjective “should” for empirical uncertainty, capturing the sense in which it is appropriate for the agent facing moral uncertainty to be guided by her moral credences, whatever the moral facts may be.
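To illustrate one way this credence-guided sense of "should" might be made precise (jumping ahead slightly to the "maximise expected choiceworthiness" approach surveyed in section 2), here is a minimal sketch applied to the meat-eating example. The credences and choiceworthiness scores are purely hypothetical, and the intertheoretic comparability that MEC presupposes is simply taken for granted here.

```python
# Illustrative sketch of an MEC-style calculation; all numbers are assumptions
# invented for this example, not figures from the paper.

credences = {"meat_permissible": 0.7, "meat_impermissible": 0.3}

# Choiceworthiness of each option on each moral theory (hypothetical,
# and assumed to be comparable across theories):
choiceworthiness = {
    "eat meat": {"meat_permissible": 1, "meat_impermissible": -20},
    "abstain":  {"meat_permissible": 0, "meat_impermissible":   0},
}

def expected_choiceworthiness(option):
    return sum(credences[t] * choiceworthiness[option][t] for t in credences)

for option in choiceworthiness:
    print(option, expected_choiceworthiness(option))

# Here abstaining wins (0 versus 0.7*1 + 0.3*(-20) = -5.3): the low-credence
# theory dominates because it assigns much higher stakes -- the kind of
# stakes-sensitivity ("fanaticism") flagged in the abstract.
```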

This way of setting out the issues hints that there is a close analogy between the cases of moral and empirical uncertainty, so that those who recognise a subjective reading of “ought” in the context of empirical uncertainty should also recognise a nontrivial question of appropriate action under moral uncertainty.[1] There is a lively debate about whether this analogy is valid.[2] There is also debate about precisely what kind of “should” is involved: rational, moral, or something else again.[3]

For the purpose of this article, we will simply take for granted that there is a nontrivially credence-relative sense of “should” in the moral case. We will also not take a stand on what kind of “should” it is. Our question is how the “should” in question behaves in purely extensional terms. Say that an answer to that question is a metanormative theory.

There are various existing proposed metanormative theories, but none commands widespread assent. The purpose of the present paper is to articulate and evaluate a new approach, based on bargaining theory.

The structure of the paper is as follows. Section 2 briefly surveys the main extant theories of moral uncertainty that we will use as standards for comparison, viz. the “maximise expected choiceworthiness” (MEC) and “my favourite theory” (MFT) approaches. Section 3 sets out a bargaining-theoretic approach. Section 4 establishes some general results that will prove illuminating, for the purpose of understanding and evaluating the way in which the bargaining-theoretic approach treats the problem of moral uncertainty. In sections 5–8, we use these results to analyse the performance of this approach vis-à-vis (respectively) issues of dependence of results on the presence of ‘irrelevant’ alternatives (section 5), the problem of small worlds (section 6), moral risk aversion (section 7), and sensitivity to relative stakes and (relatedly) fanaticism (section 8). Section 9 summarises, and compares the merits of a bargaining-theoretic approach with those of MEC. Our own tentative conclusion is that overall the bargaining-theoretic approach is inferior to at least one version of MEC, but this is not completely clear-cut.

Read the rest of the paper


  1. Not everyone does recognise a subjective reading of the moral ‘ought’, even in the case of empirical uncertainty. One can distinguish between objectivist, (rational-)credence-relative and pluralist views on this matter. According to objectivists (Moore, 1903; Moore, 1912; Ross, 1930, p. 32; Thomson, 1986, esp. pp. 177-9; Graham, 2010; Bykvist and Olson, 2011), respectively credence-relativists (Prichard, 1933; Ross, 1939; Howard-Snyder, 2005; Zimmerman, 2006; Zimmerman, 2009; Mason, 2013), the “ought” of morality is uniquely an objective (respectively, a credence-relative) one. According to pluralists, “ought” is ambiguous between these two readings (Russell, 1966; Gibbard, 2005; Parfit, 2011; Portmore, 2011; Dorsey, 2012; Olsen, 2017), or varies between the two readings according to context (Kolodny and MacFarlane, 2010). ↩︎

  2. The view that while some form of subjectivism about empirical uncertainty is perhaps correct, objectivism is the (uniquely) correct view about moral uncertainty, is defended by (Harman, 2011; Weatherson, 2014; Hedden, 2016). For replies, see (Sepielli, 2016; Bykvist, 2017; Sepielli, 2017; MacAskill and Ord, 2018). ↩︎

  3. The view that it is a rational “should” is defended by e.g. Bykvist (2014; 2018). On the other hand, one might well worry that even a person who does not care about morality in some sense ought to play it safe in suitable contexts of moral uncertainty; this consideration militates against the view that the ought in question is a rational rather than a moral one. For a survey of the issues, see (Bykvist, 2017, Section 2). ↩︎
