Two key questions of normative decision theory are: 1) whether the probabilities relevant to decision theory are evidential or causal; and 2) whether agents should be risk-neutral, and so maximise the expected value of the outcome, or instead risk-averse (or otherwise sensitive to risk). These questions are typically thought to be independent: our answer to one bears little on our answer to the other. But there is a surprising argument that they are not. In this paper, I show that evidential decision theory implies risk neutrality, at least in moral decision-making and at least on plausible empirical assumptions. Take any risk-aversion-accommodating decision theory, apply it using the probabilities prescribed by evidential decision theory, and every verdict of moral betterness you reach will match those of expected value theory.
When making moral decisions about aiding others, you might think it appropriate to be risk-averse. For instance, suppose you face a decision between: rescuing one person from drowning for sure; and spinning a roulette wheel—if the roulette wheel lands on 0 (one of 37 possibilities), you thereby rescue 37 people (with similarly valuable lives) from drowning and, if it lands on any of the other 36 numbers, you rescue no one. On the face of it, it seems plausible that it is better (instrumentally) to rescue the one person for sure, rather than to risk saving no one at all.
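To see why this is a choice between risk aversion and risk neutrality, note that the two options have equal expected value on the illustrative assumption that each rescued life contributes one unit of moral value:

```latex
\mathrm{EV}(\text{sure rescue}) = 1 \times 1 = 1
\qquad
\mathrm{EV}(\text{roulette}) = \tfrac{1}{37} \times 37 \;+\; \tfrac{36}{37} \times 0 = 1
```

So a risk-neutral agent is indifferent between the options, while a risk-averse agent prefers the sure rescue.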
In this paper, I will present a novel argument against risk aversion in moral cases, and in favour of risk neutrality: that one risky option is instrumentally better than another if and only if it results in a greater expected sum of moral value. The argument starts from a surprising place, usually thought to have no bearing on issues of risk aversion or risk neutrality. It starts from the claim that the probabilities used to compare options are those given by your evidence, including the evidence provided by that option being chosen; that is, from the claim that we should accept evidential decision theory (EDT).
To illustrate what EDT asks of us, consider the much-discussed Newcomb’s Problem.
Before you are two boxes, one opaque and one transparent. You can see that the transparent box contains $1,000. You cannot see into the opaque box, but you know that it contains either $0 or $1,000,000. You can either take the opaque box, or take both boxes. But the contents of the opaque box have been decided by a highly reliable predictor (perhaps with a long record of predicting the choices of others who have faced the same problem). If she predicted that you would take both boxes, it contains $0. If she predicted that you would take just the opaque box, it contains $1,000,000.
Which is better: to take one or to take both? EDT tells us that taking the one is better. Why? You know that the predictor is highly reliable. So, if you take just the opaque box, you thereby obtain strong evidence that the $1,000,000 is contained within—we can suppose the probability that it does, conditional on taking just one box, is very close to 1. But, if you take both boxes, you thereby obtain strong evidence that the opaque box is empty—the probability that it contains $0, conditional on taking both boxes, is again close to 1. Using these probabilities, taking both boxes will almost certainly win you only $1,000, while taking just the opaque box will almost certainly win you $1,000,000. The latter then seems far better.
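To illustrate with concrete numbers, suppose (purely for illustration) that the relevant conditional probabilities are 0.99. Then the evidential expected values are:

```latex
\begin{aligned}
\mathrm{EV}_{\mathrm{EDT}}(\text{one box}) &= 0.99 \times \$1{,}000{,}000 + 0.01 \times \$0 = \$990{,}000\\
\mathrm{EV}_{\mathrm{EDT}}(\text{both boxes}) &= 0.99 \times \$1{,}000 + 0.01 \times \$1{,}001{,}000 = \$11{,}000
\end{aligned}
```

On these illustrative figures, one-boxing comes out roughly ninety times better in expectation.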
Alternatively, you might endorse causal decision theory (CDT): that the probabilities used to compare options are how probable it is that choosing that option will cause each outcome; evidence provided by the choice itself is ignored (see Joyce, 1999, p. 4). In Newcomb’s Problem, to the causal decision theorist, the probability of the opaque box containing $1,000,000 is the same for both options—making either choice has no causal influence on what the predictor puts in the box, so the probability cannot change between options. Using these probabilities, taking both boxes is guaranteed to turn out at least as well as taking just the one. So, the option of taking both must be better than that of taking one.
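The dominance reasoning can be made explicit. Writing $q$ for the option-independent probability that the predictor put $1,000,000 in the opaque box, we have, whatever $q$ is:

```latex
\begin{aligned}
\mathrm{EV}_{\mathrm{CDT}}(\text{both boxes}) &= q \times \$1{,}001{,}000 + (1-q) \times \$1{,}000\\
&= q \times \$1{,}000{,}000 + \$1{,}000
= \mathrm{EV}_{\mathrm{CDT}}(\text{one box}) + \$1{,}000
\end{aligned}
```

Since the same $q$ appears in both expectations, two-boxing beats one-boxing by exactly $1,000 on any value of $q$.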
On the face of it, whether we endorse EDT’s or CDT’s core claims seems to be independent of whether we should endorse risk aversion or risk neutrality. At its core, the question of EDT or CDT is a question about what notion of probability we take as normatively relevant. And this doesn’t seem to bear on how we should respond to said probabilities, and so whether it is appropriate to be risk-averse. Any theory of risk aversion could perhaps be applied to either notion of probability.
But this turns out not to be true. As I will argue, if EDT is true then in practice so too is risk neutrality, at least for moral decision-making. And so we have a novel argument for risk neutrality; or, if you think risk neutrality deeply implausible, a novel argument against EDT.
Assume that, if not rescued, each of those people is guaranteed to drown. So, the possibility of saving no one (in the second option) does not arise because no rescue attempt is necessary; it arises because your rescue attempt would be unsuccessful. Without this assumption, it turns out that what risk aversion recommends is under-determined—see Greaves et al. (n.d.).
More generally, the argument has force against any form of risk sensitivity (any deviation from risk neutrality). But, in the moral case, risk aversion seems more plausible than risk seeking (cf. Buchak, 2019), so I will focus here on risk aversion.
One technical, and not very compelling, reason to think otherwise is this: CDT is typically axiomatised in the framework of Savage (1954), while EDT is typically axiomatised in the framework of Jeffrey (1965); and, where risk aversion is accommodated in normative decision theory, this is often done in the basic framework of Savage (see, e.g., Buchak, 2013, pp. 88, 91). But there is no necessary connection between EDT and the Jeffrey framework—EDT can be expressed in Savage's framework (e.g., Spencer and Wells, 2019, pp. 28-9), and CDT can be expressed in Jeffrey's framework (e.g., Edgington, 2011). Nor is there a necessary connection between Jeffrey's framework and risk neutrality—theories accommodating risk aversion can be formulated in that framework too (see Stefánsson and Bradley, 2019).
Risk neutrality is often assumed without argument in existing discussions of EDT and CDT. But I take it that this is typically not for any principled reason, but instead in the interests of brevity (as indicated by, e.g., Williamson, 2021: Footnote 27), or simply due to a lack of imagination.