
Jake Nebel

Professor of Philosophy @ Princeton University
25 karma · Joined Jun 2022 · jakenebel.com

Comments (4)

Hi Joe,

I find this really interesting! I'm not sure I completely understand what your view is, though, so let me ask you about a different case, not involving nihilism. 

Suppose you assign some credence to the view that all worlds are equally good ("indifference-ism"). And suppose the angel offers you a gamble that does nothing if that view is false, but will kill your family if that view is true. You use statewise indifference reasoning to conclude that accepting the gamble is just as good as rejecting it, so you accept it.
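To make the statewise step explicit (this formalization is mine, not Joe's), compare the two options state by state:

$$
\begin{array}{l|ll}
 & \text{accept} & \text{reject}\\
\hline
\text{indifference-ism true} & \text{family killed} & \text{status quo}\\
\text{indifference-ism false} & \text{status quo} & \text{status quo}
\end{array}
$$

In the bottom row the outcomes are identical; in the top row they differ, but the theory that is true in that state ranks all outcomes as equally good. Statewise indifference then licenses indifference between accepting and rejecting overall.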

Here are some possible things to say about this:

  1. Bite the bullet: It's perfectly OK to accept this gamble. 
  2. Problem with your credences: You should've been certain that indifference-ism is false. 
  3. Reject statewise indifference: Gambles that are equally good on every theory to which you assign nonzero credence need not be equally good. 
  4. Divorce choice from value: You should've rejected the gamble even though accepting it was just as good as rejecting it—say, because you care about your family. 

Am I right that, from what you write here, you'd lean towards option 4? In that case, what would you say if you don't care about your family? Or what if you're not sure what you care about? (Replace "indifference-ism" with the view that you are in fact indifferent between all possible outcomes, for example.) And are you thinking more generally that what you should do depends on what your preferences are, rather than what's best? Sorry if I'm confused! 

Hi Greg! I basically agree with all of this. But one natural worry about (e.g.) x-risk reduction is not that the undesirable event itself has negligible/Pascalian probability, but rather that the probability of making a difference with respect to that event is negligible/Pascalian. So I don't think it fully settles things to observe that the risks people are working on are sufficiently probable to worry about, if one doesn't think there's any way to sufficiently reduce that probability. (For what it's worth, I myself don't think it's reasonable to ignore the tiny reductions in probability that are available to us—I just think this slightly different worry may be more plausible than the one you are explicitly addressing.) 
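In symbols (the notation here is mine): where $V$ is the value at stake and $\Delta p$ is the reduction in probability you can bring about,

$$
\mathbb{E}[\text{value of acting}] = \Delta p \cdot V,
$$

and the worry is that $\Delta p$ may be Pascalian even when the underlying probability $p$ of the catastrophe is not.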

Hi Holden, thanks for this very nice overview of Harsanyi's theorem!  

One view that is worth mentioning on the topic of interpersonal comparisons is John Broome's idea (in Weighing Goods) that the conclusion of the theorem itself tells us how to make interpersonal comparisons (though it presupposes that such comparisons can be made). Harsanyi's premises imply that the social/ethical preference relation can be represented by the sum of individual utilities, given a suitable choice of utility function for each person. Broome's view is that this provides the basis for making interpersonal comparisons of well-being: 

As we form a judgement about whether a particular benefit to one person counts for more or less than some other particular benefit to someone else, we are at the same time determining whether it is a greater or lesser benefit. To say one benefit 'counts for more' than another means it is better to bring about this benefit rather than the other. So this is an ethical judgement. Ethical judgements help to determine our metric of good, then. (220). 

And: "the quantity of people's good acquires its meaning in such a way that the total of people's good is equal to general good" (222).  

I don't think Broome is right (see my "Aggregation Without Interpersonal Comparisons of Well-Being"), but the view is worth considering if you aren't satisfied with the other possibilities. I tend to prefer the view that there is some independent way of making interpersonal comparisons. 

On another note: I think the argument for the existence of utility monsters and legions (note 17) requires something beyond Harsanyi's premises (e.g., that utilities are unbounded). Otherwise I don't see why it's true that "Once you have filled in all the variables except for U_M [or U_K], there is some value for U_M [or U_K] that makes the overall weighted sum come out to as big a number as you want." Sorry if I'm missing something! 
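For concreteness (the notation is mine; the weights are whatever the theorem delivers), the claim concerns the weighted sum

$$
W = \sum_i w_i U_i, \qquad w_i > 0.
$$

Fixing every utility except $U_M$, reaching a target $T$ requires $U_M \ge \bigl(T - \sum_{i \ne M} w_i U_i\bigr)/w_M$, and such a value exists for every $T$ only if $U_M$ can be arbitrarily large. If instead each $U_i$ is bounded above by some $B$, then $W \le B \sum_i w_i$, so sufficiently large targets are unattainable.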

Hi Richard, thanks for this post! Just a quick comment about your discussion of ex ante Pareto violations. You write:

What is really responsible for the issue here is that we permit individuals to differ in their attitudes to risk. It is because Ann and Bob differ in these attitudes that we can find a decision problem in which they both prudentially prefer one option to another, while the latter option is guaranteed to give greater total utility than the former.

That's not quite true. Even if all individuals have the same (non-neutral) risk attitude, we can get cases where everyone prefers one option to another even though it guarantees a worse outcome (and not just by the lights of total utilitarianism). See my paper "Rank-Weighted Utilitarianism and the Veil of Ignorance," section V.B.
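A toy illustration of the structure (my own minimal example, not the one from the paper): suppose Ann and Bob share the same risk-avoidant attitude, modeled by giving the better of two equiprobable outcomes a decision weight of 0.4 rather than 0.5. Compare a fair-coin gamble A with a sure thing B:

$$
\begin{array}{l|cc|cc}
 & \text{Ann, heads} & \text{Ann, tails} & \text{Bob, heads} & \text{Bob, tails}\\
\hline
A & 10 & 0 & 0 & 10\\
B & 4.9 & 4.9 & 4.9 & 4.9
\end{array}
$$

Each person's risk-weighted value for A is $0.6 \cdot 0 + 0.4 \cdot 10 = 4 < 4.9$, so both prefer B; yet A yields total utility 10 in every state, while B yields only 9.8. (This only shows dominance by total-utilitarian lights; the example in the paper establishes the stronger parenthetical claim as well.)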