MichaelStJules

Arguably we can only say a world state X is "better" than a world state Y iff both

  1. switching from X to Y is bad and
  2. switching from Y to X is good.

FWIW, people with person-affecting views would ask "better for whom?". Each set of people could have their own betterness order. Person-affecting views basically try to navigate these different possible betterness orders.

Thanks for the feedback! I've edited the post with some clarification.

I think standard decision theory (e.g. expected utility theory) is actually often framed as deciding between (or ranking) outcomes, or prospects more generally, not between actions. But actions have consequences, so we just need actions with the above outcomes as consequences. Maybe it's pressing buttons, pulling levers or deciding government policy. Either way, this doesn't seem very important, and I doubt most people will be confused about this point.

On the issue of switching between worlds, for the sake of the thought experiment, assume the current world has 1 million people, the same people common to all three outcomes, but it's not yet decided whether the world will end up like A, A+ or Z. That's what you're deciding: choosing between possible futures (or world histories, past, present and future, but ignoring the common past).

I don't intend for you to be able to switch from A+ or Z to A by killing people. A is defined so that the extra people never exist. It's the way things could turn out. Creating extra people and then killing them would be a different future.

We could make one of the three options the "default future", and then we have the option to pick one of the others. If we’re consequentialists, we (probably) shouldn’t care about which future is default.

Or, maybe I add an uncontroversially horrible future, a 4th option, the 1 million people being tortured forever, as the default future. So, this hopefully removes any default bias.

I wrote a bit more about Dasgupta's approach and how to generalize it here.

You can conservatively multiply through by the probability that the time of perils is short enough and that the risk drops by enough orders of magnitude.

In Parfit's case, we have a good explanation for why you're rationally required to bind yourself: doing so is best for you.

The more general explanation is that it's best according to your preferences, which can also reflect or just be your moral views. It's not necessarily a matter of personal welfare, narrowly construed. We have similar thought experiments for total utilitarianism. As long as you

  1. expect that you'd do more to further your own values/preferences with your money than the driver would further them with it,
  2. don't disvalue breaking promises (or don't disvalue it enough), and
  3. can't bind yourself to paying and know this,

then you'd predict you won't pay and be left behind.

Perhaps you're morally required to bind yourself in Two-Shot Non-Identity, but why?

Generically, if and because you hold a wide PAV, and it leads to the best outcome ahead of time on that view. There could be various reasons why someone holds a wide PAV. It's not about it being better for Bobby or Amy. It's better "for people", understood in wide person-affecting terms.

One rough argument for wide PAVs could be something like this, based on Frick, 2020 (but without asymmetry):

  1. If a person A existed, exists or will exist in an outcome,[1] then the moral standard of "A's welfare" applies in that outcome, and its degree of satisfaction is just A's lifetime (or future) welfare.
  2. Between two outcomes, X and Y, if 1) standard x applies in X and standard y applies in Y (and either x and y are identical standards or neither applies in both X and Y), 2) standards x and y are of "the same kind", 3) x is at least as satisfied in X as y is in Y, and 4) all else is equal, then X ≽ Y (X is at least as good as Y).
    1. If keeping promises matters in itself, then it's better to make a promise you'll keep than a promise you'll break, all else equal.
    2. With 1 (and assuming different people result in "the same kind" of welfare standards with comparable welfare), "Just Bobby" is better than "Just Amy", because the moral standard of Bobby's welfare would be more satisfied than the moral standard of Amy's welfare.
    3. This is basically Pareto for standards, but anonymous/insensitive to the specific identities of standards, as long as they are of "the same kind".
  3. It's not better (or worse) for a moral standard to apply than to not apply, all else equal.
    1. So creating Bobby isn't better than not doing so, unless we have some other moral standard(s) to tell us that.[2]
    2. This is similar to Existence Anticomparativism.
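To make premise 2 a bit more concrete, here's a minimal sketch of the anonymous Pareto comparison over welfare standards, assuming welfares are comparable numbers; the welfare levels for Bobby and Amy are made up for illustration:

```python
def at_least_as_good(x_welfares, y_welfares):
    """Rough sketch of premise 2: X is at least as good as Y if the welfare
    standards applying in the two outcomes can be matched one-to-one, standards
    of "the same kind" to each other, with each standard at least as satisfied
    in X as its counterpart is in Y (all else equal). With numerical welfares
    and the same number of people, the sorted matching is the best possible one."""
    if len(x_welfares) != len(y_welfares):
        return None  # standards of different kinds; this sketch leaves X and Y incomparable
    return all(x >= y for x, y in zip(sorted(x_welfares), sorted(y_welfares)))

# "Just Bobby" vs "Just Amy": one welfare standard applies in each outcome.
# Hypothetical welfare levels: Bobby 10, Amy 6.
print(at_least_as_good([10], [6]))  # True: Just Bobby is at least as good as Just Amy
print(at_least_as_good([6], [10]))  # False: and not conversely, so Just Bobby is better
print(at_least_as_good([10], []))   # None: different kinds/numbers of standards, so silent (cf. premise 3)
```

This only captures the anonymous Pareto core; the extra standards in footnote 2 would add further comparisons on top of it.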

And suppose that (for whatever reason) you can't bind yourself in Two-Shot Non-Identity, so that the choice to create Bobby (having previously created Amy) remains open. In that case, it seems like our wide view must again make permissibility depend on lever-lashing or past choices.

I would say the permissibility of choices depends on what options are still available, and so can change if options that were available before become unavailable. "Just Amy" can be impermissible ahead of time because "Just Bobby" is still available, and then become permissible after "Just Bobby" is no longer available. If Amy already exists as you assume, then "Just Bobby" is no longer available. I explain more here.

I guess that means it depends on lever-lashing? But if that's it, I don't find that very objectionable, and it's similar to Parfit's hitchhiker.

  1. ^

    Like B-theory of time or eternalism.

  2. ^

    This would need to be combined with the denial of many particular standards, e.g. treating total welfare as a standard of "the same kind" across all populations. If we stop with only the standards in 1, then we just get anonymous Pareto, but this leaves many welfare tradeoffs between people incomparable. We could extend it in various ways, e.g. for each set S of people who will ever exist in an outcome, the moral standard of S's total welfare applies, but it's only of "the same kind" for sets of people with the same number of people.

Ah, I should have read more closely. I misunderstood and was unnecessarily harsh. I'm sorry.

I think your response to Risberg is right.

I would still say that permissibility could depend on lever-lashing (in some sense?) because it affects what options are available, but in a different way. Here is the view I'd defend:

Ahead of time, any remaining option or sequence of choices that ends up like "Just Amy" will be impermissible if there's an available option or sequence of choices that ends up like "Just Bobby" (assuming no uncertainty). Available options/sequences of choices are otherwise permissible by default.

Here are the consequences in your thought experiments:

  1. In the four button case, the "Just Amy" button is impermissible, because there's a "Just Bobby" button.
  2. In the lashed levers case, it's impermissible to pull either, because this would give "Just Amy", and the available alternative is "Just Bobby".
  3. In the unlashed levers case, 
    1. Ahead of time, each lever is permissible to pull and permissible to not pull, as long as you won't pull both (or leave both pulled, in case you can unpull). Ahead of time, pulling both levers is impermissible, because that would give "Just Amy", and "Just Bobby" is still available. This agrees with 1 and 2.
    2. But if you have already pulled one lever (and this is irreversible), then "Just Bobby" is no longer available (either Amy is/will be created, or Bobby won't be created), and pulling the other is permissible, which would give "Just Amy". "Just Amy" is therefore permissible at this point.

As we see in 3.b., "Just Bobby" gets ruled out, and then "Just Amy" becomes permissible after and because of that, but only after "Just Bobby" is ruled out, not before. Permissibility depends on what options are still available, specifically if "Just Bobby" is still available in these thought experiments. "Just Bobby" is still available in 2 and 3.a.
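If it helps, here's a minimal sketch of that rule in code. The end-state names and the sets of still-reachable end states are hypothetical stand-ins for the thought experiments above:

```python
def permissible_end_states(reachable):
    """Sketch of the view above: among the end states still reachable through
    some available sequence of choices, "Just Amy" is impermissible whenever
    "Just Bobby" is also still reachable; everything else is permissible by
    default (assuming no uncertainty)."""
    ruled_out = {"Just Amy"} if "Just Bobby" in reachable else set()
    return reachable - ruled_out

# Buttons, lashed levers, or unlashed levers ahead of time: both end states reachable,
# so "Just Amy" is ruled out.
print(permissible_end_states({"Just Amy", "Just Bobby", "Neither"}))  # no "Just Amy"

# Unlashed levers after one irreversible pull: "Just Bobby" is no longer reachable,
# so "Just Amy" becomes permissible.
print(permissible_end_states({"Just Amy", "Neither"}))  # includes "Just Amy"
```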

In your post, you wrote:

Pulling both levers should either be permissible in both cases or wrong in both cases.

This is actually true ahead of time, in 2 and 3.a: pulling both together is impermissible. But having already pulled a lever and then pulling the other is permissible, as in 3.b.

Maybe this is getting pedantic and off-track, but "already having pulled a lever" is not an action available to you, it's just a state of the world. Similarly, "pulling both levers" is not an action available to you after you pulled one; you only get to pull the other lever. "Pulling both levers" (lashed or unlashed) and "pulling the other lever, after already having pulled one lever" have different effects on the world, i.e. the first creates Amy and prevents Bobby, while the second only does one of the two. I don't think it's too unusual to be sensitive to these differences. Different effects -> different evaluations.

Still, the end state "Just Amy" itself later becomes permissible/undominated without lever-lashing, but is impermissible/dominated ahead of time or with lever lashing.

A steelman could be to just set it up like a hypothetical sequential choice problem consistent with Dasgupta's approach:

  1. Choose between A and B
  2. If you chose B in 1, choose between B and C.

or

  1. Choose between A and (B or C).
  2. If you chose B or C in 1, choose between B and C.

In either case, "picking B" (including "picking B or C") in 1 means actually picking C, if you know you'd pick C in 2 and use backwards induction.

The fact that A is at least as good as (or not worse than and incomparable to) B could follow because B actually just becomes C, which is equivalent to A once we've ruled out B. It's not just facts about direct binary choices that decide rankings ("betterness"), but the reasoning process as a whole and how we interpret the steps.
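To illustrate, here's a minimal sketch of the backwards induction, with the pairwise choices as hypothetical placeholders for whatever the view says in direct one-shot comparisons (C over B at step 2, and A over C, say on behalf of the necessary people, at step 1):

```python
def best(options, prefer):
    """Pick an option that no other available option is preferred to."""
    return next(o for o in options
                if not any(prefer(p, o) for p in options))

# Hypothetical pairwise preferences standing in for the view's one-shot comparisons.
prefer = lambda x, y: (x, y) in {("C", "B"), ("A", "C")}

# Step 2 (reached only if B/C was kept open at step 1): choose between B and C.
step2 = best(["B", "C"], prefer)    # "C"

# Step 1, by backwards induction: "picking B" really means ending up with whatever
# you'd choose at step 2, i.e. C, so the step-1 comparison is between A and C.
step1 = best(["A", step2], prefer)  # "A"

print(step2, step1)  # C A
```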

At any rate, I don't think it's that important whether we interpret the rankings as "betterness", as usually understood, with its usual sensitivities and only those. I think you've set up a kind of false dichotomy between permissibility and betterness as usually understood. A third option is rankings not intended to be interpreted as betterness as usual. Or, we could interpret betterness more broadly.

Having separate rankings of options apart from or instead of strict permissibility facts can still be useful, say because we want to adopt something like a scalar consequentialist view over those rankings. I still want to say that C is "better" than B, which is consistent with Dasgupta's approach. There could be other options like A, with the same 100 people, but everyone gets 39 utility instead of 40, and another where everyone gets 20 utility instead. I still want to say 39 is better than 20, and ending up with 39 instead of 40 is not so bad, compared to ending up with 20, which would be a lot worse.

EDIT: Actually my best reply is that just Amy is impermissible whenever just Bobby is available, ahead of time considering all your current and future options (and using backwards induction). The same reason applies for all of the cases, whether buttons, levers, or lashed levers.

EDIT2: I think I misunderstood and was unfairly harsh below.


I do still think the rest of this comment below is correct in spirit as a general response, i.e. a view can make different things impermissible for different reasons. I also think you should have followed up on your own reply to Risberg or anticipated disjunctive impermissibility in response, since it seems obvious to me: it's simple, and I think it's a pretty standard way to interpret (im)permissibility. I would guess Risberg would have pointed out the same (but maybe you checked?). Your response seems uncharitable, like a strawman.

Still, the reason is actually the same across the cases here; it's just a more sophisticated one that's easier to miss, i.e. considering all future options ahead of time.

---

I agree that my/Risberg's reply doesn't help in this other case, but you can have different replies for different cases. In this other case, you just use the wide view's solution to the nonidentity problem, which tells you to not pick just Amy if just Bobby is available. Just Amy is ruled out for a different reason.

And the two types of replies fit together in a single view, which is a wide view considering the sequences of options ahead of time and using backwards induction (everyone should use backwards induction in (finite) sequential choice problems, anyway). This view will give the right reply when it's needed.

Or, you could look at it like if something is impermissible for any reason (e.g. via either reply), then it is impermissible period, so you treat impermissibility disjunctively. As another example, someone might say each of murder and lying are impermissible and for different reasons. The impermissibility of lying wouldn't "make" murder permissible. Different replies for different situations.

My understanding of a standard interpretation of (im)permissibility is that options are by default permissible, but then reasons rule out some options as impermissible. Reasons don't "make" options permissible; they can only count against. So, impermissibility is disjunctive, and permissibility is conjunctive.

[This comment is no longer endorsed by its author]

I think you'd still just choose A at the start here if you're considering what will happen ahead of time and reasoning via backwards induction on behalf of the necessary people. (Assuming C is worse than A for the original necessary people.)

If you don't use backwards induction, you're going to run into a lot of suboptimal behaviour in sequential choice problems, even if you satisfy expected utility theory axioms in one-shot choices.

Rationally and ideally, you should just maximize the expected value of your actions, taking into account your potential influence on others and their costs, including opportunity costs. This just follows assuming expected utility theory axioms. It doesn’t matter that there are other agents; you can just capture them as part of your outcomes under consideration.

When you're assigning credit across other actors whose impacts aren't roughly independent, including for estimating their cost-effectiveness for funding, Shapley values (or something similar) can be useful. You want assigned credit to sum to 100% to avoid double counting or undercounting. (Credit for some actors can even be negative, though.)
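For concreteness, here's a minimal sketch of Shapley-value credit assignment. The actors and the coalition value function are hypothetical, and in practice the subgroup values would have to be estimated:

```python
from itertools import permutations

def shapley_values(actors, value):
    """Shapley credit assignment (a brute-force sketch, fine for a handful of actors).

    `value` maps a frozenset of actors to the total impact that subgroup would have
    produced together. Each actor's credit is their average marginal contribution
    over all orderings; credits sum to value(all actors), so nothing is double- or
    under-counted, and an actor's credit can be negative."""
    actors = list(actors)
    credit = {a: 0.0 for a in actors}
    orderings = list(permutations(actors))
    for order in orderings:
        so_far = frozenset()
        for a in order:
            credit[a] += value(so_far | {a}) - value(so_far)
            so_far = so_far | {a}
    return {a: c / len(orderings) for a, c in credit.items()}

# Hypothetical example: a funder and a charity whose impacts aren't independent.
# Neither produces anything alone; together they produce 10 units of impact.
value = lambda group: 10.0 if group == frozenset({"funder", "charity"}) else 0.0
print(shapley_values(["funder", "charity"], value))  # {'funder': 5.0, 'charity': 5.0}
```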

But if you were going to calculate Shapley values, which means estimating a bunch of subgroup counterfactuals that didn't or wouldn't happen anyway, you may be able to just directly estimate how to best allocate resources instead. You could skip credit assignments (EDIT: especially ex ante credit assignments, or when future work will be similar to past work in effects).
