All of MichaelStJules's Comments + Replies

He said he was on a panel at EA Global and mentioned PlayPumps, a favourite EA example, in this 2015 post. Here's the YouTube video of the EA Global panel discussion.

I don't think it's true that, on the intuition of neutrality, other things are equal once we've said there are more deaths in A than in B. The lives and deaths of the contingent/future people in A wouldn't count at all on symmetric person-affecting views (narrow or wide). On some asymmetric person-affecting views, they might count, but the bad lives count fully, while the additional good lives only offset (possibly fully offset) but never outweigh the additional bad lives, so the extra lives and deaths need not count on net.

On the intuition of neutrality, there a... (read more)

Granted, but this example presents just a binary choice, with none of the added complexity of choosing between three options, so we can't infer much from it.

I can add any number of other options, as long as they respect the premises of your argument and are "unfair" to the necessary number of contingent people. What specific added complexity matters here and why?

I think you'd want to adjust your argument, replacing "present" with something like "the minimum number of contingent people" (and decide how to match counterparts if there are different numbers of... (read more)

Still, I think your argument is in fact an argument for antinatalism, or can be turned into one, based on the features of the problem to which you've been sensitive here so far. If you rejected antinatalism, then your argument proves too much and you should discount it, or you should be more sympathetic to antinatalism (or both).

You say B prevents more deaths, because it will prevent deaths of future people from the virus. But it prevents those future deaths by also preventing those people from existing.

So, for B to be better than A, you're saying it's wor... (read more)

1
Matthew Rendall
4d
I think we're talking past each other. My claim is that taking precautionary measures in case A will prevent more deaths in expectation (17 billion/1000 = 17 million) than taking precautionary measures in case B (8 billion/1000 = 8 million). We can all agree that it's better, other things being equal, to prevent more deaths in expectation than fewer. On the Intuition of Neutrality, other things seemingly are equal, making it more important to take precautionary measures against the virus in A than against the virus in B. But this is a reductio ad absurdum. Would it really be better for humanity to go extinct than to suffer ten million deaths from the virus per year for the next thousand years? And if not, shouldn’t we accept that the reason is that additional (good) lives have value?
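For reference, the expected-death figures above written out (the notation is mine; the numbers are from the comment):

\[
\mathbb{E}[\text{deaths in A}] = \frac{17 \times 10^9}{1000} = 17 \text{ million}, \qquad
\mathbb{E}[\text{deaths in B}] = \frac{8 \times 10^9}{1000} = 8 \text{ million}.
\]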

Thanks for providing these external benchmarks and making it easier to compare! Do you mind if I update the text to include a reference to your comments?

Feel free to!

Oh, I didn't mean for you to define the period explicitly as a fixed interval. I assume this can vary by catastrophe. Like maybe population declines over 5 years with massive crop failures. Or an engineered pathogen causes massive population decline in a few months.

I just wasn't sure what exactly you meant. Another interpretation would be that P_f is the total post-catastrophe population, summing over all future generations, and I just wanted to check that you meant the population at a given time, not aggregating over time.

2
Vasco Grilo
4d
Hi @MichaelStJules, I am tagging you because I have updated the following sentence: "If there is a period longer than 1 year over which population decreases, the power laws describing the ratio between the initial and final population of each of the years following the 1st could have different tail indices, with lower tail indices for years in which there is a larger population loss."

I do not think the duration of the period is too relevant for my overall point. For short and long catastrophes, I expect the PDF of the ratio between the initial and final population to decay faster than the benefits of saving a life, such that the expected value density of the cost-effectiveness decreases with the severity of the catastrophe (at least for my assumption that the cost to save a life does not depend on the severity of the catastrophe).

I see! Yes, both P_i and P_f are population sizes at a given point in time.
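To make the shape of that claim concrete, here is a minimal numerical sketch. The Pareto form of the severity distribution, the tail index, and the benefit exponent are illustrative assumptions of mine, not values from the post; the point is only that when the PDF of the population-loss ratio decays faster than the benefits of saving a life grow, the expected value density falls with severity.

```python
import numpy as np

# Illustrative sketch (assumed functional forms and parameters, not from the post).
# r = P_i / P_f >= 1 is the ratio of initial to final population (catastrophe severity).

alpha = 2.0   # assumed tail index of the severity distribution
beta = 1.0    # assumed exponent for how benefits of saving a life grow with severity
cost = 1.0    # assumed constant cost to save a life, independent of severity

r = np.logspace(0, 3, 200)             # severities from 1 to 1000
pdf = alpha * r ** -(alpha + 1)        # Pareto PDF with minimum 1 (assumed)
benefit = r ** beta                    # assumed benefit of saving a life at severity r
value_density = pdf * benefit / cost   # expected value density of cost-effectiveness

# Proportional to r ** (beta - alpha - 1): decreasing whenever beta < alpha + 1,
# i.e. whenever the PDF decays faster than the benefits grow.
print(np.all(np.diff(value_density) < 0))  # True for these parameters
```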

Expected value density of the benefits and cost-effectiveness of saving a life

You're modelling the cost-effectiveness of saving a life conditional on catastrophe here, right? I think it would be best to be more explicit about that, if so. Typically x-risk interventions aim at reducing the risk of catastrophe, not the benefits conditional on catastrophe. Also, it would make it easier to follow.

Denoting the pre- and post-catastrophe population by P_i and P_f, I assume

Also, to be clear, this is supposed to be ~immediately pre-catastrophe and ~imm... (read more)

2
Vasco Grilo
5d
Thanks for the comment, Michael! I have updated the post changing "pre- and post-catastrophe population" to "population at the start and end of a period of 1 year", which I now also refer to as the initial and final population.

No. It is supposed to be the cost-effectiveness as a function of the ratio between the initial and final population.

Yes, interpreting catastrophe as a large population loss. In my framework, x-risk interventions aim to save lives over periods whose initial population is significantly higher than the final one.

Another benchmark is GiveWell-recommended charities, which save a life for around $5,000. Assuming that's 70 years of life saved (mostly children), that would be 70 years of human life/$5000 = 0.014 years of human life/$. People spend about 1/3rd of their time sleeping, so it's around 0.0093 years of waking human life/$.

Then, taking ratios of cost-effectiveness, that's about 7 years of disabling chicken pain prevented per year of waking human life saved.
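As a check on that arithmetic, here is a short sketch. The chicken-side figure of 0.17 $cents per hour of disabling pain averted is taken from the "limited access to feed (feeders)" row of the estimates in this thread; whether that is the exact figure behind the ~7x ratio is my assumption.

```python
# A rough check of the ratio above. The 0.17 cents/hour figure (feeders row) is my
# assumed input for the chicken side; the human side uses the GiveWell numbers
# stated in the comment.

HOURS_PER_YEAR = 365.25 * 24

cost_per_life = 5_000                   # $ per life saved
years_per_life = 70                     # assumed years of life saved
human_years_per_dollar = years_per_life / cost_per_life            # ~0.014
waking_years_per_dollar = human_years_per_dollar * (2 / 3)         # ~0.0093

cents_per_pain_hour = 0.17              # $cents per hour of disabling pain averted
pain_hours_per_dollar = 100 / cents_per_pain_hour                  # ~588
pain_years_per_dollar = pain_hours_per_dollar / HOURS_PER_YEAR     # ~0.067

print(round(pain_years_per_dollar / waking_years_per_dollar, 1))   # ~7.2
```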

Then, we could consider:

  1. How bad disabling pain is in a human vs a chicken
  2. How bad human disabling pain is
... (read more)
5
lukasj10
5d
Thanks for providing these external benchmarks and making it easier to compare! Do you mind if I update the text to include a reference to your comments?

Indeed, since these were initial estimates, we did not report the other pain intensities, to keep it brief. However, once we go through the follow-up data and have the second set of estimates, we'll make sure to include all of the ranges, so that more comprehensive comparisons can be made. My understanding is that for water and feed, it could be ~1:5:7 (disabling:hurtful:annoying) and ~1:1:0.1 for heat stress.

But your caveat is very important - we've only just identified this as an interesting intervention for the Global South, and the feasibility of cost-effectively mitigating such welfare issues remains untested.

Measures aimed at addressing thermal stress and improving hen access to feed and water show promise in cost-effectively reducing significant numbers of hours spent in pain. Example initial estimates:

| Welfare issue | Total impact [hours of disabling pain averted/farm] | Cost efficacy [$/hen] | Cost efficacy [$cents/hour of disabling pain] |
|---|---|---|---|
| Thermal stress | 87.5k (46.25k-150k) | 0.77 | 1.11 (0.65-2.09) |
| Limited access to water | 23.75k (12.5k-35k) | 0.17 | 0.9 (0.61-1.71) |
| Limited access to feed (feeders) | 162.5k (103.75k-212.5k) | 0.22 | 0.17 (0.13-0.27) |
| Limited access to feed | | | |
... (read more)
5
MichaelStJules
5d
Another benchmark is GiveWell-recommended charities, which save a life for around $5,000. Assuming that's 70 years of life saved (mostly children), that would be 70 years of human life/$5000 = 0.014 years of human life/$. People spend about 1/3rd of their time sleeping, so it's around 0.0093 years of waking human life/$.

Then, taking ratios of cost-effectiveness, that's about 7 years of disabling chicken pain prevented per year of waking human life saved.

Then, we could consider:

  1. How bad disabling pain is in a human vs a chicken
  2. How bad human disabling pain is vs how valuable additional waking human life is
  3. Indirect effects (of the additional years of human life, influences on attitudes towards nonhuman animals, etc.)

We should separate whether the view is well-motivated from whether it's compatible with "ethics being about affecting persons". It's based only on comparisons between counterparts, never between existence and nonexistence. That seems compatible with "ethics being about affecting persons".

We should also separate plausibility from whether it would follow on stricter interpretations of "ethics being about affecting persons". An even stricter interpretation would also tell us to give less weight to or ignore nonidentity differences using essentially the same a... (read more)

1
Kaspar Brandner
7d
Granted, but this example presents just a binary choice, with none of the added complexity of choosing between three options, so we can't infer much from it. Well, there is a necessary number of "contingent people", which seems similar to having necessary (identical) people, since in both cases not creating anyone is not an option. Unlike in Huemer's three choice case, where A is an option.

I think there is a quite straightforward argument why IIA is false. The paradox arises because we seem to have a cycle of binary comparisons: A+ is better than A, Z is better than A+, A is better than Z. The issue here seems to be that this assumes we can just break down a three option comparison into three binary comparisons. Which is arguably false, since it can lead to cycles. And when we want to avoid cycles while keeping binary comparisons, we have to assume we do some of the binary choices "first" and thereby rule out one of the remaining ones, removing the cycle. So we need either a principled way of deciding on the "evaluation order" of the binary comparisons, or reject the assumption that "x compared to y" is necessarily the same as "x compared to y, given z". If the latter removes the cycle, that is.

Another case where IIA leads to an absurd result is preference aggregation. Assume three equally sized groups (1, 2, 3) have these individual preferences:

  1. x≻y≻z
  2. y≻z≻x
  3. z≻x≻y

The obvious and obviously only correct aggregation would be x∼y∼z, i.e. indifference between the three options. Which is different from what would happen if you'd take out either one of the three options and make it a binary choice, since each binary choice has a majority. So the "irrelevant" alternatives are not actually irrelevant, since they can determine a choice relevant global property like a cycle. So IIA is false, since it would lead to a cycle. This seems not unlike the cycle we get in the repugnant conclusion paradox, although there the solution is arguably not that all three options a
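A quick mechanical check of the majority-cycle example (a sketch; the three rankings are exactly the ones listed above, and the code only confirms that the pairwise majorities form a cycle):

```python
from itertools import combinations

# The three equally sized groups' rankings from the example, best to worst.
rankings = [
    ["x", "y", "z"],  # group 1
    ["y", "z", "x"],  # group 2
    ["z", "x", "y"],  # group 3
]

def majority_prefers(a, b):
    """True if a majority of groups rank a above b."""
    votes = sum(r.index(a) < r.index(b) for r in rankings)
    return votes > len(rankings) / 2

for a, b in combinations(["x", "y", "z"], 2):
    winner = a if majority_prefers(a, b) else b
    print(f"{a} vs {b}: majority prefers {winner}")
# x vs y -> x,  x vs z -> z,  y vs z -> y: the pairwise majorities form a cycle.
```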

Then, I think there are ways to interpret Dasgupta's view as compatible with "ethics being about affecting persons", step by step:

  1. Step 1 rules out options based on pairwise comparisons between options with the same populations, or the same number of people. Because at this step we never compare existence to nonexistence (we only compare the same people, or the same number of people, as in nonidentity), this step is arguably about affecting persons.
  2. Step 2 is just necessitarianism on the remaining options. Definitely about affecting persons.

These other views also seem compatible... (read more)

1
Kaspar Brandner
8d
I wouldn't agree on the first point, because making Dasgupta's step 1 the "step 1" is, as far as I can tell, not justified by any basic principles. Ruling out Z first seems more plausible, as Z negatively affects the present people, even quite strongly so compared to A and A+. Ruling out A+ is only motivated by an arbitrary-seeming decision to compare just A+ and Z first, merely because they have the same population size (...so what?). The fact that non-existence is not involved here (a comparison to A) is just a result of that decision, not of there really existing just two options.

Alternatively there is the regret argument, that we would "realize", after choosing A+, that we made a mistake, but that intuition seems not based on any strong principle either. (The intuition could also be misleading because we perhaps don't tend to imagine A+ as locked in.)

I agree though that the classification "person-affecting" alone probably doesn't capture a lot of the potential intricacies of various proposals.

Do you intend for the population to recover in B, or extinction with no future people? In the post, you write that the second virus "will kill everybody on earth". I'd assume that means extinction.

If B (killing 8 billion necessary people) does mean extinction and you think B is better than A, then you prefer extinction to extra future deaths. And your argument seems general, e.g. we should just go extinct now to prevent the deaths of future people. If they're never born, they can't die. You'd be assigning negative value to additional deaths, but no positiv... (read more)

1
Matthew Rendall
4d
Thanks! I was indeed assuming total extinction in B. As you say, antinatalist views will prefer A to B. If antinatalism is correct, then my argument against the intuition of neutrality fails.  Our discussion has been helpful to me, because it's made me realise that my argument is really directed against views that accept the intuition of neutrality, but aren't either (a) antinatalist or (b) narrow person-affecting.  That does limit its scope. Nevertheless, common sense morality seems to accept the intuition of neutrality, but not anti-natalism. Nor does it seem to accept narrow person-affecting views (thus most laypeople's embrace of the No Difference View when it comes to the non-identity problem). It's that 'moderate middle', so to speak, at whom my argument is directed.

If additional human lives have no value in themselves, that implies that the government would have more reason to take precautionary measures against a virus that would kill most of us than one that would kill all of us, even if the probabilities were equal.

Maybe I'm misunderstanding, but if

  • we totally discounted what happens to future/additional people (even stronger than no reason to create them), and only cared about present/necessary people, and
  • killing everyone/extinction means killing all present/necessary people (extinction now, not extinction in the
... (read more)
1
Matthew Rendall
8d
Thanks--that's very helpful. On a wide person-affecting view, A would be worse, but if we limit our analysis to present/necessary people, then outcome B would be worse. That had not occurred to me, probably because I find narrow person-affecting views so implausible. However, it doesn't seem very damaging to my argument. If we take a hardcore narrow person-affecting view, the extra ten billion deaths shouldn’t count at all in our assessment. But surely that's very hard to believe.

Alternatively, if we adopt what Parfit calls a 'two tier view', then we’d give some weight to the deaths of the contingent people in scenario A, but less than to the deaths of present/necessary people. Even if we discounted them by a factor of five, however, scenario A would still be worse than scenario B. What is more, we can adjust the numbers:

Scenario A: Seven billion necessary people die immediately and ten million die annually for the next 10,000 years, for a total of 107 billion. Most of the future people are contingent.

Scenario B: Eight billion die at once. All are necessary people.

On the two-tier view, deaths of necessary people would have to be more than a hundred times as bad as those of contingent ones for B to be worse. That is hard to believe.

Bottom line:

  1. Plausible person-affecting views will judge A worse than B.
  2. That A is worse than B is, however, implausible.
  3. ∴ No otherwise plausible person-affecting view renders a plausible judgement about this case.
  4. ∴ Person-affecting views do not provide a convincing rationale for rejecting my argument against the Intuition of Neutrality.
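Restating the two-tier arithmetic above (the algebra is mine; the numbers are from the reply):

\[
\text{A: } \underbrace{7\ \text{billion}}_{\text{necessary}} + \underbrace{10{,}000 \times 10\ \text{million}}_{\text{contingent}\,=\,100\ \text{billion}} = 107\ \text{billion deaths}, \qquad
\text{B: } 8\ \text{billion necessary deaths}.
\]

Weighting contingent deaths by \(w \le 1\), B is worse than A only if \(8 > 7 + 100\,w\), i.e. only if \(w < \tfrac{1}{100}\): necessary deaths would have to count more than one hundred times as much as contingent ones.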

I don't think 6 follows. Preventing the early deaths of future people does not imply creating new lives or making happy people. The two statements in each version of the intuition of neutrality separated by the "but" here are not actually exhaustive of how we should treat future people.

Your argument would only establish that we shouldn't be indifferent to (or discount or substantially discount) future lives, not that we have reason to ensure future people are born in the first place or to create people. Multiple views that don't find extinction much worse than almost everyone dying + population recovery could still recommend avoiding the extra deaths of future people. Especially "wide" person-affecting views.[1]

On "wide" person-affecting views, if you have one extra person Alice in outcome A, but a different extra person Bob in outcome B... (read more)

1
Matthew Rendall
8d
Thanks! Perhaps I haven't grasped what you're saying. In my example, if the first virus mutates, it'll be the one that kills more people--17 billion. If the second virus mutates, the entire human population dies at once from the virus, so only 8 billion people die in toto.  On either wide or narrow person-affecting views, it seems like we have to say that the first outcome--seven billion deaths and then ten million deaths a year for the next millennium--is worse than the second (extinction). But is that plausible? Doesn't this example undermine person-affecting views of either kind?

Dasgupta's view makes ethics about what seems unambiguously best first, and then about affecting persons second. It's still person-affecting, but less so than necessitarianism and presentism.

It could be wrong about what's unambiguously best, though, e.g. we should reject full aggregation, and prioritize larger individual differences in welfare between outcomes, so A+' (and maybe A+) looks better than Z.

Do you think we should be indifferent in the nonidentity problem if we're person-affecting? I.e. between creating a person with a great life and a ... (read more)

1
Kaspar Brandner
8d
In the non-identity problem we have no alternative which doesn't affect a person, since we don't compare creating a person with not-creating it, but creating a person vs creating a different person. Not creating one isn't an option. So we have non-present but necessary persons, or rather: a necessary number of additional persons. Then even person-affecting views should arguably say, if you create one anyway, then a great one is better than a marginally good one. But in the case of comparing A+ and Z (or variants) the additional people can't be treated as necessary because A is also an option.

I largely agree with this, but

  1. If we were only concerned with what's best for the original people when in S, the probability that, if we pick A+, we can and should switch to something like Z later matters. For the original people, it may be worth the risk. It would depend on the details.
  2. I also suspect we should first rule out A+ with Z available from S, even if we were sure we couldn't later switch to something like Z. A+ does seem unfair with Z available, from S. Whether or not we can switch to something like Z later, we'll have realized it was a mistake t
... (read more)
1
Kaspar Brandner
8d
Let's replace A with A' and A+ with A+'. A' has welfare level 4 instead of 100, and A+' has, for the original people, welfare level 200 instead of 101 (for a total of 299). According to your argument we should still rule out A+' because it's less fair than Z. Even though the original people get 196 points more welfare in A+' than in A'. So we end up with A' and a welfare level of 4. That seems highly incompatible with ethics being about affecting persons.

You could also do brain imaging to check for pain responses.

You might not even need to know what normal pain responses in the species look like, because you could just check normally painful stimuli vs control stimuli.

However, knowing what normal pain responses in the species look like would help. Also, across mammals, including humans and raccoons, the substructures responsible for pain (especially the anterior cingulate cortex) seem roughly the same, so I think we'd have a good idea of where to check.

Maybe one risk is that the brain would just adapt and ... (read more)

1
Ives Parr
9d
Great thoughts. I will need to think more deeply about how to make this possible cost-wise. We need a large sample to find the genes, but the brain imaging might make this challenging.

The arguments for unfairness of X relative to Y I gave in my previous comment (with the modified welfare levels, X=(3, 6) vs Y=(5,5)) aren't sensitive to the availability of other options: Y is more equal (ignoring other people), Y is better according to some impartial standards, and better if we give greater priority to the worse off or larger gains/losses.

All of these apply also substituting A+ for X and Z for Y, telling us that Z is more fair than A+, regardless of the availability of other options, like A, except for priority for larger gains/losses (e... (read more)

1
Kaspar Brandner
9d
It seems the relevant question is whether your original argument for A goes through. I think you pretty much agree that ethics requires persons to be affected, right? Then we have to rule out switching to Z from the start: Z would be actively bad for the initial people in S, and not switching to Z would not be bad for the new people in Z, since they don't exist. Furthermore, it arguably isn't unfair when people are created (A+) if the alternative (A) would have been not to create them in the first place.[1] So choosing A+ wouldn't be unfair to anyone.

A+ would only be unfair if we couldn't rule out Z. And indeed, it seems in most cases we in fact can't rule out Z with any degree of certainty for the future, since we don't have a lot of evidence that "certain kinds of value lock-in" would ensure we stay with A+ for all eternity. So choosing A+ now would mean it is quite likely that we'd have to choose between (continuing) A+ and switching to Z in the future, and switching would be equivalent to fair redistribution, and required by ethics. But this path (S -> A+ -> Z) would be bad for the people in initial S, and not good for the additional people in S+/Z who at this point do not exist. So we, in S, should choose A.

In other words, if S is current, Z is bad, and A+ is good now (in fact currently a bit better than A), but choosing A+ would quite likely lead us on a path where we are morally forced to switch from A+ to Z in the future. Which would be bad from our current perspective (S). So we should play it safe and choose A now.

  1. ^

     Once upon a time there was a group of fleas. They complained about the unfairness of their existence. "We all are so small, while those few dogs enjoy their enormous size! This is exceedingly unfair and therefore highly unethical. Size should have been distributed equally between fleas and dogs." The dog, which they inhabited, heard them talking and replied: "If it weren't for us dogs, you fleas

X isn't so much bad because it's unfair, but because they don't want to die. After all, fairly killing both people would be even worse.

Everyone dies, though, and their interests in not dying earlier trade off against others, as well as other interests. And we can treat those interests more or less fairly.

There are also multiple ways of understanding "fairness", not all of which would say killing both is more fair than killing one:

  1. Making things more equal, even if it's no better for anyone and worse for some (some versions of egalitarianism). This is what y
... (read more)
1
Kaspar Brandner
10d
Your argument seems to be:

  1. When restricted to A+ and Z, Z is better than A+ because A+ is unfair.
  2. When restricted to A and Z, A is better than Z.
  3. Therefore, A is better than A+ and better than Z.

But that doesn't follow, because in 1 and 2 you restricted yourself to two options, while there are three options in 3.

That's a person-affecting intuition.

I can, now that I exist, assign myself welfare level 0 in the counterfactuals in which I was never born. I can also assign welfare level 0 to potential people who don't come to exist.

People talk about being grateful to have been born. One way to make sense of this is that they compare to a counterfactual in which they were never born. Or maybe it's just adding up the good and bad in their life and judging there's more good than bad. But then an "empty life", with no goods or bads, would be net 0, and you could equate tha... (read more)

Maybe we're using these words differently?

I think it’s not true in general that for X to be more fair wrt utility than Y, it must be the case that we can in practice start from X and redistribute utility to obtain Y.

Suppose in X, you kill someone and take their stuff, and in Y, you don't. Or in X, they would die, but not by your killing, and in Y, you save them, at some personal cost.

Whole lifetime aggregate utilities, (them, you):

  1. X: (4, 6).
  2. Y: (5, 5).

X would (normally) be unfair to the other person, even if you can't bring them back to life to get bac... (read more)

1
Kaspar Brandner
10d
X isn't so much bad because it's unfair, but because they don't want to die. After all, fairly killing both people would be even worse. There are other cases where the situation is clearly unfair. Two people committed the same crime, the first is sentenced to pay $1000, the second is sentenced to death. This is unfair to the people who are about to receive their penalty. Both subjects are still alive, and the outcome could still be changed. But in cases where it is decided whether lives are about to be created, the subjects don't exist yet, and not creating them can't be unfair to them.

They can agree, but they need not. Again, if everyone were purely selfish, it seems like they would disagree. The extra people would prefer to exist, given their positive welfare levels. The original people would prefer the extra not to exist, if it's paired with a loss to their own welfare. Or, if we took the perspectives of what's best for each person's personal/selfish welfare on their behalf, we'd have those two groups of perspectives.

And we can probably rig up a version that's other-regarding for the people, say the extra people are total utilitarians, and the original people have person-affecting views.

1
Kaspar Brandner
11d
It makes sense to want to keep existing if you already exist. But believing that it would have been bad, had you never existed in the first place, is a different matter. For whom would it have been bad? Apparently for nobody.

Does it matter to you what the starting welfare levels of the 1 million people are? Would your intuitions about which outcome is best be different?

There are a few different perspectives you could take on the welfare levels in the outcomes. I intended them to be aggregate whole life welfare, including the past, present and future. Not just future welfare, and not welfare per future moment, day or year or whatever. But this difference often doesn't matter.

Z already seems more fair than A+ before you decide which comes about; you're deciding between them ahea... (read more)

1
Kaspar Brandner
11d
Z seeming more fair than A+ arguably depends on the assumption that utility in A+ ought to (and therefore could) be redistributed to increase fairness. Which contradicts the assumption of "aggregate whole lifetime welfare", as this would mean that switching (and increasing fairness) is ruled out from the start. For example, the argument in these paragraphs mentions "fairness" and "regret", which only seems to make sense insofar things could be changed: "Once the contingent people exist, Z would have been better than A+." -- This arguably means "Switching from A+ to Z is good" which assumes that switching from A+ to Z would be possible. The quoted argument for A seems correct to me, but the "unfairness" consideration requires that switching is possible. Otherwise one could simply deny that the concept of unfairness is applicable to A+. It would be like saying it's unfair to fish that they can't fly.

Between A and Z, the people in A are much better off in A, and the extra people in Z are much better off in Z (they get to exist, with positive lives). It seems like they'd disagree about switching, if everyone only considers the impact on their own welfare.

(Their welfare levels could also be the degree of satisfaction of their impartial or partially other-regarding preferences, but say they have different impartial preferences.)

1
Kaspar Brandner
11d
Switching from A to Z means that A is current when the decision to switch or not switch is made. So the additional people in Z don't exist and are not impacted if the switch isn't made. Even if Z is current, people in Z can still evaluate whether switching from A to Z is good (= would have been good), since this just means "assuming A is current, is it good to switch to Z?". Even if Z is in fact current, the people in Z can still agree that, if A had been current, a switch to Z should not have been made. Intuitions to the contrary seem to mistake "I should not have existed" for "I should not exist". The former can be true while the latter is false.

Arguably we can only say a world state X is "better" than a world state Y iff both

  1. switching from X to Y is bad and
  2. switching from Y to X is good.

FWIW, people with person-affecting views would ask "better for whom?". Each set of people could have their own betterness order. Person-affecting views basically try to navigate these different possible betterness orders.

1
Kaspar Brandner
12d
But it doesn't seem to matter whether X or Y (or neither) is actually current, everyone should be able to agree whether, e.g., "switching from X to Y is bad" is true or not. The switch choices always hypothetically assume that the first world (in this case X) is current, because that's where the potential choice to switch is made.

Thanks for the feedback! I've edited the post with some clarification.

I think standard decision theory (e.g. expected utility theory) is actually often framed as deciding between (or ranking) outcomes, or prospects more generally, not between actions. But actions have consequences, so we just need actions with the above outcomes as consequences. Maybe it's pressing buttons, pulling levers or deciding government policy. Either way, this doesn't seem very important, and I doubt most people will be confused about this point.

On the issue of switching between w... (read more)

1
Kaspar Brandner
12d
Okay, having an initial start world (call it S) that is assumed to be current makes it possible to treat the other worlds (futures) as choices. So S has 1 million people, but how many utility points do they each have? Something like 10? Then A and A+ would be an improvement for them, and Z would be worse (for them).

But if we can't switch worlds in the future, that does seem like an unrealistic restriction? Future people have just as much control over their future as we have over ours. Not being able to switch worlds in the future (change the future of the future) would mean we couldn't, once we were at A+, switch from A+ to a more "fair" future (like Z). Since not-can implies not-ought, there would then be no basis for calling A+ unfair, insofar as "unfair" means that we ought to switch to a more fair future.

The fairness consideration assumes utility can be redistributed, like money. Otherwise utility would presumably be some inherent property of people's brains, and it wouldn't be unfair to anyone not to have been born with a different brain (assuming brains can't be altered).

I wrote a bit more about Dasgupta's approach and how to generalize it here.

You can conservatively multiply through by the probability that the time of perils is short enough and that the risk drops by enough orders of magnitude.

In Parfit's case, we have a good explanation for why you're rationally required to bind yourself: doing so is best for you.

The more general explanation is that it's best according to your preferences, which can also reflect or just be your moral views. It's not necessarily a matter of personal welfare, narrowly construed. We have similar thought experiments for total utilitarianism. As long as you

  1.  expect to do more to further your own values/preferences with your own money than the driver would to further your own values/preferences with your money
  2. don
... (read more)

Ah, I should have read more closely. I misunderstood and was unnecessarily harsh. I'm sorry.

I think your response to Risberg is right.

I would still say that permissibility could depend on lever-lashing (in some sense?) because it affects what options are available, though, but in a different way. Here is the view I'd defend:

Ahead of time, any remaining option or sequence of choices that ends up like "Just Amy" will be impermissible if there's an available option or sequence of choices that ends up like "Just Bobby" (assuming no uncertainty). Available opti

... (read more)

A steelman could be to just set it up like a hypothetical sequential choice problem consistent with Dasgupta's approach:

  1. Choose between A and B
  2. If you chose B in 1, choose between B and C.

or

  1. Choose between A and (B or C).
  2. If you chose B or C in 1, choose between B and C.

In either case, "picking B" (including "picking B or C") in 1 means actually picking C, if you know you'd pick C in 2, and then use backwards induction.

The fact that A is at least as good as (or not worse than and incomparable to) B could follow because B actually just becomes C, whic... (read more)

EDIT: Actually my best reply is that just Amy is impermissible whenever just Bobby is available, ahead of time considering all your current and future options (and using backwards induction). The same reason applies for all of the cases, whether buttons, levers, or lashed levers.

EDIT2: I think I misunderstood and was unfairly harsh below.


I do still think the rest of this comment below is correct in spirit as a general response, i.e. a view can make different things impermissible for different reasons. I also think you should have followed up to your own ... (read more)

[This comment is no longer endorsed by its author]
3
EJT
17d
Here's my understanding of the dialectic here:

Me: Some wide views make the permissibility of pulling both levers depend on whether the levers are lashed together. That seems implausible. It shouldn't matter whether we can pull the levers one after the other.

Interlocutor: But lever-lashing doesn't just affect whether we can pull the levers one after the other. It also affects what options are available. In particular, lever-lashing removes the option to create both Amy and Bobby, and removes the option to create neither Amy nor Bobby. So if a wide view has the permissibility of pulling both levers depend on lever-lashing, it can point to these facts to justify its change in verdicts. These views can say: it's permissible to create just Amy when the levers aren't lashed because the other options are on the table; it's wrong to create just Amy when the levers are lashed because the other options are off the table.

Me: (Side note: this explanation doesn't seem particularly satisfying. Why does the presence or absence of these other options affect the permissibility of creating just Amy?) If that's the explanation, then the resulting wide view will say that creating just Amy is permissible in the four-button case. That's against the spirit of wide PAVs, so wide views won't want to appeal to this explanation to justify their change in verdicts given lever-lashing. So absent some other explanation of some wide views' change in verdicts occasioned by lever-lashing, this implausible-seeming change in verdicts remains unexplained, and so counts against these views.

I think you'd still just choose A at the start here if you're considering what will happen ahead of time and reasoning via backwards induction on behalf of the necessary people. (Assuming C is worse than A for the original necessary people.)

If you don't use backwards induction, you're going to run into a lot of suboptimal behaviour in sequential choice problems, even if you satisfy expected utility theory axioms in one-shot choices.
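To illustrate the structural point, here is a small sketch of the backwards-induction reasoning. The welfare numbers and the stage-2 rule are hypothetical placeholders, not values from this thread; they only encode a case where choosing B now foreseeably leads to C later, and C is worse than A for the necessary people.

```python
# Hypothetical two-stage choice: stage 1 is A vs B; if B is chosen, stage 2 is B vs C.
# Numbers and the stage-2 rule are placeholders for illustration only.

necessary_welfare = {"A": 100, "B": 101, "C": 90}  # C worse than A for necessary people

def stage2_choice(options):
    # Placeholder rule: once B and C are the only options, the view requires C
    # (e.g. on fairness grounds toward the by-then-existing contingent people).
    return "C" if "C" in options else options[0]

def stage1_choice():
    # Backwards induction: evaluate "pick B now" by the outcome it actually leads to.
    outcomes = {"A": "A", "B": stage2_choice(["B", "C"])}  # picking B really yields C
    # Choose, on behalf of the necessary people, the act with the best final outcome.
    return max(outcomes, key=lambda act: necessary_welfare[outcomes[act]])

print(stage1_choice())  # "A": naively B looks best (101), but choosing B ends in C (90)
```

A chooser who compared only the immediate options would pick B and end up at C; reasoning ahead of time avoids that.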

Rationally and ideally, you should just maximize the expected value of your actions, taking into account your potential influence on others and their costs, including opportunity costs. This just follows assuming expected utility theory axioms. It doesn’t matter that there are other agents; you can just capture them as part of your outcomes under consideration.

When you're assigning credit across other actors whose impacts aren't roughly independent, including for estimating their cost-effectiveness for funding, Shapley values (or something similar) can be ... (read more)
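For readers unfamiliar with the idea, here is a toy sketch of Shapley-value credit assignment. The coalition values are made-up numbers for two funders whose impacts are not independent (the project only happens if both fund it); nothing here is from the comment beyond the general concept.

```python
from itertools import permutations

# Toy example: value v(S) created by each coalition of funders (made-up numbers).
# Impacts are not independent: the project needs both A and B.
value = {
    frozenset(): 0,
    frozenset({"A"}): 0,
    frozenset({"B"}): 0,
    frozenset({"A", "B"}): 10,
}
players = ["A", "B"]

def shapley(player):
    """Average marginal contribution of `player` over all orders of joining."""
    orders = list(permutations(players))
    total = 0
    for order in orders:
        coalition = set()
        for p in order:
            if p == player:
                total += value[frozenset(coalition | {p})] - value[frozenset(coalition)]
                break
            coalition.add(p)
    return total / len(orders)

print({p: shapley(p) for p in players})  # {'A': 5.0, 'B': 5.0}
```

Each funder's Shapley value is its marginal contribution averaged over all join orders; here each gets credit for half the total value, whereas naively each could claim the full 10 as counterfactual impact.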

2
Zach Stein-Perlman
19d
(I endorse this.)

I haven't looked into this, and based on your comment, you seem more informed than me on this issue.

(FWIW, I don't give much weight to critical-set views, anyway.)

In section 3. The Drop, you assume biographical identity is determinately-all-or-determinately-nothing, but this doesn't seem very plausible to me. What could a justification for a specific such account even look like, with specific precise cutoffs for a given person? The only justification I could imagine is someone very sharply going from fully personally identifying to not at all identifying with their past with the additional tiny change. However, I would be surprised if that happened for most people or ... (read more)

There's another writeup of (some of?) these issues by the author here and some discussion/responses in the comments of that other post.

Also, if your intention wasn't really binding and you did abandon it, then you undermine your own ability to follow through on your own intentions, which can make it harder for you to act rightly and do good in the future. But this is an indirect reason.

In my opinion, a strong downvote is too harsh for a plausibly good faith comment with some potentially valuable criticism, even if (initially) vague.

  1. They elaborated on some of their concerns in the replies.
  2. You could ask them to elaborate more if they can (without deanonymizing people without their consent) on specific issues instead of strong downvoting.

Despite my specific responses, I want to make a general comment that I agree that these seem like good arguments against many person-affecting views, according to my own intuitions, which are indeed person-affecting. They also leave the space for plausible (to me) person-affecting accounts pretty small.

I think some of the remaining views, e.g. using something like Dasgupta's approach with resolute choice precommitments as necessary, can still be (to me) independently justified, too, but they also need to face further scrutiny.

I think an earlier comment you... (read more)

3
EJT
17d
Thanks! I'd like to think more at some point about Dasgupta's approach plus resolute choice. 

In section 3, you illustrate with Tomi's argument:

| | One hundred people | Ten billion different people |
|---|---|---|
| A | 40 | - |
| B | 41 | 41 |
| C | 40 | 100 |

And in 3.1, you write:

How might advocates of PAVs respond to Tomi’s argument? One possibility is to claim that betterness is option-set dependent: whether an outcome X is better than an outcome Y can depend on what other outcomes are available as options to choose. In particular, advocates of PAVs could claim:

  • B is better than A when B and A are the only options
  • B is not better than A when C is also an option.

And advocates of PAVs could defend

... (read more)
1
EJT
19d
Taken as an argument that B isn't better than A, this response doesn't seem so plausible to me. In favour of B being better than A, we can point out: B is better than A for all of the necessary people, and pretty good for all the non-necessary people. Against B being better than A, we can say something like: I'd regret picking B over C. The former rationale seems more convincing to me, especially since it seems like you could also make a more direct, regret-based case for B being better than A: I'd regret picking A over B. But taken as an argument that A is permissible, this response seems more plausible. Then I'd want to appeal to my arguments against deontic PAVs.

In 5.2.3. Intermediate wide views, you write:

If permissibility doesn’t depend on lever-lashing, then it’s also wrong to pull both levers when they aren’t lashed together.

Why wouldn't permissibility depend on lever-lashing under the intermediate wide views? The possible choices, including future choices, have to be considered together ahead of time. Lever-lashing restricts them, so it's a different choice situation. If we're person-affecting, we've already accepted that how we rank two options can depend on what others are available (or we rejected transiti... (read more)

1
EJT
19d
Yes, nice point. I argue against this kind of dependence in footnote 16 of the paper. Here's what I say there:

 In 5.2.3. Intermediate wide views, you write:

Views of this kind give more plausible verdicts in the previous cases – both the lever case and the enquiring friend case – but any exoneration is partial at best. The verdict in the friend case remains counterintuitive when we stipulate that your friend foresaw the choices that they would face. And although intentions are often relevant to questions of blameworthiness, I’m doubtful whether they are ever relevant to questions of permissibility. Certainly, it would be a surprising downside of wide views if

... (read more)
3
EJT
17d
In Parfit's case, we have a good explanation for why you're rationally required to bind yourself: doing so is best for you. Perhaps you're morally required to bind yourself in Two-Shot Non-Identity, but why? Binding yourself isn't better for Amy. And if it's better for Bobby, it seems that can only be because existing is better for Bobby than not-existing, and then there's pressure to conclude that we're required to create Bobby in Just Bobby, contrary to the claims of PAVs. And suppose that (for whatever reason) you can't bind yourself in Two-Shot Non-Identity, so that the choice to create Bobby (having previously created Amy) remains open. In that case, it seems like our wide view must again make permissibility depend on lever-lashing or past choices. If the view says that you're required to create Bobby (having previously created Amy), permissibility depends on past choices. If the view says that you're permitted to decline to create Bobby (having previously created Amy), permissibility depends on lever-lashing (since, on wide views, you wouldn't be permitted to pull both levers if they were lashed together).
2
MichaelStJules
22d
Also, if your intention wasn't really binding and you did abandon it, then you undermine your own ability to follow through on your own intentions, which can make it harder for you to act rightly and do good in the future. But this is an indirect reason.

Maybe worth writing this as a separate post (a summary post) you can link to, given its length?

From the May 13 2022 FTX Terms of Service:

8.2.6 All Digital Assets are held in your Account on the following basis:

(A) Title to your Digital Assets shall at all times remain with you and shall not transfer to FTX Trading. As the owner of Digital Assets in your Account, you shall bear all risk of loss of such Digital Assets. FTX Trading shall have no liability for fluctuations in the fiat currency value of Digital Assets held in your Account.

(B) None of the Digital Assets in your Account are the property of, or shall or may be loaned to, FTX Trading; FTX Tr

... (read more)
3
SteadyPanda
20d
I know this is a digression from the main question of intent but I'm still curious about it:  Do we know how much money was actually in the margin lending program?  How much of the fiat deposits were available for margin lending?  Prosecutors said "from June to November 2022, Alameda had taken between 8 and 12 billion, when there was at most 4 billion in the margin lending program" while the defense said "80 percent of the assets on FTX were margined assets used in futures trading. 80 percent are in this margin trading where customers are always borrowing other customers' assets."

that he was so misinformed about how much non-customer money FTX had

+ the customer money FTX had that the customers explicitly consented to be loaned out, through margin lending or staking?

He might not have been well-informed about how much this was. It's a number that changes by the minute. But maybe he would have a general idea, and enough to know that what he was asking for was more than customers would have consented to being loaned out.

It looks like it's already available in the Netherlands, Belgium, Luxembourg and Switzerland.

I guess the obvious approach would be to target jurisdictions where MAID is legal, but not just for mental illness. However, in the US, at least, it could be better to push for MAID excluding mental illness to other states where it isn't already legal first, to avoid political polarization. This is just speculation, though.

1
anonymous throwaway
24d
Good points! FWIW depending on how long these have been available, this makes me think it's less useful to pursue, since that would make it less neglected (albeit more tractable) and make me think that other countries are probably going to start adopting it soon anyway.
6
Constance Li
24d
Thanks for catching that. I also saw there are a few (billion) more farmed shrimp to add in to those numbers. We have accordingly adjusted the post to convert estimates of [hours worked/human career] to [shrimps lost/year].

@Jason left an answer here, referring to this sentencing memo:

I'd look at pp. 5-12 of the linked sentencing memo for customers, pp. 15-18 for investors/lenders for the government's statement of the offense conduct. The jury merely utters guilty / not guilty on each count, it does not provide detailed findings of fact. Judge Kaplan heard all the evidence as he presided at trial, and can rely on his own factual findings at sentencing under a more-likely-than-not standard. Of course, that is just a summary; Ellison alone testified for ~3 days.

Basically, SBF &

... (read more)

Note that the linked document was written by the prosecution, and is therefore presumably a biased summary of the evidence.[1] The defense equivalent (presumably with the opposite bias) can be found here.

  1. ^

    Jason comments on this here to argue that it probably isn't that biased.

3
SteadyPanda
24d
I want to quickly address a couple of points of disagreement I have with Jason's take. (Please don't take this to mean that I accept the rest -- there's a lot I disagree with -- I just don't have time to respond to it all right now.)

This is misleading. The complete line from Ellison is, "I understood that he was telling me to use FTX customer funds to repay our loans," then she proceeds to explain how she inferred this. At no point in the trial does she say that he explicitly instructed her to touch customer fiat deposits.

The defense doesn't seem to think so. "Caroline testified to you that what happened was, Alameda ‘would have to take the money from our line of credit to pay the lenders.’ And that's at transcript page 763. And I asked her, Well, if that's true, if the lenders were being repaid off the line of credit, wouldn't the amount of the line of credit go up? She said, Yes, it would. And I said, How much did it go up by? Oh, 5 to 10 billion. But when you look at the actual data that was pulled by Dr. Pimbley——and that's Defendant's Exhibit 617——you see that in fact the line of credit did not go up during the period when the loans were being repaid. In fact, it went down for much of the period."