I don't think it's true that other things are equal on the intuition of neutrality once you've said there are more deaths in A than in B. The lives and deaths of the contingent/future people in A wouldn't count at all on symmetric person-affecting views (narrow or wide). On some asymmetric person-affecting views, they might count, but the bad lives count fully, while the additional good lives only offset (possibly fully offset) but never outweigh the additional bad lives, so the extra lives and deaths need not count on net.
On the intuition of neutrality, there a...
Granted, but this example presents just a binary choice, with none of the added complexity of choosing between three options, so we can't infer much from it.
I can add any number of other options, as long as they respect the premises of your argument and are "unfair" to the necessary number of contingent people. What specific added complexity matters here and why?
I think you'd want to adjust your argument, replacing "present" with something like "the minimum number of contingent people" (and deciding how to match counterparts if there are different numbers of...
Still, I think your argument is in fact an argument for antinatalism, or can be turned into one, based on the features of the problem to which you've been sensitive here so far. If you rejected antinatalism, then your argument proves too much and you should discount it, or you should be more sympathetic to antinatalism (or both).
You say B prevents more deaths, because it will prevent deaths of future people from the virus. But it prevents those future deaths by also preventing those people from existing.
So, for B to be better than A, you're saying it's wor...
Thanks for providing these external benchmarks and making it easier to compare! Do you mind if I update the text to include a reference to your comments?
Feel free to!
Oh, I didn't mean for you to define the period explicitly as a fixed interval. I assume this can vary by catastrophe. Like maybe population declines over 5 years with massive crop failures. Or, an engineered pathogen causes massive population decline in a few months.
I just wasn't sure what exactly you meant. Another interpretation would be that P_f is the total post-catastrophe population, summing over all future generations, and I just wanted to check that you meant the population at a given time, not aggregating over time.
Expected value density of the benefits and cost-effectiveness of saving a life
You're modelling the cost-effectiveness of saving a life conditional on catastrophe here, right? I think it would be best to be more explicit about that, if so. Typically x-risk interventions aim at reducing the risk of catastrophe, not the benefits conditional on catastrophe. Also, it would make it easier to follow.
Denoting the pre- and post-catastrophe population by P_i and P_f, I assume
Also, to be clear, this is supposed to be ~immediately pre-catastrophe and ~imm...
Another benchmark is GiveWell-recommended charities, which save a life for around $5,000. Assuming that's 70 years of life saved (mostly children), that would be 70 years of human life/$5000 = 0.014 years of human life/$. People spend about 1/3rd of their time sleeping, so it's around 0.0093 years of waking human life/$.
Then, taking ratios of cost-effectiveness, that's about 7 years of disabling chicken pain prevented per year of waking human life saved.
Then, we could consider:
...Measures aimed at addressing thermal stress, and improving hen access to feed and water show promise in reducing significant amounts of hours spent in pain cost-effectively. Example initial estimates:
| Welfare issue | Total impact [hours of disabling pain averted/farm] | Cost efficacy [$/hen] | Cost efficacy [$cents/hour of disabling pain] |
|---|---|---|---|
| Thermal stress | 87.5k (46.25k-150k) | 0.77 | 1.11 (0.65-2.09) |
| Limited access to water | 23.75k (12.5k-35k) | 0.17 | 0.9 (0.61-1.71) |
| Limited access to feed (feeders) | 162.5k (103.75k-212.5k) | 0.22 | 0.17 (0.13-0.27) |
| Limited access to feed | | | |
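As a quick sanity check, here's a minimal Python sketch reproducing the arithmetic above. The GiveWell figures are from the comment; treating the 0.17 $cents/hour feeder estimate from the table as the chicken benchmark is my assumption (the ~7 ratio quoted earlier is consistent with a figure in that range):

```python
# Back-of-the-envelope check of the cost-effectiveness ratio above.
# GiveWell figures are from the comment; using the 0.17 $cents/hour
# feeder estimate as the chicken benchmark is an assumption.

HOURS_PER_YEAR = 24 * 365

# GiveWell benchmark: ~$5,000 per life saved, ~70 years per life.
human_years_per_dollar = 70 / 5000                        # ~0.014
waking_years_per_dollar = human_years_per_dollar * 2 / 3  # ~0.0093

# Chicken benchmark: ~0.17 $cents per hour of disabling pain averted.
pain_hours_per_dollar = 100 / 0.17                        # ~588 hours/$
pain_years_per_dollar = pain_hours_per_dollar / HOURS_PER_YEAR  # ~0.067

ratio = pain_years_per_dollar / waking_years_per_dollar
print(f"{ratio:.1f} years of disabling chicken pain "
      f"per year of waking human life")                   # ~7.2
```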
We should separate whether the view is well-motivated from whether it's compatible with "ethics being about affecting persons". It's based only on comparisons between counterparts, never between existence and nonexistence. That seems compatible with "ethics being about affecting persons".
We should also separate plausibility from whether it would follow on stricter interpretations of "ethics being about affecting persons". An even stricter interpretation would also tell us to give less weight to or ignore nonidentity differences using essentially the same a...
Then, I think there are ways to interpret Dasgupta's view as compatible with "ethics being about affecting persons", step by step:
These other views also seem compatible...
Do you intend for the population to recover in B, or extinction with no future people? In the post, you write that the second virus "will kill everybody on earth". I'd assume that means extinction.
If B (killing 8 billion necessary people) does mean extinction and you think B is better than A, then you prefer extinction to extra future deaths. And your argument seems general, e.g. we should just go extinct now to prevent the deaths of future people. If they're never born, they can't die. You'd be assigning negative value to additional deaths, but no positiv...
If additional human lives have no value in themselves, that implies that the government would have more reason to take precautionary measures against a virus that would kill most of us than one that would kill all of us, even if the probabilities were equal.
Maybe I'm misunderstanding, but if
I don't think 6 follows. Preventing the early deaths of future people does not imply creating new lives or making happy people. The two statements in each version of the intuition of neutrality separated by the "but" here are not actually exhaustive of how we should treat future people.
Your argument would only establish that we shouldn't be indifferent to (or discount or substantially discount) future lives, not that we have reason to ensure future people are born in the first place or to create people. Multiple views that don't find extinction much worse than almost everyone dying + population recovery could still recommend avoiding the extra deaths of future people. Especially "wide" person-affecting views.[1]
On "wide" person-affecting views, if you have one extra person Alice in outcome A, but a different extra person Bob in outcome B...
Dasgupta's view makes ethics about what seems unambiguously best first, and then about affecting persons second. It's still person-affecting, but less so than necessitarianism and presentism.
It could be wrong about what's unambiguously best, though, e.g. maybe we should reject full aggregation and prioritize larger individual differences in welfare between outcomes, so that A+' (and maybe A+) looks better than Z.
Do you think we should be indifferent in the nonidentity problem if we're person-affecting? I.e. between creating a person with a great life and a ...
I largely agree with this, but
You could also do brain imaging to check for pain responses.
You might not even need to know what normal pain responses in the species look like, because you could just check normally painful stimuli vs control stimuli.
However, knowing what normal pain responses in the species look like would help. Also, across mammals, including humans and raccoons, the substructures responsible for pain (especially the anterior cingulate cortex) seem roughly the same, so I think we'd have a good idea of where to check.
Maybe one risk is that the brain would just adapt and ...
The arguments for unfairness of X relative to Y I gave in my previous comment (with the modified welfare levels, X=(3, 6) vs Y=(5,5)) aren't sensitive to the availability of other options: Y is more equal (ignoring other people), Y is better according to some impartial standards, and better if we give greater priority to the worse off or larger gains/losses.
All of these apply also substituting A+ for X and Z for Y, telling us that Z is more fair than A+, regardless of the availability of other options, like A, except for priority for larger gains/losses (e...
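For concreteness, here's a minimal sketch of the comparisons above for X = (3, 6) vs Y = (5, 5); the concave square-root weighting is just one illustrative way of giving greater priority to the worse off, not something from the comment:

```python
# Comparing X = (3, 6) and Y = (5, 5) under a few standards.
# The square-root transform is one example of priority to the worse off
# (my choice for illustration, not from the comment).
import math

X, Y = (3, 6), (5, 5)

def total(welfares):
    return sum(welfares)

def prioritarian(welfares):
    return sum(math.sqrt(w) for w in welfares)

print(total(X), total(Y))          # 9 vs 10: Y better on plain totals
print(min(X), min(Y))              # 3 vs 5: Y better for the worst off
print(round(prioritarian(X), 2),
      round(prioritarian(Y), 2))   # 4.18 vs 4.47: Y better with priority
```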
X isn't so much bad because it's unfair, but because they don't want to die. After all, fairly killing both people would be even worse.
Everyone dies, though, and their interests in not dying earlier trade off against others, as well as other interests. And we can treat those interests more or less fairly.
There are also multiple ways of understanding "fairness", not all of which would say killing both is more fair than killing one:
That's a person-affecting intuition.
I can, now that I exist, assign myself welfare level 0 in the counterfactuals in which I was never born. I can also assign welfare level 0 to potential people who don't come to exist.
People talk about being grateful to have been born. One way to make sense of this is that they compare to a counterfactual in which they were never born. Or maybe it's just adding up the good and bad in their life and judging there's more good than bad. But then an "empty life", with no goods or bads, would be net 0, and you could equate tha...
Maybe we're using these words differently?
I think it’s not true in general that for X to be more fair wrt utility than Y, it must be the case that we can in practice start from X and redistribute utility to obtain Y.
Suppose in X, you kill someone and take their stuff, and in Y, you don't. Or in X, they would die, but not by your killing, and in Y, you save them, at some personal cost.
Whole lifetime aggregate utilities, (them, you):
X would (normally) be unfair to the other person, even if you can't bring them back to life to get bac...
They can agree, but they need not. Again, if everyone were purely selfish, it seems like they would disagree. The extra people would prefer to exist, given their positive welfare levels. The original people would prefer the extra not to exist, if it's paired with a loss to their own welfare. Or, if we took the perspectives of what's best for each person's personal/selfish welfare on their behalf, we'd have those two groups of perspectives.
And we can probably rig up a version that's other-regarding for the people, say the extra people are total utilitarians, and the original people have person-affecting views.
Does it matter to you what the starting welfare levels of the 1 million people are? Would your intuitions about which outcome is best be different?
There are a few different perspectives you could take on the welfare levels in the outcomes. I intended them to be aggregate whole life welfare, including the past, present and future. Not just future welfare, and not welfare per future moment, day or year or whatever. But this difference often doesn't matter.
Z already seems more fair than A+ before you decide which comes about; you're deciding between them ahea...
Between A and Z, the people in A are much better off in A, and the extra people in Z are much better off in Z (they get to exist, with positive lives). It seems like they'd disagree about switching, if everyone only considers the impact on their own welfare.
(Their welfare levels could also be the degree of satisfaction of their impartial or partially other-regarding preferences, but say they have different impartial preferences.)
Arguably we can only say a world state X is "better" than a world state Y iff both
- switching from X to Y is bad and
- switching from Y to X is good.
FWIW, people with person-affecting views would ask "better for whom?". Each set of people could have their own betterness order. Person-affecting views basically try to navigate these different possible betterness orders.
Thanks for the feedback! I've edited the post with some clarification.
I think standard decision theory (e.g. expected utility theory) is actually often framed as deciding between (or ranking) outcomes, or prospects more generally, not between actions. But actions have consequences, so we just need actions with the above outcomes as consequences. Maybe it's pressing buttons, pulling levers or deciding government policy. Either way, this doesn't seem very important, and I doubt most people will be confused about this point.
On the issue of switching between w...
You can conservatively multiply through by the probability that the time of perils is short enough and that the risk drops enough orders of magnitude.
In Parfit's case, we have a good explanation for why you're rationally required to bind yourself: doing so is best for you.
The more general explanation is that it's best according to your preferences, which can also reflect or just be your moral views. It's not necessarily a matter of personal welfare, narrowly construed. We have similar thought experiments for total utilitarianism. As long as you
Ah, I should have read more closely. I misunderstood and was unnecessarily harsh. I'm sorry.
I think your response to Risberg is right.
I would still say that permissibility could depend on lever-lashing (in some sense?), because it affects what options are available, but in a different way. Here is the view I'd defend:
...Ahead of time, any remaining option or sequence of choices that ends up like "Just Amy" will be impermissible if there's an available option or sequence of choices that ends up like "Just Bobby" (assuming no uncertainty). Available opti
A steelman could be to just set it up like a hypothetical sequential choice problem consistent with Dasgupta's approach:
or
In either case, "picking B" (including "picking B or C") in 1 means actually picking C, if you know you'd pick C in 2 and you use backwards induction.
The fact that A is at least as good as (or not worse than and incomparable to) B could follow because B actually just becomes C, whic...
EDIT: Actually my best reply is that just Amy is impermissible whenever just Bobby is available, ahead of time considering all your current and future options (and using backwards induction). The same reason applies for all of the cases, whether buttons, levers, or lashed levers.
EDIT2: I think I misunderstood and was unfairly harsh below.
I do still think the rest of this comment below is correct in spirit as a general response, i.e. a view can make different things impermissible for different reasons. I also think you should have followed up to your own ...
I think you'd still just choose A at the start here if you're considering what will happen ahead of time and reasoning via backwards induction on behalf of the necessary people. (Assuming C is worse than A for the original necessary people.)
If you don't use backwards induction, you're going to run into a lot of suboptimal behaviour in sequential choice problems, even if you satisfy expected utility theory axioms in one-shot choices.
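To illustrate, here's a minimal backward-induction sketch for a two-stage version of the choice above. The numeric values are hypothetical placeholders (chosen so that C beats B at step 2 but A beats C overall, matching the assumption that C is worse than A for the original necessary people); a single value function also flattens the person-affecting subtleties, so this only shows the mechanics:

```python
# Minimal backward induction over a two-stage decision tree.
# A node is either an outcome label (a leaf) or a list of child nodes
# (a choice point). Values are hypothetical placeholders.

def backward_induct(node, value):
    """Return the (outcome, value) you'd end up with, choosing optimally later."""
    if isinstance(node, str):          # leaf: a final outcome
        return node, value[node]
    # choice point: evaluate each option by what it eventually leads to
    options = [backward_induct(child, value) for child in node]
    return max(options, key=lambda pair: pair[1])

# Step 1: pick A outright, or defer to step 2 (a choice between B and C).
tree = ["A", ["B", "C"]]
value = {"A": 10, "B": 8, "C": 9}      # assumption: C beats B, A beats C

print(backward_induct(["B", "C"], value))  # ('C', 9): at step 2 you'd pick C
print(backward_induct(tree, value))        # ('A', 10): so "picking B" really
                                           # means C, and A wins at step 1
```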
Rationally and ideally, you should just maximize the expected value of your actions, taking into account your potential influence on others and their costs, including opportunity costs. This just follows assuming expected utility theory axioms. It doesn’t matter that there are other agents; you can just capture them as part of your outcomes under consideration.
When you're assigning credit across other actors whose impacts aren't roughly independent, including for estimating their cost-effectiveness for funding, Shapley values (or something similar) can be ...
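A minimal sketch of the Shapley-value idea mentioned above, with hypothetical actors and numbers; each actor is credited with their marginal contribution, averaged over all orders in which the group could have assembled:

```python
# Minimal Shapley-value credit assignment across actors whose impacts
# aren't independent. The two-actor example is hypothetical.
from itertools import permutations

def shapley(actors, value):
    """`value` maps a frozenset of actors to that coalition's total impact."""
    credit = {a: 0.0 for a in actors}
    orderings = list(permutations(actors))
    for order in orderings:
        coalition = frozenset()
        for actor in order:
            with_actor = coalition | {actor}
            credit[actor] += value[with_actor] - value[coalition]
            coalition = with_actor
    return {a: c / len(orderings) for a, c in credit.items()}

# Hypothetical: a funder and a charity, neither achieving much alone.
v = {
    frozenset(): 0,
    frozenset({"funder"}): 1,
    frozenset({"charity"}): 1,
    frozenset({"funder", "charity"}): 10,
}
print(shapley(["funder", "charity"], v))  # {'funder': 5.0, 'charity': 5.0}
```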
I haven't looked into this, and based on your comment, you seem more informed than me on this issue.
(FWIW, I don't give much weight to critical-set views, anyway.)
In section 3. The Drop, you assume biographical identity is determinately-all-or-determinately-nothing, but this doesn't seem very plausible to me. What could a justification for a specific such account even look like, with specific precise cutoffs for a given person? The only justification I could imagine is someone very sharply going from fully identifying with their past to not identifying with it at all, given one additional tiny change. However, I would be surprised if that happened for most people or ...
Also, if your intention wasn't really binding and you did abandon it, then you undermine your own ability to follow through on your own intentions, which can make it harder for you to act rightly and do good in the future. But this is an indirect reason.
In my opinion, a strong downvote is too harsh for a plausibly good faith comment with some potentially valuable criticism, even if (initially) vague.
Despite my specific responses, I want to make a general comment that I agree that these seem like good arguments against many person-affecting views, according to my own intuitions, which are indeed person-affecting. They also leave the space for plausible (to me) person-affecting accounts pretty small.
I think some of the remaining views, e.g. using something like Dasgupta's approach with resolute choice precommitments as necessary, can still be (to me) independently justified, too, but they also need to face further scrutiny.
I think an earlier comment you...
In section 3, you illustrate with Tomi's argument:
| | One hundred people | Ten billion different people |
|---|---|---|
| A | 40 | - |
| B | 41 | 41 |
| C | 40 | 100 |
And in 3.1, you write:
...How might advocates of PAVs respond to Tomi’s argument? One possibility is to claim that betterness is option-set dependent: whether an outcome X is better than an outcome Y can depend on what other outcomes are available as options to choose. In particular, advocates of PAVs could claim:
- B is better than A when B and A are the only options
- B is not better than A when C is also an option.
And advocates of PAVs could defend
In 5.2.3. Intermediate wide views, you write:
If permissibility doesn’t depend on lever-lashing, then it’s also wrong to pull both levers when they aren’t lashed together.
Why wouldn't permissibility depend on lever-lashing under the intermediate wide views? The possible choices, including future choices, have to be considered together ahead of time. Lever-lashing restricts them, so it's a different choice situation. If we're person-affecting, we've already accepted that how we rank two options can depend on what others are available (or we rejected transiti...
In 5.2.3. Intermediate wide views, you write:
...Views of this kind give more plausible verdicts in the previous cases – both the lever case and the enquiring friend case – but any exoneration is partial at best. The verdict in the friend case remains counterintuitive when we stipulate that your friend foresaw the choices that they would face. And although intentions are often relevant to questions of blameworthiness, I’m doubtful whether they are ever relevant to questions of permissibility. Certainly, it would be a surprising downside of wide views if
From the May 13 2022 FTX Terms of Service:
...8.2.6 All Digital Assets are held in your Account on the following basis:
(A) Title to your Digital Assets shall at all times remain with you and shall not transfer to FTX Trading. As the owner of Digital Assets in your Account, you shall bear all risk of loss of such Digital Assets. FTX Trading shall have no liability for fluctuations in the fiat currency value of Digital Assets held in your Account.
(B) None of the Digital Assets in your Account are the property of, or shall or may be loaned to, FTX Trading; FTX Tr
that he was so misinformed about how much non-customer money FTX had
+ the customer money FTX had that the customers explicitly consented to be loaned out, through margin lending or staking?
He might not have been well-informed about how much this was. It's a number that changes by the minute. But maybe he would have a general idea, and enough to know that what he was asking for was more than customers would have consented to being loaned out.
It looks like it's already available in the Netherlands, Belgium, Luxembourg and Switzerland.
I guess the obvious approach would be to target jurisdictions where MAID is legal, but not just for mental illness. However, in the US, at least, it could be better to push for MAID excluding mental illness to other states where it isn't already legal first, to avoid political polarization. This is just speculation, though.
@Jason left an answer here, referring to this sentencing memo:
...I'd look at pp. 5-12 of the linked sentencing memo for customers, pp. 15-18 for investors/lenders for the government's statement of the offense conduct. The jury merely utters guilty / not guilty on each count, it does not provide detailed findings of fact. Judge Kaplan heard all the evidence as he presided at trial, and can rely on his own factual findings at sentencing under a more-likely-than-not standard. Of course, that is just a summary; Ellison alone testified for ~3 days.
Basically, SBF &
In this 2015 post, he said he was on a panel at EA Global and mentions PlayPumps, a favourite EA example. Here's the YouTube video of the EA Global panel discussion.