All of Jeffhe's Comments + Replies

Because you told me that it's the same amount of pain as five minor toothaches and you also told me that each minor toothache is 1 base unit of pain.

Where in the supposition or the line of reasoning that I laid out earlier (i.e. P1) through to P5)) did I say that 1 major toothache involves the same amount of pain as 5 minor toothaches?

I attributed that line of reasoning to you because I thought that was how you would get to C) from the supposition that 5 minor toothaches had by one person is experientially just as bad as 1 major toothache had by one person.

B... (read more)

the reason why 5 minor toothaches spread among 5 people is equivalent to 5 minor toothaches had by one person is DIFFERENT from the reason why 5 minor toothaches had by one person is equivalent to 1 major toothache had by one person.

No, both equivalencies are justified by the fact that they involve the same amount of base units of pain.

So you're saying that just as 5 MiTs/5 people is equivalent to 5 MiTs/1 person because both sides involve the same amount of base units of pain, 5 MiTs/1 person is equivalent to 1 MaT/1 person because both sides in... (read more)

0
kbog
6y
Because you told me that it's the same amount of pain as five minor toothaches and you also told me that each minor toothache is 1 base unit of pain.

If you mean that it feels worse to any given person involved, yes it ignores the difference, but that's clearly the point, so I don't know what you're doing here other than merely restating it and saying "I don't agree."

On the other hand, you do not care how many people are in pain, and you do not care how much pain someone experiences so long as there is someone else who is in more pain, so if anyone's got to figure out whether or not they "care" enough it's you.

You've pretty much been repeating yourself for the past several weeks, so, sure.

I see the problem. I will fix this. Thanks.

I was trying to keep the discussions of 'which kind of pain is morally relevant' and of your proposed system of giving people a chance to be helped in proportion to their suffering separate. It might be that they are so intertwined as for this to be unproductive, but I think I would like you to respond to my comment about the latter before we discuss it further.

I think I see the original argument you were going for. The argument against my approach-minus-the-who-suffers-matters-bit is that it renders all resulting states of affairs equally bad, morally ... (read more)

Hey Alex! Sorry for the super late response! I have a self-control problem and my life got derailed a bit in the past week >< Anyways, I'm back :P

How much would you be willing to trade off helping people versus the help being distributed fairly? e.g. if you could either have a 95% chance of helping people in proportion to their suffering, but a 5% chance of helping no one, versus a 100% chance of only helping the person suffering the most.

This is an interesting question, adding another layer of chance to the original scenario. As you know, if (t... (read more)

I certainly did not mean to cause confusion, and I apologize for wasting any of your time that you spent trying to make sense of things.

By "you switched", do you mean that in my response to Objection 1, I gave the impression that only experience matters to me, such that when I mentioned in my response to Objection 2 that who suffers matters to me too, it seems like I've switched?

And thanks, I have fixed the broken quote. Btw, do you know how to italicize words?

0
Alex_Barry
6y
Yes, "switched" was a bit strong, I meant that by default people will assume a standard usage, so if you only reveal later that actually you are using a non-standard definition people will be surprised. I guess despite your response to Objection 2 I was unsure in this case whether you were arguing in terms of (what are at least to me) conventional definitions or not, and I had assumed you were. To italicize works puts *s on either side, like *this* (when you are replying to a comment there is a 'show help' button that explains some of these things.)

Thanks for the exposition. I see the argument now.

You're saying that, if we determined "total pain" by my preferred approach, then all possible actions would certainly result in states of affairs in which the total pains are uniformly high, with the only difference between the states of affairs being the identity of those who suffer it.

I've since made clear to you that who suffers matters to me too, so if the above is right, then according to my moral theory, what we ought to do is assign an equal chance to any possible action we could take, sin... (read more)

0
Alex_Barry
6y
I was trying to keep the discussions of 'which kind of pain is morally relevant' and of your proposed system of giving people a chance to be helped in proportion to their suffering separate. It might be that they are so intertwined as for this to be unproductive, but I think I would like you to respond to my comment about the latter before we discuss it further.

Given that you were initially arguing (with kbog etc.) for this definition of total pain, independent of any other identity considerations, this seems very relevant to that discussion.

But this seems extremely far removed from any day to day intuitions we would have about morality, no? If you flipped a coin to decide whether you should murder each person you met (a very implementable approximation of this result), I doubt many would find this justified on the basis that someone in the future is going to be suffering much more than them. The issue is this also applied to the case of deciding whether to set the island on fire at all.

So you're suggesting that most people aggregate different people's experiences as follows:

FYI, I have since reworded this as "So you're suggesting that most people determine which of two cases/states-of-affairs is morally worse via experience this way:"

I think it is a more precise formulation. In any case, we're on the same page.

Basically I think sentences like:

"I don't think what we ought to do is to OUTRIGHT prevent the morally worse case"

are sufficiently far from standard usage (at least in EA circles) you should flag up

... (read more)
0
Alex_Barry
6y
Some of your quotes are broken in your comment, you need a > for each paragraph (and two >s for double quotes etc.)

I know for most of your post you were arguing with standard definitions, but that made it all the more jarring when you switched!

I actually think most (maybe all?) moral theories can be baked into goodness/badness of states of affairs. If you want to incorporate a side-constraint you can just define any state of affairs in which you violate that constraint as being worse than all other states of affairs. I do agree this can be less natural, but the formulations are not incompatible.

In any case, as I have given you plenty of other comment threads to think about, I am happy to leave this one here - my point was just a call for clarity.

Yes. I bring up that most people would accept this different framing of P3 (even when the people involved are different) as a fundamental piece of their morality. To most of the people here this is the natural, obvious and intuitively correct way of aggregating experience. (Hence why I started my very first comment by saying you are unlikely to get many people to change their minds!)

I think thinking in terms of 'total pain' is not normally how this is approached, instead one thinks about converting each person's experience into 'utility' (or 'moral badn

... (read more)
2
Alex_Barry
6y
On 'people should have a chance to be helped in proportion to how much we can help them' (versus just always helping whoever we can help the most).

(Again, my preferred usage of 'morally worse/better' is basically defined so as to mean one 'should' always pick the 'morally best' action. You could do that in this case, by saying cases are morally worse than one another if people do not have chances of being helped in proportion to how badly off they are. This however leads directly into my next point...)

How much would you be willing to trade off helping people versus the help being distributed fairly? e.g. if you could either have a 95% chance of helping people in proportion to their suffering, but a 5% chance of helping no one, versus a 100% chance of only helping the person suffering the most. In your reply to JanBrauner you are very willing to basically completely sacrifice this principle in response to practical considerations, so it seems possibly you are not willing to trade off any amount of 'actually helping people' in favour of it, but then it seems strange that you argue for it so forcefully.

As a separate point, this form of reasoning seems rather incompatible with your claims about 'total pain' being morally important, and also determined solely by whoever is experiencing the most pain. Thus, if you follow your approach and give some chance of helping people not experiencing the most pain, in the case when you do help them, the 'total pain' does not change at all! For example:

* Suppose Alice is experiencing 10 units of suffering (by some common metric)
* 10n people (call them group B) are experiencing 1 unit of suffering each
* We can help exactly one person, and reduce their suffering to 0

In this case your principle says we should give Alice a 10/(10+10n) = 1/(n+1) chance of being helped, and each person in group B a 1/(10+10n) chance of being helped. But in the case we help someone from group B the level of 'total pain' remains at 10 as
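To make the arithmetic in this example concrete, here is a minimal Python sketch (an editorial illustration only; the function name, the choice n = 3, and the "help one person from group B" step are assumptions added here, not anything from the thread) of allocating chances in proportion to suffering, and of why helping a group-B member leaves the maximal-pain reading of 'total pain' unchanged:

```python
def proportional_chances(sufferings):
    """Chance of being helped for each person, proportional to their suffering."""
    total = sum(sufferings)
    return [s / total for s in sufferings]

n = 3                                # illustrative value; any positive integer works
sufferings = [10] + [1] * (10 * n)   # Alice (10 units) plus 10n people at 1 unit each

chances = proportional_chances(sufferings)
print(chances[0])       # Alice's chance: 10/(10+10n) = 1/(n+1)
print(chances[1])       # each group-B person's chance: 1/(10+10n)

# If someone from group B is helped (their suffering set to 0), the
# maximal-pain reading of 'total pain' is unchanged: it is still Alice's 10.
helped_B = list(sufferings)
helped_B[1] = 0
print(max(helped_B))    # -> 10
```

With n = 3 this prints 0.25, 0.025, and 10, matching 1/(n+1), 1/(10+10n), and the unchanged maximum.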
0
Alex_Barry
6y
Well most EAs, probably not most people :P But yes, I think most EAs apply this 'merchandise' approach weighted by conscious experience.

In regards to your discussion of moral theories, side constraints: I know there are a range of moral theories that can have rules etc. My objection was that if you were not in fact arguing that total pain (or whatever) is the sole determiner of what action is right then you should make this clear from the start (and ideally baked into what you mean by 'morally worse'). Basically I think sentences like "I don't think what we ought to do is to OUTRIGHT prevent the morally worse case" are sufficiently far from standard usage (at least in EA circles) that you should flag up that you are using 'morally worse' in a nonstandard way (and possibly use a different term).

I have the intuition that if you say "X is the morally relevant factor" then which actions you say are right will depend solely on how they affect X. Hence if you say 'what is morally relevant is the maximal pain being experienced by someone' then I expect all I need to tell you about actions for you to decide between them is how they affect the maximal pain being experienced by someone. Obviously language is flexible but I think if you deviate from this without clear disclaimers it is liable to cause confusion. (Again, at least in EA circles.)

I think your argument that people should have a chance to be helped in proportion to how much we could help them is completely separate from your point about Comparability, and we should keep the discussions separate to avoid the chance of confusion. I'll make a separate comment to discuss it.

Hey Alex,

Thanks again for taking the time to read my conversation with kbog and replying. I have a few thoughts in response:

(Indeed I think many people here would explicitly embrace the assumption that is your P3 in your second reply to kbog, typically framed as 'two people experiencing the same pain is twice as bad as one person experiencing that pain' (there is some change from discussing 'total pain' to 'badness' here, but I think it still fits with our usage).)

When you say that many people here would embrace the assumption that "two people expe... (read more)

1
Alex_Barry
6y
The argument is that if:

* The amount of 'total pain' is determined by the maximum amount of suffering experienced by any given person (which I think is what you are arguing)
* There could be an alien civilization containing a being experiencing more suffering than any human is capable of experiencing (you could also just use a human being tortured if you liked, for a less extreme but clearly applicable case)
* In this case, then the amount of 'total pain' is always at least that very large number, such that none of your actions can change it at all.
* Thus (and you would disagree with this implication due to your adoption of the Pareto principle), since the level of 'total pain' is the morally important thing, all of your possible actions are morally equivalent.

As I mention I think you escape this basic formulation of the problem by your adoption of the Pareto principle, but a more complicated version causes the same issue: this is essentially just applying the non-identity problem to the example above. (Weirdly enough, I think the best explanation I've seen of the non-identity problem is the second half of the 'the future' section of Derek Parfit's Wikipedia page.) The argument goes something like:

* D1: If we adopt that 'total pain' is the maximal pain experienced by any person for whom we can affect how much pain they experience (an attempt to incorporate the Pareto principle into the definition for simplicity's sake).
* A1: At some point in the far future there is almost certainly going to be someone experiencing extreme pain. (Even if humanity is wiped out, so most of the future has no one in it, that wiping out is likely to involve extreme pain for some.)
* A2: Due to the chaotic nature of the world, and the strong dependence on birth timings of personal identity (if the circumstances of one's conception change even very slightly then your identity will almost certainly be completely different), any actions in the world now will within a few generation
0
Alex_Barry
6y
Yes. I bring up that most people would accept this different framing of P3 (even when the people involved are different) as a fundamental piece of their morality. To most of the people here this is the natural, obvious and intuitively correct way of aggregating experience. (Hence why I started my very first comment by saying you are unlikely to get many people to change their minds!)

I think thinking in terms of 'total pain' is not normally how this is approached, instead one thinks about converting each person's experience into 'utility' (or 'moral badness' etc.) on a personal level, but then aggregates all the different personal utilities into a global figure. I don't know if you find this formulation more intuitively acceptable (it in some sense feels like it respects your reason for caring about pain more). I bring this up since you are approaching this from a different angle than the usual, which makes people's standard lines of reasoning seem more complex. I'll discuss this in a separate comment since I think it is one of the strongest arguments against your position.

I don't know much about the veil of ignorance, so I am happy to give you that it does not support total utilitarianism.

Then I am really not sure at all what you are meaning by 'morally worse' (or 'right'!). In light of this, I am now completely unsure of what you have been arguing the entire time.

Hey kbog, if you don't mind, let's ignore my example with the 5000 pains because I think my argument can more clearly be made in terms of my toothache example since I have already laid a foundation for it. Let me restate that foundation and then state my argument in terms of my toothache example. Thanks for bearing with me.

The foundation:

Suppose 5 minor toothaches had by one person is experientially just as bad as 1 major toothache had by one person.

Given the supposition, you would claim: 5 minor toothaches spread among 5 people involves the same amount o... (read more)

0
kbog
6y
No, both equivalencies are justified by the fact that they involve the same amount of base units of pain. Sure it does. The presence of pain is equivalent to feeling bad. Feeling bad is precisely what is at stake here, and all that I care about. Yes, that's what I meant when I said "that's a question of how we evaluate and represent an individual's well-being, not a question of interpersonal comparison and aggregation."

Hey Alex,

Thanks for your reply. I can understand why you'd be extremely confused because I think I was in error to deny the intelligibility of the utilitarian sense of "more pain".

I have recently replied to kbog acknowledging this mistake, outlining how I understand the utilitarian sense of "more pain", and then presenting an argument for why my sense of "more pain" is the one that really matters.

I'd be interested to know what you think.

1
Alex_Barry
6y
Thanks for getting back to me, I've read your reply to kbog, but I don't find your argument especially different to those you laid out previously (which, given that I always thought you were trying to make the moral case, should maybe not be surprising). Again I see why there is a distinction one could care about, but I don't find it personally compelling. (Indeed I think many people here would explicitly embrace the assumption that is your P3 in your second reply to kbog, typically framed as 'two people experiencing the same pain is twice as bad as one person experiencing that pain' (there is some change from discussing 'total pain' to 'badness' here, but I think it still fits with our usage).)

A couple of brief points in favour of the classical approach:

* It in some sense 'embeds naturally' in the universe, in that if our actions can only affect some small section of the universe, we need only consider that section when making decisions. However if one only cares about those experiencing the most suffering, no matter where they are in the universe, then it might turn out that an alien experiencing extreme suffering should make us indifferent to all suffering on Earth. (Average utilitarianism faces a similar problem.)
* As discussed in other comments, it also has other pleasing properties, such as the veil of ignorance.

One additional thing to note is that dropping the comparability of 'non-purely experientially determined' and 'purely experientially determined' experiences (henceforth 'Comparability') does not seem to naturally lead to a specific way of evaluating different situations or weighing them against each other. For example, you suggest in your post that without Comparability the morally correct course of action would be to give each person a chance of being helped in proportion to their suffering, but this does not necessarily follow. One could imagine others who also disagreed with Comparability, but thought t

Hi kbog,

Sorry for taking a while to get back to you – life got in the way... Fortunately, the additional time made me realize that I was the one who was confused, as I now see very clearly the utilitarian sense of "involves more pain than" that you have been in favor of.

Where this leaves us is with two senses of “involves more pain than” and with the question of which of the two senses is the one that really matters. In this reply, I outline the two senses and then argue for why the sense that I have been in favor of is the one that really matters.

The two ... (read more)

0
kbog
6y
The 5000 pains are only worse if 5000 minor pains experienced by one person is equivalent to one excruciating pain. If so, then 5000 minor pains for 5000 people being equivalent to one excruciating pain doesn't go against the badness of how things feel; at least it doesn't seem counterintuitive to me. Maybe you think that no amount of minor pains can ever be equally important as one excruciating pain. But that's a question of how we evaluate and represent an individual's well-being, not a question of interpersonal comparison and aggregation.

Only my response to Objection 1 is more or less directed to the utilitarian. My response to Objection 2 is meant to defend against other justifications for saving the greater number, such as leximin or cancelling strategies. In any case, I think most EAs (even the non-utilitarians) will appeal to utilitarian reasoning to justify saving the greater number, so addressing utilitarian reasoning is important.

0
kbog
6y
It's not about responses to objections, it's about the thesis itself.

Hey Alex, thanks for your comment!

I didn't know what the source of my disagreement with EAs would be, so I hope you can understand why I couldn't structure my post in a way that would have already taken into account all the subsequent discussions. But thanks for your suggestion. I may write another post with a much simpler structure if my discussion with kbog reaches a point where either I realize I'm wrong or he realizes he's wrong. If I'm wrong, I hope to realize it asap.

Also, I agree with kbog. I think it's much likelier that one of us is just confused... (read more)

0
Alex_Barry
6y
Thanks for your reply - I'm extremely confused if you think there is no "intelligible sense in which 5 minor headaches spread among 5 people can involve more pain than 1 major headache had by one person", since (as has been discussed in these comments) if you view/define total pain as being measured by intensity-weighted number of experiences this gives a clear metric that matches consequentialist usage. I had assumed you were arguing at the 'which is morally important' level, which I think might well come down to intuitions. I hope you manage to work it out with kbog!

Hi bejaq,

Thanks for your thoughtful comment. I think your first paragraph captures well why I think who suffers matters. The connection between suffering and who suffers it is too strong for the former to matter and for the latter not to. Necessarily, pain is pain for someone, and ONLY for that someone. So it seems odd for pain to matter, yet for it not to matter who suffers it.

I would also certainly agree that there are pragmatic considerations that push us towards helping the larger group outright, rather than giving the smaller group a chance.

Hey kbog, I didn't anticipate you would respond so quickly... I was editing my reply while you replied... Sorry about that. Anyways, I'm going to spend the next few days slowly re-reading and sitting on your past few replies in an all-out effort to understand your point of view. I hope you can do the same with just my latest reply (which I've edited). I think it needs to be read to the end for the full argument to come through.

Also, just to be clear, my goal here isn't to change your mind. My goal is just to get closer to the truth as cheesy as that might sound. If I'm the one in error, I'd be happy to admit it as soon as I realize it. Hopefully a few days of dwelling will help. Cheers.

0
kbog
6y
What? It's the dimension of weight, where the weight of 5 oranges can be more than the weight of one big orange. Weight is still weight when you are weighing multiple things together. If you don't believe me, put 5 oranges on a scale and tell me what you see. The prior part of your comment doesn't have anything to change this.

You'll need to read to the very end of this reply before my argument seems complete.

In both cases I evaluate the quality of the experience multiplied by the number of subjects. It's the same aspect for both cases. You're just confused by the fact that, in one of the cases but not the other, the resulting quantity happens to be the same as the number provided by your "purely experiential sense".

Case 1: 5 minor headaches spread among 5 people

Case 2: 1 major headache had by one person

Yes, I understand that in each case, you are multiplying a cer... (read more)

0
kbog
6y
What I am working with "at bottom" is irrelevant here, because I'm not making a comparison with it. There are lots of things we compare that involve different properties "at bottom". And obviously the comparison we care about is not merely a comparison of how bad it feels for any given person.

No it doesn't. That is, if I were to apply the same logic to oranges that you do to people, I would say that there is Mono-Orange-Weight, defined as the most weight that is ever present in one of a group of oranges, and Multi-Orange-Weight, defined as the total weight that is present in a group of oranges, and insist that you cannot compare one to the other, so one orange weighs the same as five oranges. Of course that would be nonsense, as it's true that you can compare orange weights. But you can see how your argument fails. Because this is all you are doing; you are inventing a distinction between "purely experiential" and "non-purely experiential" badness and insisting that you cannot compare one against the other by obfuscating the difference between applying either metric to a single entity.

But that isn't how I determined that one person with a minor headache has 2 units of pain total.

You are right, I am comparing one person's "non purely experiential" headache to five people's "non purely experiential" headaches.

It's not reasonable to expect me to change my mind when you're repeating the exact same argument that you gave before while ignoring the second argument I gave in my comment.

Just because two things are different doesn't mean they are incommensurate.

But I didn't say that. As long as two different things share certain aspects/dimensions (e.g. the aspect of weight, the aspect of nutrition, etc...), then of course they can be compared on those dimensions (e.g. the weight of an orange is more than the weight of an apple, i.e., an orange weighs more than an apple).

So I don't deny that two different things that share many aspects/dimensions may be compared in many ways. But that's not the problem.

The problem is that when you say t... (read more)

0
kbog
6y
No, I am effectively saying that the weight of five oranges is more than the weight of one orange.

That is wrong. In both cases I evaluate the quality of the experience multiplied by the number of subjects. It's the same aspect for both cases. You're just confused by the fact that, in one of the cases but not the other, the resulting quantity happens to be the same as the number provided by your "purely experiential sense". If I said "this apple weighs 100 grams, and this orange weighs 200 grams," you wouldn't tell me that I'm making a false comparison merely because both the apple and the orange happen to have 100 calories. There is nothing philosophically noteworthy here, you have just stumbled upon the fact that any number multiplied by one is still that number.

As if that isn't decisive enough, imagine for instance that it was a comparison between two sufferers and five, rather than between one and five. Then you would obviously have no argument at all, since my evaluation of the two people's suffering would obviously not be in the "purely experiential sense" that you talk about. So clearly I am right whenever more than one person is involved. And it would be strange for utilitarianism to be right in all those cases, but not when there was just one person. So it must be right all the time.

The fact that they are separate doesn't mean that their content is any different from the experience of the one person. Certainly, the amount of pain they involve isn't any different.

Yes, each of the 5 minor headaches spread among the 5 people are phenomenally or qualitatively the same as each of the 5 minor headaches of the one person. The fact that the headaches are spread does not mean that any of them, in themselves, feel any different from any of the 5 minor headaches of the one person. A minor headache feels like a minor headache, irrespective of ... (read more)

0
Alex_Barry
6y
I just wanted to say I thought this comment did a good job explaining the basis behind your moral intuitions, which I had not really felt a strong motivation for before now. I still don't find it particularly compelling myself, but I can understand why others could find it important.

Overall I find this post confusing though, since the framing seems to be "Effective Altruism is making an intellectual mistake" whereas you just actually seem to have a different set of moral intuitions from those involved in EA, which are largely incompatible with effective altruism as it is currently practiced. Whilst you could describe moral differences as intellectual mistakes, this does not seem to be a standard or especially helpful usage. The comments etc. then just seem to have mostly been people explaining why they don't find your moral intuition that 'non-purely experientially determined' and 'purely experientially determined' amounts of pain cannot be compared compelling. Since we seem to have reached a point where there seems to be a fundamental disagreement about considered moral values, it does not seem that attempting to change each other's minds is very fruitful.

I think I would have found this post more conceptually clear if it had been structured:

1. EA conclusions actually require an additional moral assumption/axiom - and so if you don't agree with this assumption then you should not obviously follow EA advice.
2. (Optionally) Why you find the moral assumption unconvincing/unlikely.
3. (Extra Optionally) Tentative suggestions for what should be done in the absence of the assumption.

Where throughout the assumption is the commensurability of 'non-purely experientially determined' and 'purely experientially determined' experience.

In general I am not very sure what you had in mind as the ideal outcome of this post. I'm surprised if you thought most EAs agreed with you on your moral intuition, since so much of EA is predicated on its converse (as is much of est
0
kbog
6y
Just because two things are different doesn't mean they are incommensurate. It is easy to compare apples and oranges: for instance, the orange is healthier than the apple, the orange is heavier than the apple, the apple is tastier than the orange. You also compare two different things, by saying that a minor headache is less painful than torture, for instance. You think that different people's experiences are incommensurable, but I don't see why. In fact, there is good reason to think that any two values are necessarily commensurable. For if something has value to an agent, then it must provide motivation to them should they be perceiving, thinking and acting correctly, for that is basically what value is. If something (e.g. an additional person's suffering) does not provide additional motivation, then either I'm not responding appropriately to it or it's not a value. And if my motivation is to follow the axioms of expected utility theory then it must be a function over possible outcomes where my motivation for each outcome is a single number. And if my motivation for an outcome is a single number, then it must take the different values associated with that outcome and combine them into one figure denoting how valuable I find it overall.

1) "The point is that the subject has the same experiences as that of having one headache five times, and therefore has the same experiences as five headaches among five people."

One subject-of-experience having one headache five times = the experience of what-it's-like-of-going-through-5-headaches. (Note that the symbol is an equal sign in case it's hard to see.)

Five headaches among five people = 5 experientially independent experiences of what-it's-like-of-going-through-1-headache. (Note the 5 experiences are experientially independent of each o... (read more)

0
kbog
6y
The fact that they are separate doesn't mean that their content is any different from the experience of the one person. Certainly, the amount of pain they involve isn't any different.

The total amount of suffering. Or, the total amount of well-being.

Because there are multiple people and each of them has their own pain.

The amount of pain experienced among five people.

In the sense that each of them involves more than 1/5 as much pain, and the total pain among 5 feelings is the sum of pain in each of them.

Sure it's experiential, all 10 of the pain is experienced. It's just not experienced by the same person. In the same way that there are more sheep apparitions among five people, each of them dreaming of two sheep, than for one person who is dreaming of six sheep.

But as far as cardinal utility is concerned, both quantities involve the same amount of pain. That's just what you get from the definition of cardinal utility.

That just means I need a different account of "involves more pain than" (which I have) when interpersonal comparisons are being made, but it doesn't mean that my account can't be the same as your account when there is only one person.

But as I have been telling you this entire time, I don't follow your definition of "experientially worse than".

Well, I already did. But it's really just the same as what utilitarians have been writing for centuries so it's not like I had to provide it.

1) "But you are trying to argue about what makes one state of affairs morally worse than another. That is what you are trying to do in the first place. So it's not, and cannot be, preliminary. And if you started from the ground up then it would have contained something that carried force to utilitarians for instance.

If you disagree, try to sketch out a view (that isn't blatantly logically inconsistent) where someone would have agreed with you on Amy/Susan/Bob but disagreed on the headaches."

Arguing for what factors are morally relevant in determi... (read more)

0
kbog
6y
Your scenario didn't say that probabilistic strategies were a possible response, but suppose that they are. Then it's true that, if I choose a 100% strategy, the other person has 0% chance of being saved, whereas if I choose a 99% strategy, the other person has a 1% chance of being saved. But you've given no reason to think that this would be any better. It is bad that one person has a 1% greater chance of torture, but it's good that the other person has 1% less chance of torture. As long as agents simply have a preference to avoid torture, and are following the axioms of utility theory (completeness, transitivity, substitutability, decomposability, monotonicity, and continuity) then going from 0% to 1% is exactly as good as going from 99% to 100%. That's not true. I deny the first person any chance of being helped from torture because it denies the second person any chance of being tortured and it saves the 3rd person from an additional minor pain. I really don't see it as extreme. I'm not sure that many people would. First, I don't see how either of these claims imply that the right answer is 50%. Second, for B), you seem to be simply claiming that interpersonal aggregation of utility is meaningless, rather than making any claims about particular individuals' suffering being more or less important. The problem is that no one is claiming that anyone's suffering will disappear or stop carrying moral force, rather we are claiming that each person's suffering counts for a reason while two reasons pointing in favor of a course of action are stronger than one reason. Again I cannot tell where you got these numbers from. But it does mean that they don't care. If agents don't have special preferences over the chances of the experiences that they have then they just have preferences over the experiences. Then, unless they violate the von Neumann-Morgenstern utility theorem, their expected utility is linear with the probability of getting this or that experience, as o
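For readers who want the arithmetic behind the claim that "going from 0% to 1% is exactly as good as going from 99% to 100%", here is a brief worked restatement under the assumption (added here for illustration, not part of the original comment) that the agent has a von Neumann-Morgenstern utility function $U$ and faces a lottery with probability $p$ of being saved:

$$EU(p) = p\,U(\text{saved}) + (1-p)\,U(\text{tortured})$$

$$EU(0.01) - EU(0) \;=\; 0.01\,\big[U(\text{saved}) - U(\text{tortured})\big] \;=\; EU(1.00) - EU(0.99)$$

Because expected utility is linear in $p$, every 1% shift in the chance of being saved has the same value, wherever on the scale it occurs.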

Hi Telofy, nice to hear from you again :)

You say that you have no intuition for what a subject-of-experience is. So let me say two things that might make it more obvious:

1. Here is how I defined a subject-of-experience in my exchange with Michael_S:

"A subject of experience is just something which "enjoys" or has experience(s), whether that be certain visual experiences, pain experiences, emotional experiences, etc... In other words, a subject of experience is just something for whom there is a "what-it's-like". A building, a rock ... (read more)

Hi kbog, glad to hear back from you.

1) "But I don't have an accurate appreciation of what it's like to be 5 people going through 5 headaches either. So I'm missing out on just as much as the amnesiac. In both cases people's perceptions are inaccurate."

I don't quite understand how this is a response to what I said, so let me retrace some things:

You first claimed that if I believed that 5 minor headaches all had by one person is experientially worse than 5 minor headaches spread across 5 people, then I would be committed to "believing that i... (read more)

0
kbog
6y
The point is that the subject has the same experiences as that of having one headache five times, and therefore has the same experiences as five headaches among five people. There isn't any morally relevant difference between these experiences, as the mere fact that the latter happens to be split among five people isn't morally relevant. So we should suppose that they are morally similar. You think it should be "involves more pain for one person than". But I think it should be "involves more pain total", or in other words I take your metric, evaluate each person separately with your metric, and add up the resulting numbers. It's just plain old cardinal utility: the sum of the amount of pain experienced by each person. Why? In the exact same way that you think they can. Correct, we haven't, because we're not yet doing any interpersonal comparisons. It is distributed - 20% of it is in each of the 5 people who are in pain.

Hi Brian,

I think the reason why you have such a strong intuition of just saving Amy and Susie in a choice situation like the one I described in my previous reply is that you believe Amy's burning to death plus Susie's sore throat involves more or greater pain than Bob's burning to death. Since you think minimizing aggregate pain (i.e. maximizing aggregate utility) is what we should do, your reason for just saving Amy and Susie is clear.

But importantly, I don't share your belief that Amy's burning to death and Susie's sore throat involves more or greater pain tha... (read more)

Hey Brian,

I just wanted to note that another reason why you might not want to use the veil-of-ignorance approach to justify why we should save the greater number is that it would force you to conclude that, in a trade off situation where you can either save one person from an imminent excruciating pain (i.e. being burned alive) or another person from the same severe pain PLUS a third person from a very minor pain (e.g. a sore throat), we should save the second and third person and give 0 chance to the first person.

I think it was F. M. Kamm who first rais... (read more)

0
Brian Wang
6y
Yes, I accept that result, and I think most EAs would (side note: I think most people in society at large would, too; if this is true, then your post is not so much an objection to the concept of EA as it is to common-sense morality as well). It's interesting that you and I have such intuitions about such a case – I see that as in the category of "being so obvious to me that I wouldn't even have to hesitate to choose." But obviously you have different intuitions here.

Part of what I'm confused about is what the positive case is for giving everyone an equal chance. I know what the positive case is for the approach of automatically saving two people vs. one: maximizing aggregate utility, which I see as the most rational, impartial way of doing good. But what's the case for giving everyone an equal chance? What's gained from that? Why prioritize "chances"? I mean, giving Bob a chance when most EAs would probably automatically save Amy and Susie might make Bob feel better in that particular situation, but that seems like a trivial point, and I'm guessing is not the main driver behind your reasoning.

One way of viewing "giving everyone an equal chance" is to give equal priority to different possible worlds. I'll use the original "Bob vs. a million people" example to illustrate. In this example, there are two possible worlds that the donor could create: in one possible world Bob is saved (world A), and in the other possible world a million people are saved (world B). World B is, of course, the world that an EA would create every time. As for world A, well: can we view this possible world as anything but a tragedy? If you flipped a coin and got this outcome, would you not feel that the world is worse off for it? Would you not instantly regret your decision to flip the coin? Or even forget flipping the coin, we can take donor choice out of it; wouldn't you feel that a world where a hurricane ravaged and destroyed an urban community where a million people lived is worse than

Hi Michael,

I removed the comment about worrying that we might not reach a consensus because I worried that it might send you the wrong idea (i.e. that I don't want to talk anymore). It's been tiring, I have to admit, but also enjoyable and helpful. Anyways, you clearly saw my comment before I removed it. But yeah, I'm good with talking on.

I agree that experiences are the result of chemical reactions, however the natures of the relations "X being experientially worse than Y" and "X being greater in number than Y" are relevantly different... (read more)

0
Michael_S
6y
FYI, I'm pretty busy over the next few days, but I'd like to get back to this conversation at some point. If I do, it may be a bit, though.

1) "But if anyone did accept that premise then they would already believe that the number of people suffering doesn't matter, just the intensity. In other words, the only people to whom this argument applies are people who would agree with you in the first place that Amy and Susie's suffering is not a greater problem than Bob's suffering. So I can't tell if it's actually doing any work. If not, then it's just adding unnecessary length. That's what I mean when I say that it's too long. Instead of adding the story with the headaches in a separate counte... (read more)

0
kbog
6y
But you are trying to argue about what makes one state of affairs morally worse than another. That is what you are trying to do in the first place. So it's not, and cannot be, preliminary. And if you started from the ground up then it would have contained something that carried force to utilitarians for instance. If you disagree, try to sketch out a view (that isn't blatantly logically inconsistent) where someone would have agreed with you on Amy/Susan/Bob but disagreed on the headaches. How is it biting a bullet to prefer to save one person being tortured AND one person with a headache, compared to simply saving one person being tortured? I struggle to see how anyone might find that position counterintuitive. Rather, accepting the converse choice seems like biting the bullet. Making the other choice also gives someone no chance of being saved from torture, and it also gives someone no chance of being saved from a headache, so I don't see what could possibly lead one to prefer it. And merely having a "chance" of being saved is morally irrelevant. Chances are not things that exist in physical or experiential terms the way that torture and suffering do. No one gives a shit about merely having a chance of being saved; someone who had a chance of being saved and yet is not saved is no better off than someone who had no chance of being saved from the beginning. The reason that we value a chance of being saved is that it may lead to us actually being saved. We don't sit on the mere fact of the chance and covet it as though it were something to value on its own.

1) "Well I can see how it is possible for someone to believe that. I just don't think it is a justified position, and if you did embrace it you would have a lot of problems. For instance, it commits you to believing that it doesn't matter how many times you are tortured if your memory is wiped each time. Because you will never have the experience of being tortured a second time."

I disagree. I was precisely trying to guard against such thoughts by enriching my first reply to Michael_S with a case of forgetfulness. I wrote, "Now, by the end of... (read more)

0
kbog
6y
But I don't have an accurate appreciation of what it's like to be 5 people going through 5 headaches either. So I'm missing out on just as much as the amnesiac. In both cases people's perceptions are inaccurate. Of course you can define a relation to have that property, but merely defining it that way gives us no reason to think that it should be the focus of our moral concern. If I were to define a relation to have the property of being the target of our moral concern, it wouldn't be impacted by how it were spread across multiple people. Well, so do I. The point is that the mere fact that 5 headaches in one person is worse for one person doesn't necessarily imply that it is worse overall for 5 headaches among 5 people.

1) "Because I don't have any reason to feel different."

Ok, well, that comes as a surprise to me. In any case, I hope after reading my first reply to Michael_S, you at least sort of see how it could be possible that someone like I would feel surprised by that, even if you don't agree with my reasoning. In other words, I hope you at least sort of see how it could be possible that someone who would clearly agree with you that, say, 5 minor headaches all had by 1 tall person is experientially just as bad as 5 minor headaches all had by 1 short person... (read more)

0
kbog
6y
Well I can see how it is possible for someone to believe that. I just don't think it is a justified position, and if you did embrace it you would have a lot of problems. For instance, it commits you to believing that it doesn't matter how many times you are tortured if your memory is wiped each time. Because you will never have the experience of being tortured a second time.

There are two rooms, painted bright orange inside. One person goes into the first room for five minutes, five people go into the second for one minute. If we define orange-perception as the phenomenon of one conscious mind's perception of the color orange, the amount of orange-perception for the group is the same as the amount of orange-perception for the one person.

Something being experiential doesn't imply that it is not quantitative. We can clearly quantify experiences in many ways, e.g. I had two dreams, I was awake for thirty seconds, etc. Or me and my friends each saw one bird, and so on.

Yes, but the question here is whether 5 what-it's-likes-of-going-through-1-minor-headache is 5x worse than 1 minor headache. We can believe this moral claim without believing that the phenomenon of 5 separate headaches is phenomenally equivalent to 1 experience of 5 headaches. There are lots of cases where A is morally equivalent to B even though A and B are physically or phenomenally different.

1) "You simply assert that we would rather save Emma's major headache rather than five minor ones in case 3. But if you've stipulated that people would rather endure one big headache than five minor ones, then the big headache has more disutility. Just because the minor ones are split among different people doesn't change the story. I just don't follow the argument here."

I DO NOT simply assert this. In case 3, I wrote, "Here, I assume you would say that we should save Emma from the major headache or at least give her a higher chance of being... (read more)

0
kbog
6y
But if anyone did accept that premise then they would already believe that the number of people suffering doesn't matter, just the intensity. In other words, the only people to whom this argument applies are people who would agree with you in the first place that Amy and Susie's suffering is not a greater problem than Bob's suffering. So I can't tell if it's actually doing any work. If not, then it's just adding unnecessary length. That's what I mean when I say that it's too long. Instead of adding the story with the headaches in a separate counterargument, you could have just said all the same things about Amy and Susie and Bob's diseases in the first place, making your claim that Amy and Susie's diseases are not experientially worse than Bob's disease and so on. PU says that we should assign moral value on the basis of people's preferences for them. So if someone thinks that being tortured is really really really bad, then we say that it is morally really really really bad. We give the same weight to things that people do. If you say that someone is being risk-averse, that means (iff you're using the term correctly) that they're putting so much effort into avoiding a risk that they are reducing their expected utility. That means that they are breaking at least one of the axioms of the Von Neumann-Morgenstern Utility Theorem, which (one would argue, or assert) means that they are being irrational. Yes to both.

1) "But that involves arbitrarily saving fewer people. I mean, you could call that non-arbitrary, since you have some kind of reason for it, but it's fewer people all the same, and it's not clear how reason or empathy would generally lead one to do this. So there is no prima facie case for the position that you're defending."

To arbitrarily save fewer people is to save them on a whim. I am not suggesting that we should save them on a whim. I am suggesting that we should give each person an equal chance of being saved. They are completely different... (read more)

0
kbog
6y
You simply assert that we would rather save Emma's major headache rather than five minor ones in case 3. But if you've stipulated that people would rather endure one big headache than five minor ones, then the big headache has more disutility. Just because the minor ones are split among different people doesn't change the story. I just don't follow the argument here. My whole point here is that your response to Objection 1 doesn't do any work to convince us of your premises regarding the headaches. Yeah there's an argument, but its premise is both contentious and undefended. I'm not just speaking for utilitarians, I'm speaking for anyone who doesn't buy the premise for choice 3. I expect that lots of non-utilitarians would reject it as well. The original position argument is not an empirical prediction of what humans would choose in such-and-such circumstances, it's an analysis of what we would expect of them as the rational thing to do, so the hedonist utilitarian points out that risk aversion violates the axioms of expected utility theory and it would be rational of people to not make that choice, whereas the preference utilitarian just calibrates the utility scale to people's preferences anyway so that there isn't any dissonance between what people would select and what utilitarianism says.

1) The reason that the conclusions made in such a scenario have a bearing on reality is that the conclusions are necessarily both fair and rational.

The conclusions are rational under the stipulation that each person has an equal chance of being in anybody's position. But it is not actually rational given that the stipulation is false. So you can't just say that the conclusions have a bearing on reality because they are necessarily rational. They are rational under the stipulation, but not when you take into account what is actually the case.

And I don't se... (read more)

1
kbog
6y
The argument of both Rawls and Harsanyi is not that it just happens to be rational for everybody to agree to their moral criteria; the argument is that the morally rational choice for society is a universal application of the rule which is egoistically rational for people behind the veil of ignorance. Of course it's not egoistically rational for people to give anything up once they are outside the veil of ignorance, but then they're obviously making unfair decisions, so it's irrelevant to the thought experiment. Stipulations can't be true or false - they're stipulations. It's a thought experiment for epistemic purposes. The reason we look at what they would agree to from behind the veil of ignorance as opposed to outside is that it ensures that they give equal consideration to everyone, which is a basic principle that appeals to us as a cornerstone of any decent moral system. Also, to be clear, the Original Position argument doesn't say "imagine if Bob had an equal chance of being in Amy's or Susie's position, see how you would treat them, and then treat him that way." If it did, then it would simply not work, because the question of exactly how you should actually treat him would still be undetermined. Instead, the argument says "imagine if Bob had an equal chance of being in Amy's or Susie's position, see what decision rule they would agree to, and then treat them according to that decision rule." The first paragraph of his first comment. This very idea, originally argued by Harsanyi (http://piketty.pse.ens.fr/files/Harsanyi1975.pdf).

1) "Reason and empathy don't tell you to arbitrarily save fewer people."

I never said they tell me to arbitrarily save fewer people. I said that they tell us to give each person an equal chance of being saved.

2) "This doesn't answer the objection."

That premise (as indicated by "P1."), plus my support for that premise, was not meant to answer an objection. It was just the first premise of an argument that was meant to answer objection 1.

3) "There is more suffering when it happens to two people, and more suffering is mora... (read more)

0
kbog
6y
But that involves arbitrarily saving fewer people. I mean, you could call that non-arbitrary, since you have some kind of reason for it, but it's fewer people all the same, and it's not clear how reason or empathy would generally lead one to do this. So there is no prima facie case for the position that you're defending. But you have not argued it, you assumed it, by way of supposing that 5 headaches are worse when they happen to one person than when they happen to multiple people, which presupposes that more total suffering does not necessarily imply worseness in such gedanken. But you need to defend such an implication if you wish to claim that it is not morally worse for more people to suffer an equal amount. Because anyone who buys the basic arguments for helping more people rather than fewer will often prefer to alleviate five minor headaches rather than one major one, regardless of whether they happen to different people or not. OK, well: it's not. Because there is no reason for the distribution of certain wrongs across different people to affect the badness of those wrongs, as our account of the badness of those wrongs does not depend on any facts about the particular people to whom they occur. brianwang712's response based on the Original Position implies that the decision to not prevent 5 minor headaches is wrong, even though he didn't take the time to spell it out. Look, your comments towards him are very long and convoluted. I'm not about to wade through it just to find the specific 1-2 sentences where you go astray. Especially when you stuff posts with "updates" alongside copies of your original comments, I find it almost painful to look through. I don't see why identifying with helping the less fortunate (something which almost everybody does, in some fashion or other) implies that we should hold philosophical arguments to gentle standards. The time and knowledge of people who help the less fortunate is particularly valuable, so one should be wi

Hey kbog,

Thanks for your comment. I never said it was up for debate. Rather, given that it is stipulated, I question whether agreements reached under such stipulations have any force or validity on reality, given that the stipulation is, in fact, false.

Please read my second response to brianwang712 where I imagine that Bob has a conversation with him. I would be curious how you would respond to Bob in that conversation.

0
kbog
6y
The reason that the conclusions made in such a scenario have a bearing on reality is that the conclusions are necessarily both fair and rational. My reply to Bob would be to essentially restate brianwang's original comment, and explain how the morally correct course of action is supported by a utilitarian principle of indifference argument, and that none of the things he says (like the fact that he is not Amy or Susie, or the fact that he is scared) are sound counterarguments.

Hey gworley3,

Here's the comment I made about the difference between effective-altruism and utilitarianism (if you're interested): http://effective-altruism.com/ea/1ll/cognitive_and_emotional_barriers_to_eas_growth/dij

Hey gworley3,

I decided to delete the post seeing that it wasn't getting many responses. Thanks for replying anyways!

Hey Khorton,

Thanks for sharing! For some reason, I totally did not expect faith/religion to come up. Clearly I have not thought broadly enough ><. If I included a new option like

10) I donate/plan to donate because I am of a particular faith/religion that calls on me or requires me to do charitable deeds

do you think that would be more true of you than 1)? How important is it to you that doing charitable deeds is morally good or right? In other words, what if God did not create morality and simply requested that you help others without it being morally good or bad? Do you think you would still do it?

REVISED TO BE MORE CLEAR ON MAR 19:

You also write, "There is more pain (more of these chemical-reaction-based experiences) in the 5 headaches than there is in the 1 whether or not they occur in a single subject. I don't see any reason to treat this differently than the underlying chemical reactions."

Well, to me the reason is obvious: when we say that "5 minor pains in one person is greater than (i.e. worse than) a major pain in one person" we are using "greater than" in an EXPERIENTIAL sense. On the other hand, when we say that 10 ... (read more)

Just to make sure we're on the same page here, let me summarize where we're at:

In choice situation 2 of my paper, I said that, supposing any person would rather endure 5 minor headaches of a certain sort than 1 major headache of a certain sort when put to the choice, a case in which Al suffers 5 such minor headaches is morally worse than a case in which Emma suffers 1 such major headache. And the reason I gave for this is that Al's 5 minor headaches is more painful (i.e. worse) than Emma's major headache.

In choice situation 3, however, the 5 min... (read more)

0
Michael_S
6y
To your first comment, I disagree. I think it's the same thing. Experiences are the result of chemical reactions. Are you advocating a form of dualism where experience is separated from the physical reactions in the brain?

I think there is more total pain. I'm not counting the # of headaches. I'm talking about the total amount of pain.

Can you define S1? We may not, as these discussions tend to go. I'm fine calling it. I think we have to get closer to defining a subject of experience (S1); I think I would need this to go forward.

But here's my position on the issue: I think moral personhood doesn't make sense as a binary concept (the mind from a brain is different at different times, sometimes vastly different, such as in the case of a major brain injury). The matter in the brain is also different over time (ship of Theseus). I don't see a good reason to call these the same person in a moral sense in a way that two minds of two coexisting brains wouldn't be. The conscious experiences are different at different times and between different brains; I see this as a matter of degree of similarity.

Hey Brian,

No worries! I've enjoyed our exchange as well - your latest response is both creative and funny. In particular, when I read "They have read your blog post on the EA forum and decide to flip a coin", I literally laughed out loud (haha). It's been a pleasure : ) If you change your mind and decide to reply, definitely feel welcome to.

Btw, for the benefit of first-time readers, I've updated a portion of my very first response in order to provide more color on something that I originally wrote. In good faith, I've also kept in the response... (read more)

Hey RandomEA (nice to chat again in a different setting lol),

Thanks for linking me to that. I understand moral duty and obligation to mean the same thing. Do you know what difference they had in mind? And 'opportunity' sounds very vague. It doesn't tell us much about the psychology of the survey respondents.

Hey adamaero,

I agree that reasons change! But I would be curious what your current reason is :P (don't worry if you don't want to say)

Also, can you tell me which count as justifications and which count as reasons, and what the difference between a reason and a justification is for you?

I understand myself to be using the word 'reason' to mean cause here, but 'reason' can also be used to mean justification, since in everyday parlance it is a pretty loose term. Something similar can be said for the words 'why' and 'because'.

As I see it, the real distinction ... (read more)

0
adamaero
6y
I do not mean "the reason" can change--I just do not think you can reduce someone's worldview, Weltanschauung, into one simple reason (unless maybe for #6). Regardless, I don't think a survey here would be representative anyway.

1) A subject of experience is just something which "enjoys" or has experience(s), whether that be certain visual experiences, pain experiences, emotional experiences, etc... In other words, a subject of experience is just something for whom there is a "what-it's-like". A building, a rock or a plant is not a subject of experience because it has no experience(s). That is, for example, why we don't feel concerned when we step on grass: it doesn't feel pain or feel anything. On the other hand, a cow is a subject-of-experience - it presumab... (read more)

0
Michael_S
6y
That's what I'm interested in a definition of. What makes it a "single subject"? How is this a binary term? I am making a greater than/less than comparison. That comparison is with pain, which results from the neural chemical reactions. There is more pain (more of these chemical-reaction-based experiences) in the 5 headaches than there is in the 1 whether or not they occur in a single subject. I don't see any reason to treat this differently than the underlying chemical reactions. No problem on the caps.

Hey Cassidy,

Very well-written post! I didn't read his book, but just going off your summary of his view, where you characterize him as "asserting that knowledge and technology will alleviate most of our persisting worries in time" and where you quote him saying, "… there is no limit to the betterments we can attain if we continue to apply knowledge to enhance human flourishing," I am curious how much weight you, as well as Pinker, give to

1) empathy (i.e. the ability to imagine oneself in the shoes of another - to imagine what it might be like for... (read more)

Hey RandomEA,

Sorry for the late reply. Well, say I'm choosing between the World Food Programme (WFP) and some other charity, and I have $30 to donate. According to WFP, $30 can feed a person for a month (if I remember correctly). If I donate to the other charity, then WFP in its next operation will have $30 less to spend on food, meaning someone who otherwise would have been helped won't be receiving help. Who that person is, we don't know. All we know is that he is the person who was next in line, the first to be turned away.

Now, you disagree with this. ... (read more)

1) I agree that the me today is different from the me yesterday, but I would say this is a qualitative difference, not a numerical difference. I am still the numerically same subject-of-experience as yesterday's me, even though I may be qualitatively different in various physical and psychological ways from yesterday's me. I also agree that the me today is different from the you today, but here I would say that the difference is not merely qualitative, but numerical too. You and I are numerically different subjects-of-experience, not just qualitatively dif... (read more)

0
Michael_S
6y
1) I'd like to know what your definition of "subject-of-experience" is.

2) For this to be true, I believe you would need to posit something about "conscious experience" that is entirely different from everything else in the universe. If, say, factory A produces 15 widgets, factory B produces 20 widgets, and factory C produces 15 widgets, I believe we'd agree that the number of widgets produced by A+C is greater than the number of widgets produced by B, no matter how independent the factories are. Do you disagree with this? Similarly, I'd say that if 15 neural impulses occur in brain A, 20 in brain B, and 15 in brain C, the number of neural impulses is greater in A+C than in B. Do you disagree with this? Conscious experiences are a product of such neural chemical reactions. Do you disagree with this? Given this, it seems odd to then postulate that even though all the ingredients are the same and are additive between individuals, the conscious product is not. It seems arbitrary, it is not needed to explain anything, and there is no reason to believe it is true.

Hi Telofy,

Thanks for this lucid reply. It has made me realize that it was a mistake to use the phrase "clear experiential sense" because that misleads people into thinking that I am referring to some singular experience (e.g. some feeling of exhaustion that sets in after the final headache). In light of this issue, I have written a "new" first reply to Michael_S to try to make my position clearer. I think you will find it helpful. Moreover, if you find any part of it unclear, please do let me know.

What I'm about to say overlaps with som... (read more)

1
Dawn Drescher
6y
Hi Jeff! To just briefly answer your question, “Are you concluding from this that there is not actually a single subject-of-experience”: I don’t have an intuition for what a subject-of-experience is – if it is something defined along the lines of the three characteristics of continuous person moments from my previous message, then I feel that it is meaningful but not morally relevant, but if it is defined along the lines of some sort of person essentialism then I don’t believe it exists on Occam’s razor grounds. (For the same reason, I also think that reincarnation is metaphysically meaningless because I think there is no essence to a person or a person moment besides their physical body* until shown otherwise.)

* This is imprecise but I hope it’s clear what I mean. People are also defined by their environment, culture, and whatnot.

Hi Jonathan,

Thanks for directing me to Scanlon's work. I am adequately familiar with his view on this topic, at least the one he puts forward in What We Owe to Each Other. There, he put forward an argument, meant to respect the separateness of persons, for why we should save the greater number in a choice situation like the one involving Bob, Amy and Susie; but his argument has been well refuted by people like Michael Otsuka (2000, 2006).

Regarding your second point, what reason can you give for giving each person less than the maximum ... (read more)
