*Epistemic status: just my own thoughts and ideas, so it might well be that this is rubbish. I do believe, however, that this is a crucial consideration when evaluating the plausibility of Average Utilitarianism.*

**TLDR:**

*Christian Tarsney argued that Average Utilitarians should be egoists. I refute his argument using the Self-Indication Assumption.*

This is mainly a response to Tarsney's post/paper, in which he argues against Average Utilitarianism. If you haven't read it, you should still be able to understand this post perfectly well, as I go through the main points of his argument. This post might be relevant for people involved in Global Priorities Research and for people who generally care about which moral theories are plausible. A lively discussion of Tarsney's paper can be found here.

# Introduction

How ought we to act, given the possibility that we might be the only real person in the universe? Most moral theories let us neglect the small probabilities of such absurd-seeming considerations. Tarsney discovered that Average Utilitarianism gives huge weight to such solipsistic considerations; he calls this effect solipsistic swamping. In this post, I show that the effect of solipsistic swamping is exactly canceled out by taking anthropic evidence into account via the Self-Indication Assumption.

First, I am going to recount the argument for solipsistic swamping and lay out the Self-Indication Assumption. Then I am going to calculate the way in which these two considerations cancel out.

# Solipsistic swamping: a challenge to Average Utilitarianism

Solipsism is the belief that one is the only real person in the universe. Average Utilitarianism is a moral theory that prescribes maximizing average wellbeing. There are different versions of Average Utilitarianism, which differ in how ``average wellbeing'' is defined (Hurka, 1982).

In comparison to Total Utilitarianism, Average Utilitarianism has the nice property that it avoids the Repugnant Conclusion: unlike Total Utilitarianism, it does not prefer a huge number of people whose lives are barely worth living over a smaller number of people who lead happy lives (Parfit, 1984, p. 420).

In his paper ``Average Utilitarianism Implies Solipsistic Egoism'', Christian J. Tarsney identifies an interesting flaw in Average Utilitarianism: even a tiny credence in solipsism leads expected average utility maximizers to act egoistically. He calls this effect solipsistic swamping (Tarsney, 2020).

Let us see how solipsistic swamping plays out numerically by looking at Alice the average utility maximizer and Tom the total utility maximizer. Let $c_s$ be the credence of Alice and Tom in solipsism. Tarsney does a Fermi estimate of a rational credence in solipsism and concludes that it should be small but non-negligible. Let furthermore $N_p$ be the total number of people there are over the course of human history. Tarsney estimates this number to be very large, and many orders of magnitude bigger when we consider non-human animals.

Alice and Tom now face the choice of either giving themselves 1 unit of wellbeing or giving other people $b$ units of wellbeing. How will they act?

To see what Tom will do, we have to calculate the expected total utility of the two action alternatives:

$$E[U_{tot}(\text{selfish})] = c_s \cdot 1 + (1-c_s) \cdot 1 = 1$$

$$E[U_{tot}(\text{altruistic})] = c_s \cdot 0 + (1-c_s) \cdot b$$

This calculation computes the expected total utilities of Tom acting selfishly and of Tom acting altruistically, by multiplying the probability that solipsism is true/false with the utility generated in each eventuality. Tom will do the action with the higher expected total utility.

So Tom will act altruistically as long as the wellbeing $b$ he provides others is bigger than $\frac{1}{1-c_s}$, the reciprocal of his credence that solipsism is false. For a small credence in solipsism, this leads to him valuing his own wellbeing only slightly higher than the wellbeing of others.
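As a quick numerical sketch of this comparison (the function and the example numbers are mine, chosen purely for illustration):

```python
# Expected total utility of Tom's two options, following the formulas above.
# c_s: credence in solipsism; b: wellbeing the altruistic act gives to others.

def expected_total_utility(c_s, b):
    selfish = c_s * 1 + (1 - c_s) * 1    # Tom benefits himself either way
    altruistic = (1 - c_s) * b           # others only exist if solipsism is false
    return selfish, altruistic

# With a small credence in solipsism, any b above 1/(1 - c_s) already suffices:
selfish, altruistic = expected_total_utility(1e-3, 1.01)
print(altruistic > selfish)  # True: 0.999 * 1.01 > 1
```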

To see what Alice will do, we have to do the same with the average utility:

$$E[\Delta\bar{U}(\text{selfish})] = c_s \cdot 1 + (1-c_s) \cdot \frac{1}{N_p}$$

$$E[\Delta\bar{U}(\text{altruistic})] = c_s \cdot 0 + (1-c_s) \cdot \frac{b}{N_p}$$

Here, I have computed the expected changes in average utility produced by her two actions.

So Alice will only act altruistically if the wellbeing she can provide others exceeds the threshold of $\frac{c_s N_p}{1-c_s} + 1$, which is really high in a highly populated universe, even for a very low credence in solipsism.

Tarsney shows that, with his estimated values, Alice would rather provide herself 1 unit of wellbeing than provide 1000 other people with 1000 units of wellbeing.
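The same comparison for Alice, as a sketch with illustrative numbers of my own choosing (not Tarsney's exact estimates):

```python
# Expected change in average utility for Alice, following the formulas above.
# c_s: credence in solipsism; N_p: number of people if solipsism is false;
# b: wellbeing the altruistic act gives to others.

def expected_avg_utility(c_s, N_p, b):
    selfish = c_s * 1 + (1 - c_s) / N_p    # raising her own wellbeing by 1
    altruistic = (1 - c_s) * b / N_p       # raising others' wellbeing by b
    return selfish, altruistic

# Illustrative values: even a tiny credence in solipsism swamps a big world.
# The threshold c_s * N_p / (1 - c_s) + 1 is about 10^7 here, so b = 10^6 fails:
selfish, altruistic = expected_avg_utility(1e-4, 1e11, 1000 * 1000)
print(altruistic > selfish)  # False: Alice acts selfishly
```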

This can be seen as an argument against Average Utilitarianism or for egoism.

Finding another good argument against solipsism might avoid solipsistic swamping for now by reducing $c_s$, but if we ever found large alien civilizations, or discovered that grass is sentient, we would have to return to egoism. This prospect seems absurd. The only thing that can counteract solipsistic swamping is an argument against solipsism that scales with the number of persons in the universe.

# The Self-Indication Assumption: reasoning under anthropic bias

The Self-Indication Assumption is one of several competing methods of anthropic reasoning, that is, reasoning about things that had an influence on your own existence.

The Self-Indication Assumption, in a nutshell, says: *Reason as though you were a randomly selected person out of all possible persons.*

Its main competitor is the Self-Sampling Assumption, which says the following:

*Reason as though you were a randomly selected person out of all actual persons.*

(Bostrom, 2003)

To see how this difference pans out in practice, let us look at an example:

Imagine God revealed to you that she flipped a fair coin at the beginning of the universe. If the coin came up heads, she created a universe with one planet that hosts a civilization. If the coin came up tails, she created 99 planets hosting civilizations, billions of light-years apart from each other. What should your credence be that the coin came up tails?

Using the Self-Sampling Assumption, you should arrive at a credence of $\frac{1}{2}$. The coin is a fair coin, and you have no further evidence to go on.

Using the Self-Indication Assumption, you should arrive at a credence of $\frac{99}{100}$. Assuming that all civilizations are equally populated, 99 out of 100 possible persons live in a universe where God has thrown tails.
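The two answers can be sketched as follows (assuming, as above, equally populated civilizations; the function names are mine):

```python
# God's coin toss: credence that the coin came up tails.
# Heads -> 1 civilization; tails -> 99 civilizations.

def ssa_credence_tails():
    # SSA samples from actual persons only: your existence tells you nothing
    # about how many persons there are, so only the fair coin counts.
    return 1 / 2

def sia_credence_tails(heads_people=1, tails_people=99):
    # SIA samples from all possible persons: each hypothesis is weighted
    # by the number of persons it contains.
    return (0.5 * tails_people) / (0.5 * heads_people + 0.5 * tails_people)

print(ssa_credence_tails(), sia_credence_tails())  # 0.5 0.99
```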

One advantage of the Self-Indication Assumption is that it refutes the Doomsday Argument, which concludes, from anthropic evidence alone, that the world is likely to end soon (Bostrom, 2003).

# Calculating solipsistic swamping with the Self-Indication Assumption

In his estimations, Tarsney does not take any anthropic evidence into account when calculating his credence in solipsism. By doing this, he implicitly assumes the Self-Sampling Assumption. Let us now look at where it leads us if we assume the Self-Indication Assumption instead.

Let us say we have a credence of $c_s$ in solipsism before taking any anthropic evidence into account. If solipsism is true, there is only 1 person, and if solipsism is false, there are $N_p$ persons. From the observation that we are indeed a person ourselves, this leads us to a credence of $\frac{c_s}{c_s + (1-c_s)N_p}$ in solipsism.
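This update can be written as a one-line function (a sketch; the numbers are only illustrative):

```python
def sia_posterior_solipsism(c_s, N_p):
    # P(solipsism | I exist) under SIA: weight the solipsistic world by its
    # 1 person and the non-solipsistic world by its N_p persons.
    return c_s * 1 / (c_s * 1 + (1 - c_s) * N_p)

# Even a fairly high prior credence collapses in a big world:
print(sia_posterior_solipsism(1e-2, 1e11))  # roughly 1e-13
```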

Let us now see how Alice and Tom choose between the selfish act and the altruistic act under these conditions.

Tom the total utilitarian still chooses the altruistic action as long as $b$ is bigger than 1 and his initial credence in solipsism is not too high.

Alice the average utilitarian now also chooses the altruistic action for reasonably small initial credences in solipsism. Her choice does not even depend on the total number of persons in the universe. This means that finding alien civilizations or grass sentience does not impact Alice's practical level of altruism, which is a nicely intuitive property.
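To see why $N_p$ drops out, plug the SIA-updated credence $c_s' = \frac{c_s}{c_s + (1-c_s)N_p}$ into Alice's altruism threshold $b > 1 + \frac{c_s' N_p}{1-c_s'}$ from the first section (a derivation sketch, using the symbols defined above):

```latex
1 + \frac{c_s' N_p}{1-c_s'}
  = 1 + \frac{\dfrac{c_s N_p}{c_s + (1-c_s)N_p}}{\dfrac{(1-c_s)N_p}{c_s + (1-c_s)N_p}}
  = 1 + \frac{c_s}{1-c_s}
  = \frac{1}{1-c_s}
```

This is exactly Tom's threshold, so the total number of persons cancels out of Alice's decision.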

# Conclusion

Solipsistic swamping shows clearly that average utilitarians should behave egoistically under the Self-Sampling Assumption. Under the Self-Indication Assumption, on the other hand, this conclusion is refuted. This is a solid rebuttal for any average utilitarians who do not want to adhere to a moral theory that preaches egoism. I find it astonishing how elegantly the numbers work out, such that the total number of persons cancels exactly. It reminds me of the elegant way the Self-Indication Assumption refutes the Doomsday Argument, and it might be seen as evidence for the Self-Indication Assumption.

## Literature

Christian J. Tarsney (2020): ``Average Utilitarianism Implies Solipsistic Egoism''

Nick Bostrom, Milan M. Ćirković (2003): ``The Doomsday Argument and the Self-Indication Assumption: Reply to Olum'', The Philosophical Quarterly, Volume 53

Derek Parfit (1984): ``Reasons and Persons'', published: Oxford University Press

T. M. Hurka (1982): ``Average Utilitarianisms'', Analysis, Vol. 42, published: Oxford University Press

I think this is a really interesting observation.

But I don't think it's fair to say that average utilitarianism "avoids the repugnant conclusion".

If the world contains only a million individuals whose lives are worse than not existing (-100 utils each), and you are choosing between two options: (i) creating a million new individuals who are very happy (50 utils each) or (ii) creating N new individuals whose lives are barely worth living (x utils each), then for any x, however small, there is some N where (ii) is preferred, even under average utilitarianism.
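A quick numeric check of this example (the helper function and the concrete N and x are mine):

```python
# Average utility after adding people to a base population of a million
# individuals at -100 utils each.

def average_after_adding(base_n, base_u, new_n, new_u):
    return (base_n * base_u + new_n * new_u) / (base_n + new_n)

avg_i = average_after_adding(10**6, -100, 10**6, 50)        # option (i)
avg_ii = average_after_adding(10**6, -100, 10**10, 0.01)    # option (ii)

# For x = 0.01 and N = 10^10, option (ii) already beats option (i):
print(avg_i, avg_ii > avg_i)  # -25.0 True
```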

There are many serious problems with average utilitarianism, not least that it doesn't remove the repugnant conclusion anyway. So although I think this refutation of solipsistic swamping makes sense and is interesting, I don't think it increases my credence in average utilitarianism very much.

Indeed, whether AU avoids the RC in practice depends on your beliefs about the average welfare in the universe. In fact, average utilitarianism reduces to critical-level utilitarianism with the critical level being the average utility, in a large enough world (in uncertainty-free cases).

Personally, I find the worst part of AU to be the possibility that, if the average welfare is already negative, adding bad lives to the world can make things better, and this is what rules it out for me.

I think you are correct that there are RC-like problems that AU faces (like the ones you describe), but the original RC (for any population leading happy lives, there is a bigger population leading lives barely worth living whose existence would be better) can be refuted.

$c_s$ is the probability that there's only one mind, not also that it's my mind, right? Maybe I'm being pedantic, but solipsism already implies that this mind is mine, so this was kind of confusing to me. I thought you were using $c_s$ to refer to Tarsney's range, but it seems you're referring to a more general hypothesis (of which solipsism is a special case, with the mind being me).

With $S$ = "there's only one mind (whether mine or not)",

$$P(S \mid \text{I exist}) = \frac{P(\text{I exist} \mid S)\,P(S)}{P(\text{I exist} \mid S)\,P(S) + P(\text{I exist} \mid \neg S)\,P(\neg S)} = \frac{P(\text{I exist} \mid S)\,c_s}{P(\text{I exist} \mid S)\,c_s + P(\text{I exist} \mid \neg S)\,(1-c_s)}$$

What are each of the probabilities here supposed to be? $P(\text{I exist} \mid S) = \frac{1}{N}$ and $P(\text{I exist} \mid \neg S) = \frac{N_p}{N}$, where $N$ = the number of possible people?

I think the way you put it makes sense, and if you put the numbers in, you get to the right conclusion. The way I think about this is slightly different, but (I think) equivalent:

Let $\{H_n\}$ be the set of all possible persons, and $\{p_n\}$ the probabilities of them existing. The probability that you are the person $H_m$ is $\frac{p_m}{\sum_n p_n}$. Let's say some but not all possible people have red hair. The subset of possible people with red hair is $\{H_s\} \subseteq \{H_n\}$. Then the probability that you have red hair is $\frac{\sum_s p_s}{\sum_n p_n}$.

In my calculations in the post, the set of all possible people is the one solipsistic guy and the $N_p$ people in the non-solipsistic universe (with their probabilities of existence being $c_s$ and $1-c_s$ respectively). So the probability that you are in a world where solipsism is true is $\frac{c_s}{c_s \cdot 1 + (1-c_s) N_p}$.
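This framework can be sketched directly (function name mine); with the possible people described here, it reproduces the formula from the post:

```python
# {H_n}: possible persons with existence probabilities {p_n}.
# P(you are in a subset) = sum of p over the subset / sum of all p.

def prob_you_are_in_subset(p_all, subset_indices):
    return sum(p_all[i] for i in subset_indices) / sum(p_all)

# One solipsistic person (prob c_s) plus N_p people in the
# non-solipsistic world (prob 1 - c_s each):
c_s, N_p = 1e-2, 10**6
p_all = [c_s] + [1 - c_s] * N_p
print(prob_you_are_in_subset(p_all, [0]))  # about 1e-8, i.e. c_s / (c_s + (1-c_s)*N_p)
```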

(I'm not very familiar with anthropic reasoning, or SSA and SIA specifically.)

Could you elaborate on this? Solipsism doesn't just mean that there's only one mind that exists, but also that it's my mind. Here's Tarsney's discussion of his estimates:

Also, what about the possibility of infinitely many individuals, which actually seems pretty likely (if you count spatially (or temporally) separated identical individuals separately, or allow the possibility that there could be infinitely many non-identical individuals, maybe increasing without bound in "brain" size)? If you had one of two possibilities, either 1) just you, or 2) infinitely many individuals, ~~your approach would imply that rational credence in solipsism is 0~~, but solipsism is not a logical certainty conditional on these two hypotheses, so this seems wrong. More generally, it just seems wrong to me that credence in solipsism should approach 0 as the number of (possible?) individuals increases without bound. There should be some lower bound that does not depend on the number of possible individuals (and it could be related to the infinite case, through conditional probabilities). I think this is basically what Tarsney is getting at.

1. Elaborating on why I think Tarsney implicitly assumes SSA:

You are right that Tarsney does not take any anthropic evidence into account. Therefore it might be more accurate to say that he forgot about anthropics or does not think it is important. However, it just so happens that assuming the Self-Sampling Assumption would not change his credence in solipsism at all: if you are a random person from all actual persons, you cannot take your existence as evidence of how many people exist. So by not taking anthropic reasoning into account, he gets the same result as if he had assumed the Self-Sampling Assumption.

2. Doesn't the Self-Indication Assumption say that the universe is almost surely infinite?

Yes, that is the great weakness of the SIA. You are also completely correct that we need some kind of more sophisticated mathematics if we want to take into account the possibility of infinitely many people. But even if we just consider the possibility of very many people existing, the SIA yields weird results. See for example Nick Bostrom's thought experiment of the presumptuous philosopher (text copy-pasted from here):

It is the year 2100 and physicists have narrowed down the search for a theory of everything to only two remaining plausible candidate theories, T1 and T2 (using considerations from super-duper symmetry). According to T1 the world is very, very big but finite, and there are a total of a trillion trillion observers in the cosmos. According to T2, the world is very, very, very big but finite, and there are a trillion trillion trillion observers. The super-duper symmetry considerations seem to be roughly indifferent between these two theories. The physicists are planning on carrying out a simple experiment that will falsify one of the theories. Enter the presumptuous philosopher: "Hey guys, it is completely unnecessary for you to do the experiment, because I can already show to you that T2 is about a trillion times more likely to be true than T1 (whereupon the philosopher runs the God’s Coin Toss thought experiment and explains Model 3)!"
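The philosopher's update is just SIA's observer-count weighting; a one-line check (function name mine, observer counts from the quote):

```python
# Posterior odds of T2 over T1 under SIA, with indifferent priors.
# T1: a trillion trillion observers (10^24); T2: 10^36 observers.

def sia_odds(n_observers_1, n_observers_2, prior_1=0.5, prior_2=0.5):
    return (prior_2 * n_observers_2) / (prior_1 * n_observers_1)

print(sia_odds(10**24, 10**36))  # about 10^12: "a trillion times more likely"
```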

Your evidenceless prior on the number of individuals must be asymptotically 0 as the number increases (except for a positive probability for infinity), or else the probabilities won't sum to one. Maybe this solves some of the issue?

Of course, we have strong evidence that the number is in fact pretty big as Tarsney points out, based on estimates of how many conscious animals have existed so far. And your prior is underdetermined.

EDIT: I guess you'd need to make more distributional assumptions, since there's no uniform distribution over infinitely many distinct elements, or you'd draw infinitely many individuals with duplicates from a finite set, and your observations wouldn't distinguish you from your duplicates.

~~Adding to this, I think it would follow from your argument that the credence you must assign to the universe having infinitely many individuals must be 0 or 1, which seems to prove too much. You could repeat your argument, but this time with any fixed finite number of individuals instead of 1 for solipsism, and infinitely many individuals as the alternative, and your argument would show that you must assign credence 0 to the first option and so 1 to the infinite.~~

~~For each natural number $k \in \mathbb{N}$, with $N$ representing the number of actual people, you could show that $P(N=k \mid \text{I exist and } (N=k \text{ or } N=\infty)) = 0$, and so~~

~~$P(N<\infty \mid \text{I exist}) = \sum_{k=1}^{\infty} P(N=k \mid \text{I exist}) \le \sum_{k=1}^{\infty} P(N=k \mid \text{I exist and } (N=k \text{ or } N=\infty)) = 0$, and hence $P(N=\infty \mid \text{I exist}) = 1$.~~

~~(This is assuming the number of individuals must be countable. I wouldn't be surprised if the SIA has larger cardinals always dominate in the same way infinity does over each finite number. But there's no largest cardinal, although maybe we can use the class of all cardinal numbers? What does this even mean anymore? Or maybe we just need to prove that the number of individuals can't be larger than some particular cardinal.)~~

Interesting application of SIA, but I wonder if it shows too much to help average utilitarianism.

SIA seems to support metaphysical pictures in which more people actually exist. This is how you discount the probability of solipsism. But do you think you can simultaneously avoid the conclusion that there are an infinite number of people?

This would be problematic: if you're sure that there are an infinite number of people, average utilitarianism won't offer much guidance because you almost certainly won't have any ability to influence the average utility.

Very interesting point, I have not thought of this.

I do think, however, that SIA, Utilitarianism, SSA, and Average Utilitarianism all kind of break down once we have an infinite number of people. I think people like Bostrom have thought about infinite ethics, but I have not read anything on that topic.

I agree that there are challenges for each of them in the case of an infinite number of people. My impression is that total utilitarianism can handle infinite cases pretty respectably, by supplementing the standard maxim of maximizing utility with a dominance principle to the effect of 'do what's best for the finite subset of everyone that you're capable of affecting', though it also isn't something I've thought about too much either. I initially was thinking that average utilitarians can't make a similar move without undermining its spirit, but maybe they can. However, if they can, I suspect they can make the same move in the finite case ('just focus on the average among the population you can affect') and that will throw off your calculations. Maybe in that case, if you can only affect a small number of individuals, the threat from solipsism can't even get going.

In any case, I would hope that SIA is at least able to accommodate an infinite number of possible people, or the possibility of an infinite number of people, without becoming useless. I take it that there are an infinite number of epistemically possible people, and so this isn't just an exercise.