tobycrisford


Saving Average Utilitarianism from Tarsney - Self-Indication Assumption cancels solipsistic swamping.

I think this is a really interesting observation.

But I don't think it's fair to say that average utilitarianism "avoids the repugnant conclusion".

If the world contains only a million individuals whose lives are worse than not existing (-100 utils each), and you are choosing between two options: (i) creating a million new individuals who are very happy (50 utils each), or (ii) creating N new individuals whose lives are barely worth living (x utils each), then for any positive x, however small, there is some N for which (ii) is preferred, even under average utilitarianism.
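To spell the arithmetic out (just restating the numbers above):

```latex
% Average utility under each option, with the numbers from the example:
\begin{align*}
\text{(i)}:\quad & \frac{10^6 \cdot (-100) + 10^6 \cdot 50}{2 \cdot 10^6} = -25 \\
\text{(ii)}:\quad & \frac{10^6 \cdot (-100) + N \cdot x}{10^6 + N} \;\longrightarrow\; x \quad \text{as } N \to \infty
\end{align*}
```

For any positive x the average in (ii) eventually climbs above -25, so a large enough N makes (ii) the better option by the average utilitarian's own lights.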

There are many serious problems with average utilitarianism, not least that it doesn't remove the repugnant conclusion anyway. So although I think this refutation of solipsistic swamping makes sense and is interesting, I don't think it increases my credence in average utilitarianism very much.

Incompatibility of moral realism and time discounting

This is a beautiful thought experiment, and a really interesting argument. I wonder if saying that it shows an incompatibility between moral realism and time discounting is too strong though? Maybe it only shows an incompatibility between time discounting and consequentialism?

Under non-consequentialist moral theories, it is possible for different moral agents to be given conflicting aims. For example, some people believe that we have a special obligation towards our own families. Suppose that in your example, Anna and Christoph are moving towards their respective siblings, and we neglect relativistic effects. In that case, both Anna and Christoph might agree that it is right for Anna to take the carrot, and that it is also right for Christoph to take the carrot, even though these aims conflict. This is not inconsistent with moral realism.

Similarly, in the relativistic case, we could imagine believing in the moral rule that "everyone should be concerned with utility in their own inertial frame", together with some time discounting principle. Both Anna and Christoph would believe in the true statements "Anna should take the carrot" and "Christoph should take the carrot". They would acknowledge that their aims conflict, but that is not inconsistent with moral realism.

I think the analogy here is quite strong, because you could imagine a time discounter defending their point of view by saying we have stronger obligations to those closer to us in time, in the same way that we might have stronger obligations towards those closer to us in space, or genetically.

On the other hand, when you consider General Relativity, there are no global inertial frames, so it's interesting to imagine how a steelmanned time discounter would adapt the "everyone should be concerned with utility in their own inertial frame" principle to be consistent with General Relativity. Maybe anything they try would have some weird consequences.

What are some low-information priors that you find practically useful for thinking about the world?

I think I disagree with your claim that I'm implicitly assuming independence of the ball colourings.

I start by looking for the maximum entropy distribution within all possible probability distributions over the 2^100 possible colourings. Most of these probability distributions do not have the property that balls are coloured independently. For example, if the distribution was a 50% probability of all balls being red, and 50% probability of all balls being blue, then learning the colour of a single ball would immediately tell you the colour of all of the others.

But it just so happens that for the probability distribution which maximises the entropy, the ball colourings do turn out to be independent. If you adopt the maximum entropy distribution as your prior, then learning the colour of one ball tells you nothing about the others. This is an output of the calculation, rather than an assumption.
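Here's a quick brute-force check of both claims, shrunk to 10 balls so that all 2^10 colourings can be enumerated directly (the choice of 10 balls and the helper function are just mine, for illustration):

```python
from itertools import product

N = 10  # 10 balls instead of 100, so all 2**N colourings can be enumerated

colourings = list(product("RB", repeat=N))  # every possible colouring

def p_last_red_given_rest_red(prior):
    """P(last ball red | first N-1 balls red) under a prior over colourings."""
    p_rest_red = sum(p for c, p in prior.items() if all(x == "R" for x in c[:-1]))
    p_all_red = sum(p for c, p in prior.items() if all(x == "R" for x in c))
    return p_all_red / p_rest_red

# Maximum entropy prior: every colouring equally likely
maxent = {c: 1 / len(colourings) for c in colourings}

# A non-independent prior: 50% all red, 50% all blue
all_or_nothing = {tuple("R" * N): 0.5, tuple("B" * N): 0.5}

print(p_last_red_given_rest_red(maxent))          # 0.5: colours are independent
print(p_last_red_given_rest_red(all_or_nothing))  # 1.0: one ball reveals the rest
```

Under the uniform (maximum entropy) prior the conditional probability is exactly 1/2, while under the all-red-or-all-blue prior a single ball fixes every other colour.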

I think I agree with your last paragraph, although there are some real problems here that I don't know how to solve. Why should we expect any of our existing knowledge to be a good guide to what we will observe in the future? It has been a good guide in the past, but so what? Seeing 99 red balls apparently doesn't tell us that the 100th is likely to be red, for certain seemingly reasonable choices of prior.

I guess what I was trying to say in my first comment is that the maximum entropy principle is not a solution to the problem of induction, or even an approximate solution. Ultimately, I don't think anyone knows how to choose priors in a properly principled way. But I'd very much like to be corrected on this.

What are some low-information priors that you find practically useful for thinking about the world?

I think I disagree that that is the right maximum entropy prior in my ball example.

You know that you are drawing balls without replacement from a bag containing 100 balls, which can only be coloured blue or red. The maximum entropy prior given this information is that every one of the 2^100 possible colourings {Ball 1, Ball 2, Ball 3, ...} -> {Red, Blue} is equally likely (i.e. from the start the probability that all balls are red is 1 over 2^100).
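(For completeness, this is just the standard fact that entropy over a finite set of outcomes is maximised by the uniform distribution:)

```latex
H(p) = -\sum_{c} p(c)\,\log p(c) \;\le\; \log\!\big(2^{100}\big),
\quad \text{with equality iff } p(c) = 2^{-100} \text{ for every colouring } c.
```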

I think the model you describe is only the correct approach if you make an additional assumption: that all balls were coloured by an identical procedure, each assigned to red or blue with some unknown, but constant, probability p. That is an extra assumption, and assuming the same unknown p for every ball is actually a very strong one.
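To see just how strong, compare the two models' predictions after 99 red draws. Taking a uniform prior on the unknown p (one natural choice, but still a choice), the constant-p model gives Laplace's rule of succession:

```latex
P(\text{ball 100 red} \mid \text{99 red draws})
  = \frac{\int_0^1 p^{100}\,dp}{\int_0^1 p^{99}\,dp}
  = \frac{1/101}{1/100}
  = \frac{100}{101},
```

versus exactly 1/2 under the prior that treats all 2^100 colourings as equally likely.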

If you want to adopt the maximum entropy prior consistent with the information I gave in the set-up of the problem, you'd adopt a prior where each of the 2^100 possible colourings is equally likely.

I think this is the right way to think about it anyway.

The re-parametrisation example is very nice though, I wasn't aware of that before.

What are some low-information priors that you find practically useful for thinking about the world?

The maximum entropy principle can give implausible results sometimes though. If you have a bag containing 100 balls, which you know can only be coloured red or blue, and you adopt a maximum entropy prior over the possible ball colourings, then if you randomly drew 99 balls from the bag and they were all red, you'd still assign only a 50/50 probability to the 100th ball being red. This is because in the maximum entropy prior, the ball colourings are independent. But this feels wrong in this context. I'd want to put the probability of the 100th ball being red much higher.
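The 50/50 comes straight from counting: only two of the 2^100 equally likely colourings are consistent with the first 99 draws being red, and exactly one of those has the 100th ball red.

```latex
P(\text{ball 100 red} \mid \text{first 99 red})
  = \frac{1/2^{100}}{2/2^{100}}
  = \frac{1}{2}
```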

What is the reasoning behind the "anthropic shadow" effect?

Thank you for your answer!


I think I agree that there is a difference between the extinction example and the coin example, to do with observer bias, which seems important. I'm still not sure how to articulate this difference properly though, or why it should make the conclusion different. It is true that you have perfect knowledge of Q, N, and the final state marker in the coin example, but you have the same knowledge in the (idealized) extinction scenario that I described as well: there I supposed that we knew Q, N, and the fact that we haven't yet gone extinct (which is the analogue of a blue marker).


The real difference I suppose is that in the extinction scenario we could never have seen the analogue of the red marker, because we would never have existed if that had been the outcome. But why does this change anything?


I think you're right that we could modify the coin example to make it closer to the extinction example, by introducing amnesia, or even just saying that you are killed if both coins ever land heads together. But to sum up why I started talking about a coin example with no observer selection effects present:


In the absence of a complete consistent formalism for dealing with observer effects, the argument of the 'anthropic shadow' paper still appears to carry some force, when it says that the naive estimates of observers will be underestimates on average, and that therefore, as observers, we should revise our naive estimates up by an appropriate amount. However, an argument with identical structure gives the wrong answer in the coin example, where everything is understood and we can clearly see what the right answer actually is. The naive estimates of people who see blue will be underestimates on average, but that does not mean, in this case, that if we see blue we should revise our naive estimates up. In this case the naive estimate is the correct bayesian one. This should cast doubt on arguments which take this form, including the anthropic shadow argument, unless we can properly explain why they apply in one case but not the other, and that's what I am uncertain how to do.
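To make that concrete, here is a rough Monte Carlo sketch of the kind of toy model I have in mind (the particular numbers, P_TRUE = 0.2, Q = 0.5, N = 20, and all the variable names are made up for illustration; this is my reconstruction, not the exact example from the earlier thread):

```python
import random

# A toy model of the coin / extinction set-up (my own reconstruction, with
# made-up numbers): in each of N rounds a "risky event" occurs with unknown
# probability P_TRUE; each risky event causes extinction with known
# probability Q; "blue" means we survived all N rounds.  The naive estimate
# of P is just the observed frequency of risky events.

random.seed(0)
P_TRUE, Q, N = 0.2, 0.5, 20
TRIALS = 200_000

naive_all, naive_blue = [], []
for _ in range(TRIALS):
    events = [random.random() < P_TRUE for _ in range(N)]
    survived = all((not e) or (random.random() >= Q) for e in events)
    estimate = sum(events) / N
    naive_all.append(estimate)
    if survived:
        naive_blue.append(estimate)

print(sum(naive_all) / len(naive_all))    # ~0.20: unbiased across all histories
print(sum(naive_blue) / len(naive_blue))  # below 0.20: survivors underestimate
                                          # P on average
```

The second number really is biased low, so the "underestimates on average" premise holds. But in this toy model the probability of surviving, given a record containing k risky events, is (1 - Q)^k whatever P is, so conditioning on survival on top of the record adds no information about P. The estimate you'd make from the record alone is already the right one for a blue observer, which, if this reconstruction is faithful, is exactly the structure of the coin example.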


Thank you for sharing the Nature paper. I will check it out!