Note: I am especially grateful to David Manley for discussing various background issues with me, and to Natasha Oughton and Elliott Thornley for research assistance.
Lewis’s Principal Principle says that one should usually align one’s credences with the known chances. In this paper I develop a version of the Principal Principle that deals well with some exceptional cases related to the distinction between metaphysical and epistemic modality. I explain how this principle gives a unified account of the Sleeping Beauty problem and chance-based principles of anthropic reasoning. In doing so, I defuse the Doomsday Argument that the end of the world is likely to be nigh.
It’s often the case that one should align one’s credences with what one knows of the objective chances. Lewis (1980) calls this the Principal Principle. For example, it is often the case that if one knows that a fair coin has been tossed, then one should have credence 1/2 that heads came up. The standard caveat—the reason for the ‘often’—is that one sometimes knows too much to simply defer to the chances. A trivial example: once one sees that the coin has landed tails, one should no longer have credence 1/2 in heads. In such cases, one has what Lewis calls ‘inadmissible evidence’.
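As a rough gloss of the principle just described, the Principal Principle is standardly written schematically as follows (the notation below is a common reconstruction of Lewis (1980), not drawn from this paper):

```latex
% Schematic statement (assumed notation, not from this paper):
% C is a rational prior credence function, A is any proposition,
% <Ch_t(A) = x> is the proposition that the chance at time t of A is x,
% and E is any admissible evidence.
C\bigl(A \,\big|\, \langle \mathrm{Ch}_t(A) = x \rangle \wedge E\bigr) = x
```

Inadmissible evidence (such as having seen the coin land) is exactly what falls outside the scope of E, and so outside the scope of the principle.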
In this paper, I develop a version of the Principal Principle that handles two subtler kinds of exceptions, both related to the distinction between epistemic and metaphysical modality. The first arises because one can know some contingent truths a priori. The second is related to the fact that even an ideal thinker may be ignorant of certain necessary truths—in particular, one may not know who one is.
The second type of case is my main focus, and I will illustrate it with two well-known examples: the Sleeping Beauty puzzle (Elga, 2000) and the Doomsday Argument (Leslie, 1992). My version of the Principal Principle, labelled simply PP, yields standard views about both these cases: it yields the thirder solution to Sleeping Beauty, and denies that Doomsday is especially close at hand. These conclusions are well represented in the literature; my contribution is to present them as an attractive package deal, following from a single principle about the conceptual role of objective chance. The Doomsday Argument, in particular, is usually analysed in quite different terms, using anthropic principles like the Strong Self-Sampling Assumption (which is used in the Doomsday Argument itself) and the Self-Indication Assumption (which is used to resist it). I will explain how PP leads to chance-based versions of these assumptions, unified in a principle I call Proportionality. I will especially urge the merits of Proportionality over the Self-Indication Assumption.
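One standard motivation for the thirder solution is frequentist: if the Sleeping Beauty protocol is repeated many times, one third of all awakenings follow a heads toss. The sketch below (my illustration, not from the paper) simulates this, assuming the usual protocol in which heads yields one awakening and tails yields two:

```python
import random

def awakening_frequency(trials=100_000, seed=0):
    """Simulate repeated runs of the Sleeping Beauty protocol and
    return the fraction of awakenings that occur after a heads toss.

    Protocol (standard version): heads -> Beauty is woken once;
    tails -> she is woken twice, with no memory between awakenings.
    """
    rng = random.Random(seed)
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(trials):
        if rng.random() < 0.5:   # coin lands heads
            heads_awakenings += 1
            total_awakenings += 1
        else:                    # coin lands tails
            total_awakenings += 2
    return heads_awakenings / total_awakenings

print(awakening_frequency())  # close to 1/3
```

The long-run frequency of heads-awakenings converges to 1/3, which is the credence the thirder solution assigns; of course, the philosophical dispute is precisely over whether this frequency is the right guide for Beauty's credence.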
In §2, I introduce the existing version of the Principal Principle that will be my starting place. In §3, I explain the problem that arises from a priori contingencies, and suggest a preliminary solution. In §4, I explain why this preliminary solution is unsatisfactory: it applies only in the very unusual circumstance that one has no self-locating information. I then state my preferred principle, PP, and show how it handles Sleeping Beauty and Doomsday. In §5, I state the principle of Proportionality, which follows from PP, and compare it to the standard anthropic principles. (The proof of the main result is in the appendix.) Then, in §6, I briefly consider what my chance-based principles suggest about reasoning based simply on a priori likelihood, rather than chance. Section 7 sums up and points out one remaining difficulty for my theory.
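For readers unfamiliar with the Doomsday Argument that PP is meant to defuse, the Bayesian shift it relies on can be sketched with hypothetical numbers (my illustration; the figures and function are not from the paper). Under the Self-Sampling Assumption, one treats one's birth rank as uniformly drawn from all humans who will ever live, which makes a small total population far more likely given an early rank:

```python
def doomsday_posterior(prior_small, n_small, n_large, rank):
    """Posterior probability of the smaller total human population,
    given one's birth rank, treating rank as uniform over 1..N
    (the Self-Sampling Assumption). All numbers are hypothetical."""
    like_small = (1 / n_small) if rank <= n_small else 0.0
    like_large = (1 / n_large) if rank <= n_large else 0.0
    num_small = prior_small * like_small
    num_large = (1 - prior_small) * like_large
    return num_small / (num_small + num_large)

# A 50/50 prior over 200 billion vs 2,000 billion total humans, with a
# birth rank of roughly 100 billion, shifts sharply toward "doom soon":
print(round(doomsday_posterior(0.5, 200e9, 2000e9, 100e9), 3))  # 0.909
```

This shift toward the smaller population is the Doomsday Argument's core move; the paper's claim is that PP, via Proportionality, blocks the conclusion that doom is especially near.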
Along the way, I will use the framework of epistemic two-dimensionalism (Chalmers, 2004) to model the connection between epistemic and metaphysical modality. I won’t be defending epistemic two-dimensionalism in this paper, but it does conveniently represent the phenomena with which I am concerned. My hope is that critics of two-dimensionalism can find equivalent (or better!) things to say in their own frameworks.
This paper is mainly a project in Bayesian epistemology, and I’ll speak throughout about what one ‘knows’ as a shorthand for what evidence one has in the sense relevant to Bayesian conditionalization. This is a natural way of speaking, but nothing turns on the identification of evidence with knowledge.