SIA > SSA, part 1: Learning from the fact that you exist

by Joe Carlsmith · 24 min read · 1st Oct 2021


(Cross-posted from Hands and Cities)

This post is the first in a four-part sequence explaining why I think that one prominent approach to anthropic reasoning is better than another. Consider:

God’s extreme coin toss: You wake up alone in a white room. There’s a message written on the wall: “I, God, tossed a fair coin. If it came up heads, I created one person in a room like this. If it came up tails, I created a million people, also in rooms like this.” What should your credence be that the coin landed heads?

The approach I like better — the “Self Indication Assumption” (SIA) — says: ~one in a million. SIA thinks you’re more likely to exist in worlds with more people in your epistemic situation. Here, this is the tails-world by far.

The approach I like worse — the “Self-Sampling Assumption” (SSA) — says: one half. SSA thinks you’re more likely to exist in worlds where the people in your epistemic situation are a larger fraction of the people in your “reference class.” Don’t ask me what a reference class is, but in this case, let’s assume that the people in your epistemic situation are 100% of it either way. So SSA sticks with the one half prior. 
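To make the quantitative difference concrete, here is a minimal sketch (in Python; the variable names and setup are my illustration, not standard notation) of how each rule updates the 1:1 prior in this case:

```python
# God's extreme coin toss: heads -> 1 person, tails -> 1,000,000 people,
# everyone in rooms like yours (i.e., in your epistemic situation).
prior = {"heads": 0.5, "tails": 0.5}
n_situation = {"heads": 1, "tails": 10**6}  # people in your epistemic situation
n_refclass = {"heads": 1, "tails": 10**6}   # reference class (assumed: the same people)

def normalize(weights):
    total = sum(weights.values())
    return {w: v / total for w, v in weights.items()}

# SIA: scale the prior by the number of people in your epistemic situation.
sia = normalize({w: prior[w] * n_situation[w] for w in prior})

# SSA: scale the prior by the fraction of the reference class in your
# epistemic situation -- here 100% either way, so no update at all.
ssa = normalize({w: prior[w] * n_situation[w] / n_refclass[w] for w in prior})

print(sia["heads"])   # ~1e-6: SIA's "~one in a million"
print(ssa["heads"])   # 0.5: SSA sticks with the prior
```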

I open with this case because it’s one of the worst for SIA, the approach I favor. In particular, we can construct scientific analogs, in which SIA becomes ludicrously confident in a given cosmology, simply in virtue of that cosmology positing more people in our epistemic situation. For many, this implication (known as the “Presumptuous Philosopher”) is a ~decisive objection to SIA. 

But I think that the objections to SSA are stronger, and that in the absence of an alternative approach superior to both SSA and SIA (“Anthropic Theory X”), the Presumptuous Philosopher is a bullet we should consider biting. 

I proceed as follows. The first part of the sequence (“Learning from the fact that you exist”) describes SIA and SSA. In particular, I emphasize that pace some presentations in the literature, SIA should not be seen as an additional assumption you add to SSA — one that “cancels out” SSA’s bad implications, but accepts SSA’s worldview.  Rather, SIA is a different (and more attractive) picture altogether.

The second part (“Telekinesis, reference classes, and other scandals”) lays out the bulk of my case against SSA. In particular, SSA implies: 

The third part (“An aside on betting in anthropics”) briefly discusses betting in anthropics. In particular: why it’s so gnarly, why I’m not focusing on it, and why I don’t think it’s the only desideratum. 

The fourth part (“In defense of the presumptuous philosopher”) discusses prominent objections to SIA in more detail. In particular: 

That said, even if SSA is worse than SIA, it’s not like SIA is sitting pretty (I especially don’t like how it breaks in infinite cases, and there are presumably many other objections I’m not considering). I briefly discuss whether we should expect to find a better alternative — the “Anthropic Theory X” above. My current answer is: maybe (and maybe it’s already out there), but Anthropic Theory X should probably keep SIA’s good implications (like “thirding” in Sleeping Beauty). And the good implications seem closely tied to (some of) the bad. 

I close by quickly mentioning some of SIA’s possible implications in the real world (for example, re: doomsday arguments). I think we should tread carefully, here, but stay curious.

Acknowledgments: This sequence owes a huge amount to discussion with Katja Grace, and to her work on anthropics (see summary here, her honors thesis here, and the many links throughout the sequence). My thanks, as well, to Amanda Askell, Nick Beckstead, Paul Christiano, Tom Davidson, Carl Shulman, Bastian Stern, and Ben Weinstein-Raun for discussion. 

I. Surprised I Am and ASS-backwards

Cases like God’s extreme coin toss involve reasoning about hypotheses that specify both an objective world (e.g., a heads world with one person, or a tails world with a million), and a “location” of the “self” within that world (e.g., in the tails world, the “self” could be the person in the first room, the second room, etc). Call hypotheses of this form “centered worlds.” The question is how to assign probabilities both to objective worlds and centered worlds, granted (a) some prior over objective worlds, (b) knowledge that you exist, and (c) your other knowledge about your situation. I’ll call this broad topic “anthropics,” though others might define the term differently.

A classic reference here is Bostrom (2002), which I’ll be focusing on a lot — it’s where I’ve spent most of my time. I’m going to be disagreeing with Bostrom quite a bit in this sequence, but I want to say up front that I think his book is great, and that it clarifies a lot of stuff. In fact, this whole sequence is very much “living in the world that Bostrom built,” and a lot of the points I’m going to make are made by Bostrom himself — it’s just that I’m making them with much more of a “this is why Bostrom’s view is untenable” flavor. 

SIA and SSA are two prominent approaches to anthropic reasoning (Bostrom favors a version of SSA, and dismisses SIA in a few short pages). Unfortunately, the names and standard glosses of these principles seem almost optimized for obscurity, and for many years, casual exposure left me unable to consistently remember which was which, or what they really meant. Katja Grace once suggested to me that partisans of SIA remember it as “Surprised I Am” (e.g., the view that updates on your own existence) and SSA as “ASS-backward” (e.g., the bad view). Another option would be to rename them entirely, but I won’t attempt that here. For those familiar with the Sleeping Beauty problem, though, you can think of SIA as “thirding,” and SSA as “halfing” — at least to a first approximation.

(Note: Bostrom presents SIA as an assumption you can add to SSA, yielding “SSA + SIA.” This formulation ends up equivalent to my own, but I think it’s worse, and I explain why in section IV. For now, I’ll treat them as distinct and competing theories.)

How do SIA and SSA approach cases like God’s extreme coin toss? Quantitatively: SIA updates the “prior” in proportion to the number of people in your epistemic situation in each objective world. SSA updates it in proportion to the fraction of the people in your reference class, in that world, who are in your epistemic situation. Then they both apportion their new credence on each objective world equally amongst the centered worlds (e.g., the hypotheses about “who you are”) compatible with that objective world (e.g., among the people in that world you might be).

To see how this works, consider the following case: 

God’s coin toss with equal numbers: God tosses a fair coin, and creates ten people in white rooms either way. If heads, he gives one person a red jacket, and the rest, blue jackets. If tails, he gives everyone red jackets. You wake up and see that you have a red jacket. What should your credence be on heads?

Here, both SSA and SIA give the same verdict, but for different reasons. SIA reasons: “Well, my prior is 1:1. But on tails, there are 10x the number of people in my epistemic situation — e.g., red-jacketed people. So I update 10:1 in favor of tails. So, 1/11th on heads.” 

SSA, by contrast, reasons: “Well, my prior is 1:1. But on heads, the people in my epistemic situation are a smaller fraction of the reference class. In particular, on heads, the red-jacketed people are 1/10, but on tails, they’re 10/10, assuming that we don’t include God (note from Joe: this is the type of “assuming X about the reference class” that you have to say all the time if you’re SSA). Thus, I update the prior 10:1 in favor of tails. So, 1/11th on heads.”

Having made this update about the objective world, SIA and SSA then both think of themselves as 1/11th likely to be each of the red-jacketed people.
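Numerically, both routes to 1/11th can be checked with a short sketch (the n-scaling and fraction-scaling mirror the verbal reasoning above; the names are mine):

```python
prior = {"heads": 0.5, "tails": 0.5}
red_jackets = {"heads": 1, "tails": 10}  # people in your epistemic situation
everyone = {"heads": 10, "tails": 10}    # assumed reference class: all ten people

def normalize(weights):
    total = sum(weights.values())
    return {w: v / total for w, v in weights.items()}

# SIA: scale the prior by the number of red-jacketed people.
sia = normalize({w: prior[w] * red_jackets[w] for w in prior})

# SSA: scale the prior by the fraction of the reference class with
# red jackets (1/10 on heads vs 10/10 on tails).
ssa = normalize({w: prior[w] * red_jackets[w] / everyone[w] for w in prior})

print(sia["heads"], ssa["heads"])  # ~0.0909 (i.e., 1/11) either way
```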

This case is useful to keep in mind, because it’s a kind of “square one” for anthropics. In particular, it helps answer the question: “Wait, why are we updating the prior at all? Why play this game to begin with?” A key answer is: if you don’t update the prior, and instead skip straight to apportioning your prior credence amongst the red-jacketed people in each world, you say silly things about this case. Thus, you reason: “Well, 50% on heads. So 50% that I’m the one red-jacketed heads-world person. And 50% on tails, so 5%, for each of the tails-world people, that I’m them.” But notice: you’ve failed to learn the right thing from your red jacket. In particular, you’ve failed to learn that the coin probably landed tails. 

To illustrate why you need to learn this, suppose you haven’t yet seen your jacket. Then, surely, you should be 50-50, and split your credence equally amongst all the people in each world. Then suppose you see that your jacket is red. This observation was much more likely conditional on tails rather than heads. Thus, it seems like basic Bayesianism to update. 
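That update is just Bayes’ rule applied to the jacket observation. A quick check, in exact arithmetic (names mine):

```python
from fractions import Fraction

p_heads = Fraction(1, 2)
p_red_given_heads = Fraction(1, 10)  # on heads, 1 of 10 people gets a red jacket
p_red_given_tails = Fraction(1, 1)   # on tails, everyone does

# Bayes: P(heads | red) = P(red | heads) P(heads) / P(red)
posterior_heads = (p_red_given_heads * p_heads) / (
    p_red_given_heads * p_heads + p_red_given_tails * (1 - p_heads)
)
print(posterior_heads)  # 1/11
```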

(Bostrom actually ends up endorsing a version of SSA that fails to make this update — but that’s not to its credit. I discuss this in part 2, section VIII.)

II. Storytelling

SIA and SSA both get this “square one” right; but they differ in their verdicts in other cases. Before getting to those cases, though, can we say anything about what SIA and SSA are doing on a qualitative level? What is the “story” or “conception of the world” motivating these theories, and their differences? 

It’s actually pretty unclear in both cases. But, here’s a shot at story-telling, which will hopefully illustrate how I, at least, tend to think about these views.

SIA treats you as a specific possible person-in-your-epistemic-situation, who might or might not have existed, even conditional on there being someone in that situation. And it thinks of worlds as “pulling” some number of people-in-your-epistemic-situation from the “hat” of the platonic realm. That is, and put fancifully: before you were created with a red jacket in a white room, God said to himself “I need to create X number of people with red jackets in white rooms.” He then reached into the platonic realm and groped around blindly in the area labeled “people with red jackets in white rooms.” You were there, in your red jacket, huddled together with some untold number of other red-jacketed souls (a number large enough, indeed, that God can draw as many people as he wants out, without altering the probability that he draws you). But yet, by the most (infinitely?) ridiculous luck, God’s great fingers wrapped around your ghostly non-body. You got pulled, as the other red-jacketed souls looked on in awe and horror and jealousy and relief. Thus, you found yourself alive. It was, indeed, quite a lottery-win. But importantly, it was more likely in worlds where God reached in more times. Or at least, that’s the idea. (Notably, if the space of red-jacketed-white-roomed-people is infinite, then the probability that you get pulled by a finite world is zero, however finitely-many the pulls. And yes, SIA does imply certainty that you live in an infinite world. And yes, this is indeed a problem. See discussion in part 4, section XIV.)

To be clear: I don’t especially like this story. And we can look for others, perhaps less exotic. Thus, for example, we can also think of SIA as treating you as a random sample from the people-in-your-epistemic-situation who might exist, weighted by the probability that they do exist. I discuss this conception more in part 4, section XV. However, I think it may run into instabilities, so I tend to stick with the story above.

Let’s turn to SSA’s story. Or at least, SSA’s story as I tend to tell it. It’s not a neutral rendition.

Like SIA, SSA learns something from the fact that you exist. In particular, SSA learns that you would’ve necessarily existed in any world that you can’t currently rule out — e.g., any world with anyone in your epistemic situation. That is, granted that you do exist, SSA assumes that if God were going to create any world compatible with your current evidence, then He would have “gone looking for you” in the hat of possible people, then “inserted you” into that world — regardless of how many people it contains. He was, apparently, hell-bent on creating you, come what may, in all of the worlds you haven’t yet figured out don’t contain you — after all, you exist. It’s a strange sort of relationship you have, you and God.

(Here I think the SSA-er says: “no, it’s not like that. Rather, it’s that given that I exist, then if any of those other worlds are real, then it’s the case that I exist in those worlds. So I am licensed, in reasoning about which possible worlds are actual, in assuming that I get created in all of them.” I discuss the dialectic here in a bit more detail in part 2, section X.)

Importantly, though, on SSA, when God creates you and inserts you into the world, he does so in a particular way: namely, he makes you a random member of some “reference class” other than the people in your epistemic situation. What sort of reference class? No one knows. It’s entirely made up. (I’ll return to this problem later.) Still, on SSA, that’s how God operates: he picks some set of people who in some sense “you could have been” — even though for some of them, you often know you aren’t — and then makes one of them, at random, you.

Bostrom is at pains to emphasize that SSA doesn’t involve positing any actual physical mechanism — akin to a time-traveling stork — for randomly distributing souls across members of the reference class. Rather, SSA is just a way of assigning credences. That said, we might wonder what would make such a way of assigning credences track the truth, absent such a mechanism — and I don’t remember Bostrom offering an account. We can ask a similar question about SIA, though, and the “hat of possible people” story I offered above isn’t exactly an “oh of course no problems with that one.”

To see where the reference class bit of SSA starts to make an important difference, consider this variation on God’s coin toss with equal numbers:

God’s coin toss with chimpanzees: God tosses a fair coin. If heads, he creates one person in a white room, and nine chimpanzees in the jungle. If tails, he creates ten people in white rooms. You wake up in a white room. What should your credence be on heads?

Here, SIA reasons as it did in the original case, when people in blue jackets were in the role of the chimps. Thus, and using the language of the “story” above: “On tails, there are 10x the number of people in my epistemic situation, and so 10x the number of ‘draws’ from the hat of the platonic realm, and so 10x the chance of drawing me. Thus, I update 10:1 in favor of tails: 1/11th on heads.”

SSA, though, to its great discredit, gives different answers depending on whether you count chimpanzees in the jungle as in your reference class or not. Thus, and using the language of the story above, it reasons: “Well, I know I exist, and I can’t yet rule out heads or tails. So, regardless of whether the coin landed heads vs. tails, I was going to exist. (This is where SIA says: what? That’s wrong.) What’s more, if heads, then I was randomly inserted into a reference class of nine chimps in the jungle, and one human in a white room. Thus, on heads, it would have been only 10% likely that I find myself in my epistemic situation; I would have expected to be a chimp instead. By contrast, on tails, I was randomly inserted into a reference class consisting entirely of humans in white rooms, so it would have been 100% likely that I find myself in my epistemic situation. So I update 10:1 in favor of tails: 1/11th on heads.”

By contrast, if SSA doesn’t count chimps in the jungle as in your reference class, then it reasons as before: “It’s 100%, on either heads or tails, that I’d find myself a human in a white room, so I don’t update at all: 50%.” Thus, whether you “could have been a chimp,” in the sense relevant to the reference class, ends up a crucial question. And the same will be true, in other cases, of whether you could have been a bacterium, an ant, a genetically engineered post-human, a brain emulation, a nano-bot, a paperclipping AI, a grabby alien, and so on. Indeed, as I’ll discuss below in the context of the “Doomsday Argument,” on SSA, the very future of humanity plausibly hinges on such questions.
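The reference-class dependence is easy to see numerically. In this sketch (treating the reference class as an explicit parameter is my framing), SSA’s answer swings from 1/11th to 1/2 depending on whether the chimps count:

```python
prior = {"heads": 0.5, "tails": 0.5}
humans_in_rooms = {"heads": 1, "tails": 10}  # people in your epistemic situation

def ssa_posterior_heads(refclass_size):
    # SSA: scale the prior by (people in your situation) / (reference class size).
    weights = {w: prior[w] * humans_in_rooms[w] / refclass_size[w] for w in prior}
    return weights["heads"] / sum(weights.values())

# Chimps in the reference class: on heads, r = 1 human + 9 chimps = 10.
print(ssa_posterior_heads({"heads": 10, "tails": 10}))  # 1/11

# Chimps excluded: r = 1 on heads, so n/r = 1 either way -- no update.
print(ssa_posterior_heads({"heads": 1, "tails": 10}))   # 0.5
```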

(Note that the “could have” here need not be the “could” of metaphysical possibility. But somehow, on SSA, the reference class needs to be such as to license surprise, conditional on heads and chimps-in-the-reference-class, that you find yourself a human — and if you “couldn’t have been a chimpanzee,” it’s unclear why you’d be surprised that you’re not one. Regardless, I’ll continue to use “could have been a chimpanzee” in whatever sense is required to justify such surprise — I’m happy for the sense to be minimal.)

This chimp case may be the earliest and simplest result where I basically just get off the boat with SSA. I take one look at those chimps, and the question of whether they’re in the reference class, and I feel like: “I’m out.” But I don’t necessarily expect others to feel the same way, and there’s much more to say on either side regardless.

III. Can’t we just use the minimal reference class?

Perhaps you’re wondering, for example: can’t SSA just use the simple and attractive reference class of “people in my epistemic situation” (call this the “minimal” reference class)? No, it can’t, because then it loses the ability to update on the number of people in your epistemic situation at all, since the percentage of observers in your reference class who are in your epistemic situation will always be 100%. Thus, with a red jacket in God’s coin toss with equal numbers above, it ends up at 50% on heads, and 50% on tails — even though on heads, only one person out of ten had a red jacket, but on tails, everyone did. In this sense, it starts reasoning like the “heads is always 50% no matter what I’ve learned about my jacket color” person  — and it falls afoul of basic Bayesianism in the same way. 

Indeed, a central problem motivating Bostrom is that he thinks that if you can’t make updates like favoring tails in cases like God’s coin toss with equal numbers, then you can’t do science given the possibility of “big worlds” — that is, worlds where, for any given observation, there is some observer (for example, a Boltzmann brain) who makes it, even if it is false. In comparing big world hypotheses, Bostrom thinks, we need to be able to favor the worlds in which a larger fraction of observers in the relevant reference class make the observation in question — but the minimal reference class makes this impossible. That said, I haven’t thought very much about Bostrom’s “science in big worlds” considerations, and I don’t think the argument against SSA-with-the-minimal-reference-class hinges on them. Regardless of the situation with Boltzmann brains, we should have the resources to favor tails in the “square one” case.

Note how elegantly SIA gets around this problem. SIA honors the “minimal reference class” intuition that what matters here is people in your epistemic situation, and that focusing attention elsewhere is arbitrary. But those people don’t need to be some “fraction” of some larger (and hence more arbitrary) set, in order for their numbers given tails vs. heads to provide information. Rather, the bare fact that there are more people in your epistemic situation given tails vs. heads is enough.

SSA, though, seems stuck with some sort of non-minimal reference class. Exactly how non-minimal is a further question — one that I’ll return to in part 2, section VII.

IV. Better and worse ways to understand SIA (or: how to actually stop using reference classes)

I want to pause here to distinguish between the version of SIA I just presented, and a version often presented in the literature — a version I consider less attractive, even though extensionally equivalent.

A bit of notation will be helpful. Let’s call n the number of people in your epistemic situation, in a given objective world. And let’s call r the number of people in your reference class, in that world. As I presented it, SIA updates the prior over objective worlds in proportion to n. SSA updates it in proportion to n/r.

Now consider a different theory, which I’ll call “Reference-class-SIA” (or R-SIA) and which corresponds more closely to one type of presentation in the literature. Like SSA, R-SIA thinks of you as a member of some reference class. But it also thinks that you are more likely to exist if more members of your reference class exist. That is, it imagines that God populates the reference class with souls, by pulling them out of the possible-people-in-that-reference-class hat, then throwing them randomly into the bodies of reference class people. And since you are in that hat, more people in the reference class means more chances for you to get pulled. Thus, unlike SIA as presented above, which scales the prior in proportion to n, R-SIA scales the prior in proportion to r.

If you combine R-SIA with SSA, you get SIA as I presented it above. That is, if you first scale in proportion to r, and then in proportion to n/r, the r cancels out, and n is the only thing that matters. Thus, tacking R-SIA onto SSA is sometimes said to “eliminate” the problematic dependence on the reference class that SSA otherwise implies: whatever reference class you choose, you get the same answer. And it is also said to “exactly cancel” some of SSA’s other counterintuitive implications, like the doomsday argument (discussed below). The image, here, is of what I’ll call an “inflate-and-claw-back” dynamic: that is, first you inflate your credence on worlds with many people in your reference class, via R-SIA, and then you claw it back in proportion to the fraction of those people who are in your epistemic situation, via SSA. And after doing this extravagant dance, you’re left with good ol’ n (SIA).
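The cancellation is easy to verify mechanically: whatever r you pick, scaling by r and then by n/r is the same as scaling by n alone. A sketch, using the equal-numbers case again (drawing random reference-class sizes is my illustration):

```python
import random

prior = {"heads": 0.5, "tails": 0.5}
n = {"heads": 1, "tails": 10}  # people in your epistemic situation

def normalize(weights):
    total = sum(weights.values())
    return {w: v / total for w, v in weights.items()}

sia = normalize({w: prior[w] * n[w] for w in prior})  # plain SIA: scale by n

for _ in range(100):
    # R-SIA + SSA: inflate by r, then claw back by n/r -- r drops out regardless.
    r = {w: random.randint(n[w], 10**6) for w in prior}
    combined = normalize({w: prior[w] * r[w] * (n[w] / r[w]) for w in prior})
    assert all(abs(combined[w] - sia[w]) < 1e-9 for w in prior)

print(sia["heads"])  # 1/11, with no reference class in sight
```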

But I think this framing undersells SIA’s appeal. The appeal of SIA with respect to reference classes isn’t that “you can pick whatever reference class you want.” It’s that you don’t have to think in terms of made-up reference classes at all. Rather, you can just think entirely in terms of “people in your epistemic situation” — that is, in terms of n. Somehow, R-SIA + SSA feels to me like it’s ceding too much ground to SSA’s narrative. It’s living too much in SSA’s weird, reference-classes-are-somehow-a-fundamental-thing-even-though-no-one-has-any-account-of-them world. It’s trying to patch SSA with some extra band-aid, rather than rejecting it entirely.

Similarly, the appeal of SIA with respect to SSA’s counterintuitive implications isn’t that it adds just the right additional ridiculous update to counteract SSA’s other ridiculous update. It’s not that SIA lunges a million miles left, to cleverly (luckily? trickily?) balance out SSA’s lunging a million miles right. Rather, the appeal is that (at least in doomsday-like cases) SIA doesn’t lunge at all. It just stays put, at home, where you always wanted to be. In this sense, SIA as I presented it above feels to me “simpler” than R-SIA + SSA — and I think the simple version captures better what reasoning using SIA actually feels like.

What’s more, thinking in terms of R-SIA leads people to attach slogans to SIA that it doesn’t strictly imply in practice. In particular, my sense is that people think of SIA as the view that favors worlds with more “observers” — and if you’re using R-SIA with the reference class “observers,” this is indeed a natural gloss. But SIA as I presented it above doesn’t actually care about observers per se (and neither does R-SIA, once you tack on SSA as well). Rather, it only cares about observers in your epistemic situation. You can try to sell me on a hypothesis that contains a zillion extra observers wearing blue jackets; but if I am wearing a red jacket, then on SIA, this feature of the hypothesis leaves me cold (though if it implies something about the number of red-jacketed people as well, or the number of people who could, for all I know, have been given red jackets, that’s a different story). The same holds for bug-eyed aliens, chimps in the jungle, paper-clipping superintelligences, civilizations like our own on planets we can tell that we’re not on, and all the rest of the cosmic zoo. SIA doesn’t like observers; it likes uncertainty about “who/where I am.” And we already know lots of stuff about ourselves.

That said, this consideration only goes so far. In particular, if you don’t know anything about yourself except that you’re an observer, then SIA does indeed like observers per se; and if you “forget” everything about yourself, then on SIA your credence in lots-of-observers-per-se worlds does indeed inflate. And more generally, the number of observers-per-se may correlate strongly with the number of observers in your epistemic situation, and/or the ones that could, for all you know, be in your epistemic situation, and hence be you (added 10/2: I say a bit more about the distinction between "people in your epistemic situation" and "people who, for all you know about a given objective world, might be in your epistemic situation" here). Ultimately, though, it’s the people-you-could-actually-be that SIA is really after.

Leaving R-SIA + SSA to the side, then, I’ll focus on comparing SIA and SSA. Which theory is better?

Note that I say “better,” not “true” or “best.” These aren’t the only approaches to anthropics, and given the various weird implications and uncertainties I’m about to discuss, it seems plausible that the true/best theory (is there a “true theory” of what your credence “should be”?) lies elsewhere (see discussion in part 4, section XVI). Indeed, there’s a whole literature on anthropics out there, which I haven’t attempted to survey. Rather, I’m sticking to a comparison between two basic, prominent, first-pass views.

Indeed, really I’d prefer to ask a narrower question about these views. Not “which is better?”, but “which is better mostly in light of the considerations discussed in Bostrom (2002), plus a few other considerations that Joe encountered while writing this blog post?”. That is, I’m not, here, really attempting to exhaustively canvass all the relevant arguments and counterarguments (though I’m interested, readers, to hear which of the arguments I don’t include you find most persuasive). Rather, I’m trying to report my (admittedly sometimes strong) inclinations after looking into the topic a bit and thinking about it. 

All that said: SIA currently seems better to me. Part 2 and Part 4 of this sequence explain why. (Part 3 is a bit of an interlude.)

(Next post in this sequence: SIA > SSA, part 2: Telekinesis, reference classes, and other scandals)


2 comments

Can you explain what you mean by "people in your epistemic situation"? Do you intend it to be people who have all the information currently available to you? Or do you sometimes need to abstract away from some information that you have (e.g. specific details about yourself)?

It’s a good question, and one I considered going into in more detail in the post (I'll add a link to this comment). I think it’s helpful to have in mind two types of people: “people who see the exact same evidence you do” (e.g., they look down on the same patterns of wrinkles on your hands, the same exact fading on the jeans they’re wearing, etc) and “people who might, for all you know about a given objective world, see the exact same evidence you do” (an example here would be “the person in room 2”). By “people in your epistemic situation,” I mean the former. The latter I think of as actually a disguised set of objective worlds, which posit different locations (and numbers) of the former-type people. But SIA, importantly, likes them both (though on my gloss, liking the former is more fundamental).

Here are some cases to illustrate. Suppose that God creates either one person in room 1 (if heads) or two people (if tails) in rooms 1 and 2. And suppose that there are two types of people: “Alices” and “Bobs.” Let’s say that any given Alice sees the exact same evidence as the other Alices (the same wrinkles, faded jeans, etc), and that the same holds for Bobs, and that if you’re an Alice or a Bob, you know it. Now consider three cases: 

  1. For each person God creates, he flips a second coin. If it’s heads, he creates an Alice. If tails, a Bob. 
  2. God flips a second coin. If it’s heads, he makes the person in room 1 Alice; if tails, Bob. But if the first coin was tails and he needs to create a second person, he makes that person different from the first. Thus, if tails-heads, it’s an Alice in room 1, and a Bob in room 2. But if it’s tails-tails, then it’s a Bob in room 1, and an Alice in room 2. (I talk about this case in part 4, XV.)
  3. God creates all Alices no matter what. 

Let’s write people’s names with “A” or “B,” in order of room number. And let’s say you wake up as an Alice. 

  • In case one, “coin 1 heads” (I’ll write the coin-1 results in parentheses) corresponds to two objective worlds — A, and B — each with 1/4 prior probability. Coin 1 tails corresponds to four objective worlds — AA, AB, BA, and BB — each with 1/8th prior probability. So as Alice, you start by crossing off B and BB, because there are no Alices. So you’re left with 1/4 on A, and 1/8th on each of AA, AB, and BA, so an overall odds-ratio of 2:1:1:1. But now, as SIA, you scale the prior in proportion to the number of Alices there are, so AA gets double weight. Now you’re 2:2:1:1. Thus, you end up with 1/3rd on A, 1/3 on AA (with 1/6th on each of the corresponding centered worlds), and 1/6th on each of AB and BA. And you’re a “thirder” overall. 
  • Now let’s look at case two. Here, the prior is 1/4 on A, 1/4 on B, 1/4 on AB, and 1/4 on BA. So SIA doesn’t actually do any scaling of the prior: there’s a maximum of one A in each world. Rather, it crosses off B, and ends up with 1/3rd on anything else, and stays a “thirder” overall. 
  • Case three is just Sleeping Beauty: SIA scales in proportion to the number of Alices, and ends up a thirder overall. 

So in each of these cases, SIA gives the same result, even though the distribution of Alices is in some sense pretty different. And notice, we can redescribe case 1 and 2 in terms of SIA liking “people who, for all you know about a given objective world, might be an Alice” instead of in terms of SIA liking Alices. E.g., in both cases, there are twice as many such people on tails. But importantly, their probability of being an Alice isn’t correlated with coin 1 heads vs. coin 1 tails. 
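Case 1 can be checked mechanically (exact arithmetic; the world-encoding is my illustration):

```python
from fractions import Fraction

# Case 1: each created person is independently an Alice or a Bob.
# Worlds are keyed by room occupants in order; priors as in the bullet above.
prior = {
    "A":  Fraction(1, 4), "B":  Fraction(1, 4),
    "AA": Fraction(1, 8), "AB": Fraction(1, 8),
    "BA": Fraction(1, 8), "BB": Fraction(1, 8),
}

# SIA: scale each world's prior by its number of Alices
# (worlds with no Alices get weight zero and drop out).
weights = {w: prior[w] * w.count("A") for w in prior}
total = sum(weights.values())
posterior = {w: weights[w] / total for w in prior}

print(posterior["A"])                    # 1/3 -- "thirding" on coin 1 heads
print(posterior["AA"])                   # 1/3 (1/6 for each centered world within it)
print(posterior["AB"], posterior["BA"])  # 1/6 each
```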

Anthropics cases are sometimes ambiguous about whether they’re talking about cases of type 1 or of type 3. God’s coin toss is closer to case 1: e.g., you wake up as a person in a room, but we didn’t specify that God was literally making exact copies of you in the other rooms; your reasoning, though, treats his probability of giving any particular objective-world person your exact evidence as constant across people. Sleeping Beauty is often treated as more like case 3, but it’s compatible with being more of a case 1 type (e.g., if the experimenters also flip another coin on each waking, and leave it for Beauty to see, this doesn’t make a difference; and in general, the Beauties could have different subjective experiences on each waking, as long as — as far as Beauty knows — these variations in experience are independent of the coin toss outcome). I'm not super careful about these distinctions in the post, partly because actually splitting out all of the possible objective worlds in type-1 cases isn't really do-able (there's no well-defined distribution that God is "choosing from" when he creates each person in God's coin toss, but his choice is treated, from your perspective, as independent of the coin toss outcome); and as noted, SIA's verdicts end up the same.