# Sleeping Beauty Problem

I’ve been talking a lot about anthropic reasoning, so it’s only fair that I present what is probably the best-known thought experiment in the area: the Sleeping Beauty problem. Here’s a description of the problem from Wikipedia:

Sleeping Beauty volunteers to undergo the following experiment and is told all of the following details: On Sunday she will be put to sleep. Once or twice, during the experiment, Beauty will be awakened, interviewed, and put back to sleep with an amnesia-inducing drug that makes her forget that awakening. A fair coin will be tossed to determine which experimental procedure to undertake:

• If the coin comes up heads, Beauty will be awakened and interviewed on Monday only.
• If the coin comes up tails, she will be awakened and interviewed on Monday and Tuesday.

In either case, she will be awakened on Wednesday without interview and the experiment ends.

Any time Sleeping Beauty is awakened and interviewed she will not be able to tell which day it is or whether she has been awakened before. During the interview Beauty is asked: “What is your credence now for the proposition that the coin landed heads?”

There are two popular positions: the thirder position and the halfer position.

Thirders say: “Sleeping Beauty knows that she is in one of three situations: {Monday & Heads}, {Monday & Tails}, or {Tuesday & Tails}. All three of these situations are equally compatible with her experience (she can’t distinguish between them from the inside), so she should be indifferent about which one she is in. Thus there is a 1/3 chance of each, implying that there is a 1/3 chance of Heads and a 2/3 chance of Tails.”
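The counting behind the thirder position can be illustrated with a quick simulation (a sketch of my own, not part of the original argument): tally the three awakening-situations across many runs of the experiment and look at their relative frequencies among all awakenings.

```python
import random

# Monte Carlo sketch of the thirder's counting: each run of the experiment
# produces one awakening if Heads, two if Tails. We then ask what fraction
# of all awakenings falls into each of the three situations.
random.seed(0)
counts = {"Monday & Heads": 0, "Monday & Tails": 0, "Tuesday & Tails": 0}

for _ in range(100_000):
    if random.random() < 0.5:          # Heads: one awakening, on Monday
        counts["Monday & Heads"] += 1
    else:                              # Tails: awakened Monday and Tuesday
        counts["Monday & Tails"] += 1
        counts["Tuesday & Tails"] += 1

total = sum(counts.values())
for situation, n in counts.items():
    print(f"{situation}: {n / total:.3f}")   # each is roughly 1/3 of awakenings
```

Note what this does and doesn’t show: among awakenings, each situation occurs about a third of the time, which is exactly the frequency the thirder is pointing at. Whether that frequency is the right thing to identify with Beauty’s credence is the question in dispute.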

Halfers say: “The coin is fair, so there is a 1/2 chance of Heads and Tails. When Sleeping Beauty wakes up, she gets no information that she didn’t have before (she would be woken up in either scenario). Since she has no new information, there is no reason to update her credences. So there is still a 1/2 chance of Heads and a 1/2 chance of Tails.”

I think that the Halfers are right. The anthropic information she could update on is the fact W = “I have been awakened.” We want to see what happens when we update our prior odds on W. Using Bayes’ rule, we get:

$$\frac{P(H)}{P(T)} = \frac{1/2}{1/2} = 1$$

$$\frac{P(H \mid W)}{P(T \mid W)} = \frac{P(W \mid H)}{P(W \mid T)} \cdot \frac{P(H)}{P(T)} = \frac{1}{1} \cdot \frac{1/2}{1/2} = 1$$

So $P(H \mid W) = \frac{1}{2}$ and $P(T \mid W) = \frac{1}{2}$.

The important feature of this calculation is that the likelihood ratio is 1. This is because both the theory that the coin landed Heads, and the theory that the coin landed Tails, predict with 100% confidence that Sleeping Beauty will be woken up. The fact that Sleeping Beauty is woken up twice if the coin comes up Tails and only once if the coin comes up Heads is, apparently, irrelevant to Bayes’ theorem.
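The odds-form update above can be spelled out in a few lines of code (a sketch; the variable names are mine). The whole halfer argument lives in the fact that both likelihoods are 1, so the likelihood ratio is 1 and the prior odds pass through unchanged:

```python
# Odds-form Bayes update for W = "I have been awakened".
p_heads, p_tails = 0.5, 0.5      # fair coin prior
p_w_given_heads = 1.0            # Beauty is certainly awakened if Heads
p_w_given_tails = 1.0            # ...and certainly awakened if Tails

likelihood_ratio = p_w_given_heads / p_w_given_tails
posterior_odds = likelihood_ratio * (p_heads / p_tails)
p_heads_given_w = posterior_odds / (1 + posterior_odds)

print(posterior_odds, p_heads_given_w)   # 1.0 0.5
```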

However, Thirders also have a very strong response up their sleeves: “Let’s imagine that every time Sleeping Beauty is right, she gets \$1. Now, suppose that Sleeping Beauty always says that the coin landed Tails. If she is right, she gets \$2: one dollar for each day that she is woken up. What if she always says that the coin landed Heads? Then if she is right, she only gets \$1. In other words, if the setup is rerun some large number of times, the Sleeping Beauty that always says Tails earns twice as much money as the Sleeping Beauty that always says Heads. If Sleeping Beauty is indifferent between Heads and Tails, as you Halfers suggest, then she should have no preference about which one to say. But she would be wrong! She is better off thinking that Tails is more likely; in particular, she should think that Tails is twice as likely as Heads!”

This is a response along the lines of “rationality should not function as a handicap.” I am generally very fond of such arguments, but I am uncomfortable with what they imply here. If the above reasoning is correct, then Bayes’ theorem tells us to take a position that leaves us worse off. And if that is true, then it seems we’ve found a flaw in using Bayes’ theorem as a guide to rational belief-formation!

But maybe this is too hasty. Is it really true that an expected value calculation using 1/2 probabilities will leave Sleeping Beauty indifferent between saying that the coin landed Heads and saying that it landed Tails? Plausibly not. If the coin lands Tails, she has twice as many opportunities to make money. In addition, since her qualitative experience is identical on both of those opportunities, she should expect that whatever decision process she performs on Monday will be repeated exactly on Tuesday. So if Sleeping Beauty is a timeless decision theorist, she will treat her answers on both days as a single decision. What will she calculate?
Expected value of saying Heads = 50% chance of Heads $\cdot$ \$1 gain for saying Heads on Monday + 50% chance of Tails $\cdot$ \$0 = \$0.50

Expected value of saying Tails = 50% chance of Heads $\cdot$ \$0 + 50% chance of Tails $\cdot$ \$2 gain for saying Tails on both Monday and Tuesday = \$1

So the expected value of saying Tails is still higher even if you think that the probabilities of Heads and Tails are equal, provided that you know about subjunctive dependence and timeless decision theory!
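The betting setup itself is easy to check numerically. Here is a Monte Carlo sketch (my own; the function name is hypothetical) in which Beauty commits to one fixed answer as a single policy and earns \$1 for each awakening on which that answer matches the coin:

```python
import random

# Because Beauty's Monday and Tuesday decision processes are identical,
# "always say X" is a single policy applied across the whole experiment.
random.seed(0)

def average_winnings(policy: str, trials: int = 100_000) -> float:
    """Average dollars earned per run of the experiment under a fixed answer."""
    total = 0
    for _ in range(trials):
        coin = "Heads" if random.random() < 0.5 else "Tails"
        awakenings = 1 if coin == "Heads" else 2   # Heads: Monday only; Tails: both days
        if policy == coin:
            total += awakenings                    # $1 per correct awakening
    return total / trials

print(average_winnings("Heads"))   # roughly $0.50 per run
print(average_winnings("Tails"))   # roughly $1.00 per run
```

The simulation agrees with the thirder’s observation that the Tails-sayer earns twice as much, while using nothing but a fair coin: the asymmetry comes from the number of paying awakenings, not from the coin’s probabilities.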