I’ve been talking a lot about anthropic reasoning, so it’s only fair that I present probably the best-known thought experiment in this area: the Sleeping Beauty problem. Here’s a description of the problem from Wikipedia:
Sleeping Beauty volunteers to undergo the following experiment and is told all of the following details: On Sunday she will be put to sleep. Once or twice, during the experiment, Beauty will be awakened, interviewed, and put back to sleep with an amnesia-inducing drug that makes her forget that awakening. A fair coin will be tossed to determine which experimental procedure to undertake:
- If the coin comes up heads, Beauty will be awakened and interviewed on Monday only.
- If the coin comes up tails, she will be awakened and interviewed on Monday and Tuesday.
In either case, she will be awakened on Wednesday without interview and the experiment ends.
Any time Sleeping Beauty is awakened and interviewed she will not be able to tell which day it is or whether she has been awakened before. During the interview Beauty is asked: “What is your credence now for the proposition that the coin landed heads?”
There are two popular positions: the thirder position and the halfer position.
Thirders say: “Sleeping Beauty knows that she is in one of three situations: {Monday & Heads}, {Monday & Tails}, or {Tuesday & Tails}. All three of these situations are equally compatible with her experience (she can’t distinguish between them from the inside), so she should be indifferent about which one she is in. Thus there is a 1/3 chance of each, implying that there is a 1/3 chance of Heads and a 2/3 chance of Tails.”
Halfers say: “The coin is fair, so there is a 1/2 chance of Heads and Tails. When Sleeping Beauty wakes up, she gets no information that she didn’t have before (she would be woken up in either scenario). Since she has no new information, there is no reason to update her credences. So there is still a 1/2 chance of Heads and a 1/2 chance of Tails.”
I think that the Halfers are right. The anthropic information she could update on is the fact W = “I have been awakened.” We want to see what happens when we update our prior odds on W. Using Bayes’ rule:
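In odds form, writing H for Heads and T for Tails, and noting that she is certain to be awakened under either outcome:

$$\frac{P(H \mid W)}{P(T \mid W)} = \frac{P(W \mid H)}{P(W \mid T)} \cdot \frac{P(H)}{P(T)} = \frac{1}{1} \cdot \frac{1/2}{1/2} = 1, \qquad \text{so} \quad P(H \mid W) = \frac{1}{2}.$$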
The important feature of this calculation is that the likelihood ratio is 1. This is because both the theory that the coin landed Heads, and the theory that the coin landed Tails, predict with 100% confidence that Sleeping Beauty will be woken up. The fact that Sleeping Beauty is woken up twice if the coin comes up Tails and only once if the coin comes up Heads is, apparently, irrelevant to Bayes’ theorem.
However, Thirders also have a very strong response up their sleeves: “Let’s imagine that every time Sleeping Beauty is right, she gets $1. Suppose she always says that the coin landed Tails. Then if she is right, she gets $2… one dollar for each day that she is woken up. What if she always says that the coin landed Heads? Then if she is right, she only gets $1. In other words, if the setup is rerun a large number of times, the Sleeping Beauty that always says Tails earns twice as much money as the one that always says Heads. If Sleeping Beauty is indifferent between Heads and Tails, as you Halfers suggest, then she should have no preference about which one to say. But she would be wrong! She is better off thinking that Tails is more likely… in particular, she should think that Tails is two times more likely than Heads!”
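To make this accounting concrete, here is a quick Monte Carlo sketch of the thirder’s betting setup (the function name and bookkeeping are my own, not part of the original argument):

```python
import random

def average_winnings(guess: str, n_runs: int = 100_000) -> float:
    """Average dollars per run for always answering `guess` at every interview."""
    total = 0
    for _ in range(n_runs):
        coin = random.choice(["Heads", "Tails"])
        # Heads: one awakening (Monday); Tails: two awakenings (Monday and Tuesday).
        awakenings = 1 if coin == "Heads" else 2
        if guess == coin:
            total += awakenings  # $1 for each correct interview answer
    return total / n_runs

print(average_winnings("Heads"))  # ≈ 0.50
print(average_winnings("Tails"))  # ≈ 1.00: always-Tails earns twice as much
```

The roughly $0.50 and $1 per-run figures will matter again in the expected value calculation below.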
This is a response along the lines of “rationality should not function as a handicap.” I am generally very fond of such arguments, but I am uncomfortable with what this one implies here. If the above reasoning is correct, then Bayes’ theorem tells us to take a position that leaves us worse off. And if that is true, then it seems we’ve found a flaw in using Bayes’ theorem as a guide to rational belief-formation!
But maybe this is too hasty. Is it really true that an expected value calculation using 1/2 probabilities leaves her indifferent between saying that the coin landed Heads and saying that it landed Tails?
Plausibly not. If the coin lands Tails, then you have twice as many opportunities to make money. In addition, since your qualitative experience is identical on both of those opportunities, you should expect that whatever decision process you perform on Monday will be identical to the decision process you perform on Tuesday. Thus if Sleeping Beauty is a timeless decision theorist, she will treat her decision on both days as a single decision. What will she calculate?
Expected value of saying Heads = (50% chance of Heads) × ($1 gain for saying Heads on Monday) + (50% chance of Tails) × ($0) = $0.50

Expected value of saying Tails = (50% chance of Heads) × ($0) + (50% chance of Tails) × ($2 gain for saying Tails on both days) = $1

So the expected value of saying Tails is still higher even if you think that Heads and Tails are equally probable, provided that you know about subjunctive dependence and timeless decision theory!
I have to state a bias before making this comment. I think that anthropic reasoning is a mindset in search of a mission, and that applying it to the Sleeping Beauty Problem unnecessarily forces the solver into undefinable situations.
Example: In my opinion, the correct thirder solution is that there are four equiprobable outcomes, one for each combination of coin result and day during the two-day experiment: {Monday & Heads}, {Monday & Tails}, {Tuesday & Heads}, and {Tuesday & Tails}. Since Beauty is memory-wiped between the days, these are disjoint events to her; the anthropic approach is undefinable here because it ignores this disjointness entirely. In three of the events she will be awakened, and in one she won’t. But not being awakened does not mean the event does not happen: “unobservable” does not mean “does not happen.” The evidence that she has been awakened allows her to eliminate the one unobservable event, leaving three that are still disjoint and equally likely.
This can be easier to see if, rather than leaving her asleep, you take a different action on Tuesday after Heads. So, awaken and interview Beauty if the coin landed Tails, or if the day is Monday. On Tuesday, if the coin landed on Heads, you awaken Beauty and take her to Disneyworld. As she is getting ready to leave her room, she knows that there is a 3/4 chance of an interview and a 1/4 chance of an excursion. When she is actually interviewed, Bayes Rule says there is a 1/3 chance that the coin landed on Heads. The same application of Bayes Rule applies regardless of what happens on Tuesday after Heads, or even if she is left asleep. Her information is that one of the four possibilities is eliminated.
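Spelled out with Bayes Rule, where each coin-day combination carries a prior probability of 1/4:

$$P(\text{Heads} \mid \text{interview}) = \frac{P(\text{interview} \mid \text{Heads})\,P(\text{Heads})}{P(\text{interview})} = \frac{(1/2)(1/2)}{3/4} = \frac{1}{3}.$$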
Because he was trying to apply anthropic reasoning, Bostrom reduced the sample space to a smaller one twice rather than once, so that he never had to address the anthropic paradox of an unobservable outcome. Each reduction took the four-outcome sample space down to two equiprobable, observable events, and one event was common to both reduced spaces. From this he concluded that the three events remained equiprobable; he did not conclude that the unobservable one doesn’t happen, or that the three events constitute a prior to which the Principle of Indifference applies.
Betting arguments are bogus because the value of the wager depends on the outcome of the wager. You can make that dependence support any answer you want. But there is a solution that does not depend on any of these issues.
Use four volunteers. Waken and interview three of them on each day of the experiment. On Monday, leave SB1 asleep if the coin landed on Heads, or leave SB2 asleep if it landed on Tails. On Tuesday, change that to SB3 and SB4, respectively. Instead of asking for credence in a coin result, ask each volunteer for her credence that she will be wakened only once. Tell them all of these details. As long as no volunteer knows which schedule the others follow, it does not matter if she knows her own schedule, or meets the other two awake volunteers. Each awake volunteer can say with certainty that her credence is 1/3, and that it must be the same in the original problem.
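For anyone who wants to check this numerically, here is a simulation sketch of the four-volunteer setup (the schedule encoding and variable names are mine): for every interview, it tallies whether the interviewed volunteer is one who gets wakened only once in that run.

```python
import random

# Which volunteer stays asleep on each (day, coin) combination:
# Monday: SB1 sleeps on Heads, SB2 on Tails; Tuesday: SB3 on Heads, SB4 on Tails.
ASLEEP = {("Mon", "H"): 1, ("Mon", "T"): 2, ("Tue", "H"): 3, ("Tue", "T"): 4}

interviews = 0    # total interviews across all volunteers and both days
once = 0          # interviews belonging to a volunteer wakened only once that run

for _ in range(100_000):
    coin = random.choice(["H", "T"])
    for volunteer in (1, 2, 3, 4):
        times_awake = sum(ASLEEP[(day, coin)] != volunteer for day in ("Mon", "Tue"))
        interviews += times_awake   # one interview per day she is awake
        if times_awake == 1:
            once += 1               # her single interview that run

print(once / interviews)            # ≈ 1/3
```

Every run produces six interviews, and exactly two of them belong to a volunteer who is wakened only once, so the ratio settles at 1/3, matching each awake volunteer’s credence.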