The problem with the many worlds interpretation of quantum mechanics

The Schrödinger equation is the formula that describes the dynamics of quantum systems – how small stuff behaves.

One fundamental feature of quantum mechanics that differentiates it from classical mechanics is the existence of something called superposition. Just as a particle can be in the state of “being at position A” or in the state of “being at position B”, there’s a weird additional possibility: the particle can be in the state of “being in a superposition of being at position A and being at position B”. It’s necessary to introduce a new word for this type of state, since it’s not quite like anything we are used to thinking about.

Now, people often talk about a particle in a superposition of states as being in both states at once, but this is not technically correct. The behavior of a particle in a superposition of positions is not the behavior you’d expect from a particle that was at both positions at once. Suppose you sent a stream of small particles towards each position and looked to see if either one was deflected by the presence of a particle at that location. You would always find that exactly one of the streams was deflected. Never would you observe the particle having been in both positions, deflecting both streams.

But it’s also just as wrong to say that the particle is in either one state or the other. Again, particles simply do not behave this way. Throw a bunch of electrons, one at a time, through a pair of thin slits in a wall and see how they spread out when they hit a screen on the other side. What you’ll get is a pattern that is totally inconsistent with the image of the electrons always being either at one location or the other. Instead, the pattern you’d get only makes sense under the assumption that the particle traveled through both slits and then interfered with itself.

If a superposition of A and B is not the same as “A and B” and it’s not the same as “A or B”, then what is it? Well, it’s just that: a superposition! A superposition is something fundamentally new, with some of the features of “and” and some of the features of “or”. We can do no better than to describe the empirically observed features and then give that cluster of features a name.

Now, quantum mechanics tells us that for any two possible states that a system can be in, there is another state that corresponds to the system being in a superposition of the two. In fact, there’s an infinity of such superpositions, each corresponding to a different weighting of the two states.

The Schrödinger equation is what tells us how quantum mechanical systems evolve over time. And since all of nature is just one really big quantum mechanical system, the Schrödinger equation should also tell us how we evolve over time. So what does the Schrödinger equation tell us happens when we take a particle in a superposition of A and B and make a measurement of it?

The answer is clear and unambiguous: The Schrödinger equation tells us that we ourselves enter into a superposition of states, one in which we observe the particle in state A, the other in which we observe it in B. This is a pretty bizarre and radical answer! The first response you might have may be something like “When I observe things, it certainly doesn’t seem like I’m entering into a superposition… I just look at the particle and see it in one state or the other. I never see it in this weird in-between state!”

But this is not a good argument against the conclusion, as it’s exactly what you’d expect by just applying the Schrödinger equation! When you enter into a superposition of “observing A” and “observing B”, neither branch of the superposition observes both A and B. And naturally, since neither branch of the superposition “feels” the other branch, nobody freaks out about being superposed.

But there is a problem here, and it’s a serious one. The problem is the following: Sure, it’s compatible with our experience to say that we enter into superpositions when we make observations. But what predictions does it make? How do we take what the Schrödinger equation says happens to the state of the world and turn it into a falsifiable experimental setup? The answer appears to be that we can’t. At least, not using just the Schrödinger equation on its own. To get predictions out of the theory, we need an additional postulate, known as the Born rule.

This postulate says the following: For a system in a superposition, each branch of the superposition has an associated complex number called the amplitude. The probability of observing any particular branch of the superposition upon measurement is simply the squared magnitude of that branch’s amplitude.

For example: A particle is in a superposition of positions A and B. The amplitude attached to A is 0.6. The amplitude attached to B is 0.8. If we now observe the position of the particle, we will find it to be at either A with probability (0.6)^2 (i.e. 36%), or B with probability (0.8)^2 (i.e. 64%).
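Written out, with the probabilities coming from the squared magnitudes of the amplitudes (which, for a properly normalized superposition, sum to 1), the example reads:

P(A) = |0.6|^2 = 0.36  \\~\\  P(B) = |0.8|^2 = 0.64  \\~\\  P(A) + P(B) = 1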

Simple enough, right? The problem is to figure out where the Born rule comes from and what it even means. The rule appears to be completely necessary to make quantum mechanics a testable theory at all, but it can’t be derived from the Schrödinger equation. And it’s not at all inevitable; it could easily have been that the probabilities were gotten by taking the absolute values of the amplitudes rather than their squares. Or why not the fourth power of the amplitude? There’s a substantive claim here – that probabilities go as the squares of the amplitudes that appear in the Schrödinger equation – and it needs to be made sense of. There are a lot of different ways that people have tried to do this, and I’ll list a few of the more prominent ones here.

The Copenhagen Interpretation

(Prepare to be disappointed.) The Copenhagen interpretation, which has historically been the dominant position among working physicists, is that the Born rule is just an additional rule governing the dynamics of quantum mechanical systems. Sometimes systems evolve according to the Schrödinger equation, and sometimes according to the Born rule. When they evolve according to the Schrödinger equation, they split into superpositions endlessly. When they evolve according to the Born rule, they collapse into a single determinate state. What determines when the systems evolve one way or the other? Something measurement something something observation something. There’s no real consensus here, nor even a clear set of well-defined candidate theories.

If you’re familiar with the way that physics works, this idea should send your head spinning. The claim here is that the universe operates according to two fundamentally different laws, and that the dividing line between the two hinges crucially on what we mean by the words “measurement” and “observation”. Suffice it to say, if this was the right way to understand quantum mechanics, it would go entirely against the spirit of the goal of finding a fundamental theory of physics. In a fundamental theory of physics, macroscopic phenomena like measurements and observations need to be built out of the behavior of lots of tiny things like electrons and quarks, not the other way around. We shouldn’t find ourselves in the position of trying to give a precise definition to these words, debating whether frogs have the capacity to collapse superpositions or whether that requires a higher “measuring capacity”, in order to make predictions about the world (as proponents of the Copenhagen interpretation have in fact done!).

The Copenhagen interpretation is not an elegant theory, it’s not a clearly defined theory, and it’s fundamentally in tension with the project of theoretical physics. So why has it been, as I said, the dominant approach over the last century to understanding quantum mechanics? This really comes down to physicists not caring enough about the philosophy behind the physics to notice that the approach they are using is fundamentally flawed. In practice, the Copenhagen interpretation works. It allows somebody working in the lab to quickly assess the results of their experiments and to make predictions about how future experiments will turn out. It gives the right empirical probabilities and is easy to implement, even if the fuzziness in the details can start to make your head hurt if you think about it too much. As Jean Bricmont said, “You can’t blame most physicists for following this ‘shut up and calculate’ ethos because it has led to tremendous developments in nuclear physics, atomic physics, solid-state physics and particle physics.” But the Copenhagen interpretation is not good enough for us. A serious attempt to make sense of quantum mechanics requires something more substantive. So let’s move on.

Objective Collapse Theories

These approaches hinge on the notion that the Schrödinger equation really is the only law at work in the universe, it’s just that we have that equation slightly wrong. Objective collapse theories add slight nonlinearities to the Schrödinger equation so that systems sometimes spread out in superpositions and other times collapse into definite states, all according to one single equation. The most famous of these is the spontaneous collapse theory, according to which quantum systems collapse with a probability that grows with the number of particles in the system.

This approach is nice for several reasons. For one, it gives us the Born rule without requiring a new equation. It makes sense of the Born rule as a fundamental feature of physical reality, and makes precise and empirically testable predictions that can distinguish it from other interpretations. The drawback? It makes the Schrödinger equation ugly and complicated, and it adds extra parameters that determine how often collapse happens. And as we know, whenever you start adding parameters you run the risk of overfitting your data.

Hidden Variable Theories

These approaches claim that superpositions don’t really exist; they’re just a high-level consequence of the unusual behavior of the stuff at the smallest level of reality. They deny that the Schrödinger equation is truly fundamental, and say instead that it is a higher-level approximation of an underlying deterministic reality. “Deterministic?! But hasn’t quantum mechanics been shown conclusively to be indeterministic??” Well, not entirely. For a while there was a common sentiment amongst physicists that John von Neumann and others had proved beyond a doubt that no deterministic theory could make the predictions that quantum mechanics makes. Later, subtle mistakes were found in these purported proofs, leaving the door open for determinism. Today there are well-known, fleshed-out hidden variable theories that successfully reproduce the predictions of quantum mechanics, and do so fully deterministically.

The most famous of these is certainly Bohmian mechanics, also called pilot wave theory. Here’s a nice video on it if you’d like to know more, complete with pretty animations. Bohmian mechanics is interesting, appears to work, gives us the Born rule, and is probably empirically distinguishable from other theories (at least in principle). A serious issue with it is that it requires nonlocality, which poses a problem for any attempt to make it consistent with special relativity. Locality is such an important and well-understood feature of our reality that this constitutes a major challenge to the approach.

Many-Worlds / Everettian Interpretations

Ok, finally we talk about the approach that is most interesting in my opinion, and get to the title of this post. The Many-Worlds interpretation says, in essence, that we were wrong to ever want more than the Schrödinger equation. This is the only law that governs reality, and it gives us everything we need. Many-Worlders deny that superpositions ever collapse. The result of us performing a measurement on a system in superposition is simply that we end up in superposition, and that’s the whole story!

So superpositions never collapse, they just go deeper into superposition. There’s not just one you, there’s every you, spread across the different branches of the wave function of the universe. All these yous exist beside each other, living out all your possible life histories.

But then where does Many-Worlds get the Born rule from? Well, uh, it’s kind of a mystery. The Born rule isn’t an additional law of physics, because the Schrödinger equation is supposed to be the whole story. It’s not an a priori rule of rationality, because as we said before probabilities could have easily gone as the fourth power of amplitudes, or something else entirely. But if it’s not an a posteriori fact about physics, and also not an a priori knowable principle of rationality, then what is it?

This issue has seemed to me to be more and more important and challenging for Many-Worlds the more I have thought about it. It’s hard to see what exactly the rule is even saying in this interpretation. Say I’m about to make a measurement of a system in a superposition of states A and B. Suppose that I know the amplitude of A is much smaller than the amplitude of B. I need some way to say “I have a strong expectation that I will observe B, but there’s a small chance that I’ll see A.” But according to Many-Worlds, a moment from now both observations will be made. There will be a branch of the superposition in which I observe A, and another branch in which I observe B. So what I appear to need to say is something like “I am much more likely to be the me in the branch that observes B than the me that observes A.” But this is a really strange claim that leads us straight into the thorny philosophical issue of personal identity.

In what sense are we allowed to say that one and only one of the two resulting humans is really going to be you? Don’t both of them have equal claim to being you? They each have your exact memories and life history so far, the only difference is that one observed A and the other B. Maybe we can use anthropic reasoning here? If I enter into a superposition of observing-A and observing-B, then there are now two “me”s, in some sense. But that gives the wrong prediction! Using the self-sampling assumption, we’d just say “Okay, two yous, so there’s a 50% chance of being each one” and be done with it. But obviously not all binary quantum measurements we make have a 50% chance of turning out either way!

Maybe we can say that the world actually splits into some huge number of branches, maybe even infinite, and the fraction of the total branches in which we observe A is exactly the square of the amplitude of A? But this is not what the Schrödinger equation says! The Schrödinger equation tells us exactly what happens after we make the observation: we enter a superposition of two states, no more, no less. We’re importing a whole lot into our interpretive apparatus by interpreting this result as claiming the literal existence of an infinity of separate worlds, most of which are identical, and the distribution of which is governed by the amplitudes.

What we’re seeing here is that Many-Worlds, by being too insistent on the reality of the superposition, the sole sovereignty of the Schrödinger equation, and the unreality of collapse, ends up running into a lot of problems in actually doing what a good theory of physics is supposed to do: making empirical predictions. The Many-Worlders can of course use the Born Rule freely to make predictions about the outcomes of experiments, but they have little to say in answer to what, in their eyes, this rule really amounts to. I don’t know of any good way out of this mess.

Basically where this leaves me is where I find myself with all of my favorite philosophical topics: totally puzzled and unsatisfied with all of the options that I can see.

Rationality in the face of improbability

I recently read my favorite Wikipedia article of all time. It’s about a park ranger named Roy Cleveland Sullivan, whose claim to fame was having been hit by lightning on seven different occasions and surviving all of them. The details of these events are both tragic and a little hilarious, and raise some interesting questions about rationality.

From the article:

In spring 1972, Sullivan was working inside a ranger station in Shenandoah National Park when another strike occurred. It set his hair on fire; he tried to smother the flames with his jacket. He then rushed to the restroom, but couldn’t fit under the water tap and so used a wet towel instead. Although he never was a fearful man, after the fourth strike he began to believe that some force was trying to destroy him and he acquired a fear of death. For months, whenever he was caught in a storm while driving his truck, he would pull over and lie down on the front seat until the storm passed. He also began to believe that he would somehow attract lightning even if he stood in a crowd of people, and carried a can of water with him in case his hair was set on fire.

Put yourself in his situation and ask yourself if you might start doing the same thing after four times. Now what about if it kept happening?

On August 7, 1973, while he was out on patrol in the park, Sullivan saw a storm cloud forming and drove away quickly. But the cloud, he said later, seemed to be following him. When he finally thought he had outrun it, he decided it was safe to leave his truck. Soon after, he was struck by a lightning bolt.

The next strike, on June 5, 1976, injured his ankle. It was reported that he saw a cloud, thought that it was following him, tried to run away, but was struck anyway. His hair also caught fire.

He was struck the seventh time while fishing in a freshwater pool, which in a weird turn of events was followed by a confrontation with a bear over some trout that he had caught.

What’s more, Sullivan claimed to have been struck by lightning another time as a child, when out helping his father cut wheat in a field.

And furthermore…

Sullivan’s wife was also struck once, when a storm suddenly arrived as she was out hanging clothes in their backyard. Her husband was helping her at the time, but escaped unharmed.

Apparently his fear of lightning was a little contagious:

He was avoided by people later in life because of their fear of being hit by lightning, and this saddened him. He once recalled “For instance, I was walking with the Chief Ranger one day when lightning struck way off (in the distance). The Chief said, ‘I’ll see you later.'”

Okay, so besides being a hilariously weird series of events, this article does raise some issues related to anthropic reasoning. Namely: what would it be rational for Roy Sullivan to believe?

I want to say that this man had really really good evidence that some angry Thor-like deity existed and was actively hunting him down. In his position, I think I’d feel like it was only rational to try to run from approaching clouds and thunderstorms (although that strategy didn’t seem to be super effective for him).

But at the same time, in a world of billions of people, it’s almost guaranteed to be the case that somebody will find themselves in circumstances just as unlikely as this. If Sullivan had one day looked up lightning strike statistics, and found that the numbers for the overall population were perfectly consistent with a naturalistic hypothesis in which lightning doesn’t target any particular individuals, how should he have responded?

And what should we believe about Roy Sullivan and lightning? Presumably we should not accept his non-naturalistic conclusions. But then what exactly is the difference between what we know and what he knows? We both have the same statistical information about the general trends in lightning strikes, and we both know that Roy Cleveland Sullivan was hit by lightning a bunch of times, so why should we come to different conclusions?

The obvious thought here is that it has something to do with anthropic reasoning. Sure, I have the same non-anthropic evidence as Roy Sullivan, but we have different anthropic evidence. Sullivan doesn’t just know the comparatively unremarkable proposition that “somebody was hit by lightning seven times and survived,” he knows the indexical proposition that “I was hit by lightning seven times and survived.” The non-Sullivans of the world don’t have access to this proposition, and maybe  this is the difference that matters.

Perhaps any population will end up having some individuals that happen to find themselves in very unusual situations, in which it becomes rational for them to come to bizarre conclusions about the world for anthropic reasons. And the bigger the population, the stranger and more rationally certain these beliefs might become.  Imagine a population big enough that it becomes not unlikely that some individual walks around commanding Thor to send bolts of lightning where they’re pointing, and then lo and behold it happens each time.

There would be many many many more individuals out there who succeeded a few times, and even more who never succeeded at all. But for that tiny fraction that appears to manifest god-like powers, what should they believe? What should their friends and family believe? How far does the anthropic update extend? I’m not sure.

Making sense of improbability

Imagine that you take a coin that you believe to be fair and flip it 20 times. Each time it lands heads. You say to your friend: “Wow, what a crazy coincidence! There was a 1 in 2^20 chance of this outcome. That’s less than one in a million! Super surprising.”

Your friend replies: “I don’t understand. What’s so crazy about the result you got? Any other possible outcome (say, HHTHTTTHTHHHTHTTHHHH) had an equal probability as getting all heads. So what’s so surprising?”

Responding to this is a little tricky. After all, it is the case that for a fair coin, the probability of 20 heads = the probability of HHTHTTTHTHHHTHTTHHHH = roughly one in a million.

[Figure: Simpler example with five tosses]

So in some sense your friend is right that there’s something unusual about saying that one of these outcomes is more surprising than another.

You might answer by saying “Well, let’s parse up the possible outcomes by the number of heads and tails. The outcome I got had 20 heads and 0 tails. Your example outcome had 12 heads and 8 tails. There are many, many more ways of getting 12 heads and 8 tails than of getting 20 heads and 0 tails, right? And there’s only one way of getting all 20 heads. So that’s why it’s so surprising.”
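To put numbers on those counts:

\text{ways of getting 12 heads and 8 tails} = \binom{20}{12} = 125970  \\~\\  \text{ways of getting 20 heads and 0 tails} = 1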

[Figure: Probability vs. number of heads]

Your friend replies: “But hold on, now you’re just throwing out information. Sure my example outcome had 12 heads and 8 tails. But while there’s many ways of getting that number of heads and tails, there’s only exactly one way of getting the result I named! You’re only saying that your outcome is less likely because you’ve glossed over the details of my outcome that make it equally unlikely: the order of heads and tails!”

I think this is a pretty powerful response. What we want is a way to say that HHHHHHHHHHHHHHHHHHHH is surprising while HHTHTTTHTHHHTHTTHHHH is not, not that 20 heads is surprising while 12 heads and 8 tails is unsurprising. But it’s not immediately clear how we can say this.

Consider the information theoretic formalization of surprise, in which the surprisingness of an event E is proportional to the negative log of the probability of that event: Sur(E) = -log(P(E)). There are some nice reasons for this being a good definition of surprise, and it tells us that two equiprobable events should be equally surprising. If E is the event of observing all heads and E’ is the event of observing the sequence HHTHTTTHTHHHTHTTHHHH, then P(E) = P(E’) = 1/2^20. Correspondingly, Sur(E) = Sur(E’). So according to one reasonable formalization of what we mean by surprisingness, the two sequences of coin tosses are equally surprising. And yet, we want to say that there is something more epistemically significant about the first than the second.
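If it helps to make the numbers concrete, here is a minimal Python sketch of these surprise calculations. It uses nothing beyond the definition above; the specific sequence is the one from the dialogue.

from math import comb, log2

# Surprise of an event E, in bits: Sur(E) = -log2(P(E)), for 20 fair coin flips.
n = 20

# Any one specific sequence of 20 flips has probability (1/2)^20, so all-heads
# and HHTHTTTHTHHHTHTTHHHH are equally surprising: 20 bits each.
p_specific = 0.5 ** n
print(-log2(p_specific))                      # 20.0

# The coarse event "12 heads and 8 tails in some order" lumps together
# comb(20, 12) = 125970 sequences, so it is much less surprising.
p_coarse = comb(n, 12) * 0.5 ** n
print(-log2(p_coarse))                        # about 3.06
print(-log2(p_specific) / -log2(p_coarse))    # about 6.5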

(By the way, observing 20 heads is roughly 6.5 times more surprising than observing 12 heads and 8 tails, according to the above definition. We can plot the surprise curve to see how maximum surprise occurs at the two ends of the distribution, at which point it is 20 bits.)

[Figure: Surprise vs. number of heads]

So there is our puzzle: in what sense does it make sense to say that observing 20 heads in a row is more surprising than observing the sequence HHTHTTTHTHHHTHTTHHHH? We certainly have strong intuitions that this is true, but do these intuitions make sense? How can we ground the intuitive implausibility of getting 20 heads? In this post I’ll try to point towards a solution to this puzzle.

Okay, so I want to start out by categorizing three different perspectives on the observed sequence of coin tosses. These correspond to (1) looking at just the outcome, (2) looking at the way in which the observation affects the rest of your beliefs, and (3) looking at how the observation affects your expectation of future observations. In probability terms, these correspond to P(E), P(T | E), and P(E’ | E), where T is a theory under consideration and E’ is a possible future observation.

Looking at things through the first perspective, all outcomes are equiprobable, so there is nothing more epistemically significant about one than the other.

But considering the second way of thinking about things, there can be big differences in the significance of two equally probable observations. For instance, suppose that our set of theories under consideration is just the set of all possible biases of the coin, and our credences are initially peaked at 0.5 (an unbiased coin). Observing HHTHTTTHTHHHTHTTHHHH does little to change our beliefs. It shifts them a little bit in the direction of a bias towards heads, but not significantly. On the other hand, observing all heads should have a massive effect on your beliefs, skewing them exponentially in the direction of extreme heads biases.

Importantly, since we’re looking at beliefs about coin bias, our distributions are now insensitive to any details about the coin flip beyond the number of heads and tails! As far as our beliefs about the coin bias go, finding only the first 8 to be tails looks identical to finding the last 8 to be tails. We’re not throwing out the information about the particular pattern of heads and tails, it’s just become irrelevant for the purposes of consideration of the possible biases of the coin.

[Figure: Visualizing the change in beliefs about coin bias]

If we want to give a single value to quantify the difference in epistemic states resulting from the two observations, we can try looking at features of these distributions. For instance, we could look at the change in entropy of our distribution if we see E and compare it to the change in entropy upon seeing E’. This gives us a measure of how different observations might affect our uncertainty levels. (In our example, observing HHTHTTTHTHHHTHTTHHHH decreases uncertainty by about 0.8 bits, while observing all heads decreases uncertainty by 1.4 bits.) We could also compare the means of the posterior distributions after each observation, and see which is shifted most from the mean of the prior distribution. (In this case, our two means are 0.57 and 0.91).

Now, this was all looking at things through what I called perspective #2 above: how observations affect beliefs. Sometimes a more concrete way to understand the effect of intuitively implausible events is to look at how they affect specific predictions about future events. This is the approach of perspective #3. Sticking with our coin, we ask not about the bias of the coin, but about how we expect it to land on the next flip. To assess this, we look at the posterior predictive distributions for each posterior:

[Figure: Posterior predictive distributions]

It shouldn’t be too surprising that observing all heads makes you more confident that the next flip will land heads than observing HHTHTTTHTHHHTHTTHHHH does. But looking at this graph gives a precise answer to how much more confident you should be. And it’s somewhat easier to think about than the entire distribution over coin biases.
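If you want to play with this yourself, here is a rough Python sketch of perspectives #2 and #3. The exact prior over biases isn’t pinned down above (only that it’s peaked at 0.5), so the shape used below is my own assumption, and the printed numbers will only qualitatively match the ones quoted earlier.

import numpy as np

# Discretized prior over the coin's bias, peaked at 0.5. The exact shape is an
# assumed choice; the entropy changes, posterior means, and predictive
# probabilities all depend on it.
thetas = np.linspace(0.005, 0.995, 199)              # candidate values of P(heads)
prior = np.exp(-0.5 * ((thetas - 0.5) / 0.15) ** 2)
prior /= prior.sum()

def entropy_bits(p):
    return float(-(p * np.log2(p + 1e-300)).sum())   # tiny epsilon guards log(0)

for label, heads, tails in [("all heads", 20, 0), ("12 heads, 8 tails", 12, 8)]:
    likelihood = thetas ** heads * (1 - thetas) ** tails
    posterior = prior * likelihood
    posterior /= posterior.sum()

    # Perspective #2: how the observation reshapes beliefs about the bias.
    mean_bias = float((thetas * posterior).sum())
    delta_entropy = entropy_bits(posterior) - entropy_bits(prior)

    # Perspective #3: for a single next flip, the posterior predictive
    # probability of heads is just the posterior mean of the bias.
    print(f"{label}: posterior mean bias {mean_bias:.2f}, "
          f"entropy change {delta_entropy:.2f} bits, "
          f"P(next flip lands heads) = {mean_bias:.2f}")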

I’ll leave you with an example puzzle that relates to anthropic reasoning.

Say that one day you win the lottery. Yay! Super surprising! What an improbable event! But now compare this to the event that some stranger Bob Smith wins the lottery. This doesn’t seem so surprising. But supposing that Bob Smith buys lottery tickets at the same rate as you, the probability that you win is identical to the probability that Bob Smith wins. So… why is it any more surprising when you win?

This seems like a weird question. Then again, so did the coin-flipping question we started with. We want to respond with something like “I’m not saying that it’s improbable that some random person wins the lottery. I’m interested in the probability of me winning the lottery. And if we parse up the outcomes as that either I win the lottery or that somebody else wins the lottery, then clearly it’s much more improbable that I win than that somebody else wins.”

But this is exactly parallel to the earlier “I’m not interested in the precise sequence of coin flips, I’m just interested in the number of heads versus tails.” And the response to it is identical in form: If Bob Smith, a particular individual whose existence you are aware of, wins the lottery and you know it, then it’s cheating to throw away those details and just say “Somebody other than me won the lottery.” When you update your beliefs, you should take into account all of your evidence.

Does the framework I presented here help at all with this case?

Anthropic reasoning in everyday life

Thought experiment from a past post:

A stranger comes up to you and offers to play the following game with you: “I will roll a pair of dice. If they land snake eyes (i.e. they both land 1), you give me one dollar. Otherwise, if they land anything else, I give you a dollar.”

Do you play this game?

[…]

Now imagine that the stranger is playing the game in the following way: First they find one person and offer to play the game with them. If the dice land snake eyes, then they collect a dollar and stop playing the game. Otherwise, they find ten new people and offer to play the game with them. Same as before: snake eyes, the stranger collects $1 from each and stops playing, otherwise he moves on to 100 new people. Et cetera forever.

When we include this additional information about the other games the stranger is playing, the thought experiment becomes identical in form to the dice killer thought experiment. Thus updating on the anthropic information that you have been offered the game gives a 90% chance of snake eyes, which means you have a 90% chance of losing a dollar and only a 10% chance of gaining a dollar. Apparently you should now not take the offer!
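Roughly, the 90% comes from the structure of the rounds: whenever the dice finally land snake eyes, the final round contains 10^(n-1) of the (10^n - 1)/9 people who have been made the offer so far, so about nine tenths of everyone who ever plays is in the round that loses:

\frac{10^{n-1}}{\frac{10^n - 1}{9}} = \frac{9 \cdot 10^{n-1}}{10^n - 1} \approx \frac{9}{10}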

This seems a little weird. Shouldn’t it be irrelevant whether the game is being offered to other people? To an anthropic reasoner, the answer is a resounding no. It matters who else is, or might be, playing the game, because it gives us additional information about our place in the population of game-players.

Thus far this is nothing new. But now we take one more step: Just because you don’t know the spatiotemporal distribution of game offers doesn’t mean that you can ignore it!

So far the strange implications of anthropic reasoning have been mostly confined to bizarre thought experiments that don’t seem too relevant to the real world. But the implication of this line of reasoning is that anthropic calculations bleed out into ordinary scenarios. If there is some anthropically relevant information that would affect your probabilities, then you need to consider the probability that this information obtains, and factor it into your calculation.

In other words, if somebody comes up to you and makes you the offer described above, you can’t just calculate the expected value of the game and make your decision. Instead, you have to consider all possible distributions of game offers, calculate the probability of each, and average over the implied probabilities! This is no small task.

For instance, suppose that you have a 50% credence that the game is being offered only one time to one person: you. The other 50% is given to the “dice killer” scenario: that the game is offered in rounds to a group that decuples in size each round, and that this continues until the dice finally land snake-eyes. Presumably you then have to average over the expected value of playing the game for each scenario.

EV_1 = \$1 \cdot \frac{35}{36} - \$1 \cdot \frac{1}{36} = \$ \frac{34}{36} \approx \$0.94 \\~\\ EV_2 = \$1 \cdot 0.1 - \$1 \cdot 0.9 = - \$0.80 \\~\\ EV = 0.50 \cdot EV_1 + 0.50 \cdot EV_2 \approx \$0.07

In this case, the calculation wasn’t too bad. But that’s because it was highly idealized. In general, representing your knowledge of the possible distributions of games offered seems quite difficult. But the more crucial point is that it is apparently not enough to go about your daily life calculating the expected value of the decisions facing you. You have to also consider who else might be facing the same decisions, and how this influences your chances of winning.

Can anybody think of a real-life example where these considerations change the sign of the expected value calculation?

Adam and Eve’s Anthropic Superpowers

The truly weirdest consequences of anthropic reasoning come from a cluster of thought experiments I’ll call the Adam and Eve thought experiments. These thought experiments all involve agents leveraging anthropic probabilities in their favor in bizarre ways that appear as if they have picked up superpowers. We’ll get there by means of a simple thought experiment designed to pump your intuitions in favor of our eventual destination.

In front of you is a jar. This jar contains either 10 balls or 100 balls. The balls are numbered in order from 1 to either 10 or 100. You reach in and pull out a ball, and find that it is numbered ‘7’. Which is now more likely: that the jar contains 10 balls or that it contains 100 balls? (Suppose that you were initially evenly split between the two possibilities.)

The answer should be fairly intuitive: It is now more likely that the jar contains ten balls.

If the jar contains 10 balls, you had a 1/10 chance of drawing #7. On the other hand, if the jar contains 100 balls you had a 1/100 chance of drawing #7. This corresponds to a likelihood ratio of 10:1 in favor of the jar having ten balls. Since your prior odds in the two possibilities were 1:1, your posterior odds should be 10:1. Thus the posterior probability of 10 balls is 10/11, or roughly 91%.
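In odds form, that’s just Bayes’ rule:

\frac{P(10 \text{ balls} \mid \text{drew } 7)}{P(100 \text{ balls} \mid \text{drew } 7)} = \frac{P(\text{drew } 7 \mid 10 \text{ balls})}{P(\text{drew } 7 \mid 100 \text{ balls})} \cdot \frac{P(10 \text{ balls})}{P(100 \text{ balls})} = \frac{1/10}{1/100} \cdot \frac{1}{1} = \frac{10}{1}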

Now, let’s apply this same reasoning to something more personal. Imagine two different theories of human history. In the first theory, there are 200 billion people that will ever live. In the second, there are 2 trillion people that will ever live. We want to update on the anthropic evidence of our birth order in the history of humanity.

There have been roughly 100 billion people that ever lived, so our birth order is about 100 billion. The self-sampling assumption says that this is just like drawing a ball from a jar that contains either 200 billion numbered balls or 2 trillion numbered balls, and finding that the ball you drew is numbered 100 billion.

The likelihood ratio you get is 10:1, so your posterior odds are ten times more favorable for the “200 billion total humans” theory than for the “2 trillion total humans”. If you were initially evenly split between these two, then noticing your birth order should bring you to a 91% confidence in the ‘extinction sooner’ hypothesis.

This line of reasoning is called the Doomsday argument, and it leads down into quite a rabbit hole. I don’t want to explore that rabbit hole quite yet. For the moment, let’s just note that ordinary Bayesian updating on your own birth order favors theories in which there are fewer total humans to ever live. The strength of this update depends linearly on the ratio of the population sizes being compared: comparing Theory 1 (100 people) to Theory 2 (100 trillion people) gives a likelihood ratio of one trillion in favor of Theory 1 over Theory 2. So in general, it appears that we should be extremely confident that extremely optimistic pictures of a long future for humanity are wrong. The more optimistic, the less likely.
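In general, under the self-sampling assumption, if your birth rank r is possible under both theories, the likelihood ratio depends only on the two hypothesized population sizes (N_1 and N_2 are just my labels here):

\frac{P(\text{birth rank } r \mid N_1 \text{ total humans})}{P(\text{birth rank } r \mid N_2 \text{ total humans})} = \frac{1/N_1}{1/N_2} = \frac{N_2}{N_1}

This is where the factor of one trillion in the comparison above comes from.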

With this in mind, let’s go on to Adam and Eve.

Suspend your disbelief for a moment and imagine that there were at some point just two humans on the face of the Earth – Adam and Eve. This fateful couple gave rise to all of human history, and we are all their descendants. Now, put yourself in their perspective.

From this perspective, there are two possible futures that might unfold. In one of them, the two original humans procreate and start the chain of actions leading to the rest of human history. In the other, the two original humans refuse to procreate, thus preventing human history from happening. For the sake of this thought experiment, let’s imagine that Adam and Eve know that these are the only two possibilities (that is, suppose that there’s no scenario in which they procreate and have kids, but then those kids die off or otherwise prevent the occurrence of history as we know it).

By the above reasoning, Adam and Eve should expect that the second of these is enormously more likely than the first. After all, if they never procreate and eventually just die off, then their birth orders are 1 and 2 out of a grand total of 2. If they do procreate, though, then their birth orders are 1 and 2 out of at least 100 billion. This is 50 billion times less likely than the alternative!

Now, the unusual bit of this comes from the fact that it seems like Adam and Eve have control over whether or not they procreate. For the sake of the thought experiment, imagine that they are both fertile, and they can take actions that will certainly result in pregnancy. Also assume that if they don’t procreate, Eve won’t get accidentally pregnant by some unusual means.

This control over their procreation, coupled with the improbability of their procreation, allows them to wield apparently magical powers. For instance, Adam is feeling hungry and needs to go out and hunt. He makes a firm commitment with Eve: “I shall wait for an hour for a healthy young deer to die in front of our cave entrance. If no such deer dies, then we will procreate and have children, leading to the rest of human history. If such a deer does die, then we will not procreate, and will guarantee that we don’t have kids for the rest of our lives.”

Now, there’s some low prior on a healthy young deer just dying right in front of them. Let’s say it’s something like 1 in a billion. Thus our prior odds are 1:1,000,000,000 against Adam and Eve getting their easy meal. But now when we take into account the anthropic update, it becomes 50 billion times more likely that the deer does die, because this outcome has been tied to the nonexistence of the rest of human history. The likelihood ratio here is 50,000,000,000:1. So our posterior odds will be 50:1 in favor of the deer falling dead, just as the two anthropic reasoners desire! This is a roughly 98% chance of a free meal!
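In odds form (deer dies : no deer):

\frac{1}{10^9} \cdot \frac{5 \times 10^{10}}{1} = \frac{50}{1}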

This is super weird. It sure looks like Adam is able to exercise telekinetic powers to make deer drop dead in front of him at will. Clearly something has gone horribly wrong here! But the argument appears to be totally sound, conditional on the acceptance of the principles we started off with. All that is required is that we allow ourselves to update on evidence of the form “I am the Nth human being to have been born.” (as well as the very unusual setup of the thought experiment).

This setup is so artificial that it can be easy to just dismiss it as not worth considering. But now I’ll give another thought experiment of the same form that is much less outlandish, so much so that it may actually one day occur!

Here goes…

The year is 2100 AD, and independent, autonomous countries are no more. The entire planet is governed by a single planetary government which has an incredibly tight grip on everything that happens under their reign. Anything that this World Government dictates is guaranteed to be carried out, and there is no serious chance that it will lose power. Technology has advanced to the point that colonization of other planets is easily feasible. In fact, the World Government has a carefully laid out plan for colonization of the entire galaxy. The ships have been built and are ready for deployment, the target planets have been identified, and at the beckoning of the World Government galactic colonization can begin, sure to lead to a vast sprawling Galactic Empire of humanity.

Now, the World Government is keeping these plans at a standstill for a very particular reason: they are anthropic reasoners! They recognize that by firmly committing to either carry out the colonization or not, they are able to wield enormous anthropic superpowers and do things that would otherwise be completely impossible.

For instance, a few years ago scientists detected a deadly cosmic burst from the Sun headed towards Earth. Scientists warned that given the angle of the blast, there was only a 1% chance that it would miss Earth. In addition, they assessed that if the blast hit Earth, it would surely result in the extinction of everybody on the planet.

The World Government made the following plans: They launched tens of millions of their colonizing ships and had them wait for further instructions in a region safe from the cosmic rays. Then, the World Government instructed that if the cosmic rays hit Earth, the ships commence galactic colonization. Otherwise, if the cosmic rays missed Earth, the ships should return to Earth’s surface and abandon the colonization plans.

Why these plans? Well, because by tying the outcome of the cosmic ray collision to the future history of humanity, they leverage in their favor the enormous improbability of their early birth order given galactic colonization! If a Galactic Empire future contains 1 billion times more total humans than an Earth-bound future, then the prior odds of 1:100 against the cosmic rays missing Earth get multiplied by a likelihood ratio of 1,000,000,000:1 in favor of the cosmic rays missing Earth. The result is posterior odds of 10,000,000:1 in favor of a miss, or a posterior confidence of 99.99999% that the cosmic rays will miss!
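Explicitly, writing the odds as (miss : hit), the update is:

\frac{1}{10^2} \cdot \frac{10^9}{1} = \frac{10^7}{1}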

In short, by wielding tight control over decisions about the future existence of enormous numbers of humans, the World Government is able to exercise apparently magical abilities. Cosmic rays headed their way? No problem, just threaten the universe with a firm commitment to send out their ships to colonize the galaxy. The improbability of this result makes the cosmic rays “swerve”, keeping the people of Earth safe.

The reasoning employed in this story should be very disturbing to you. The challenge is to explain how to get out of its conclusion.

One possibility is to deny anthropic reasoning. But then we also have to deny all the ordinary everyday cases of anthropic reasoning that seem thoroughly reasonable (and necessary for rationality). (See here.)

We could deny that updating on our indexical evidence of our place in history actually has the effects that I’ve said it has. But this seems wrong as well. The calculation done here is the exact type of calculation we’ve done in less controversial scenarios.

We could accept the conclusions. But that seems out of the question. Probabilities are supposed to be a useful tool we apply to make sense of a complicated world. In the end, the world is not run by probabilities, but by physics. And physics doesn’t have clauses that permit cosmic rays to swerve out of the way for clever humans with plans of conquering the stars.

There’s actually one other way out. But I’m not super fond of it. This way out is to craft a new anthropic principle, specifically for the purpose of counteracting the anthropic considerations in cases like these. The name of this anthropic principle is the Self-Indication Assumption. This principle says that theories that predict more observers like you become more likely in virtue of this fact.

Self Indication Assumption: The prior probability of a theory is proportional to the number of observers like you it predicts will exist.

Suppose we have narrowed it down to two possible theories of the universe. Theory 1 predicts that there will be 1 million observers like you. Theory 2 predicts that there will be 1 billion observers like you. The self indication assumption says that before we assess the empirical evidence for these two theories, we should place priors in them that make Theory 2 one thousand times more likely than Theory 1.

It’s important to note that this principle is not about updating on evidence. It’s about the setting of priors. If we update on our existence, this favors neither Theory 1 nor Theory 2. Both predict the existence of an observer like us with 100% probability, so the likelihood ratio is 1/1, and no update is performed.

Instead, we can think of the self indication assumption as saying the following: Look at all the possible observers that you could be, in all theories of the world. You should distribute your priors evenly across all these possible observers. Then for each theory you’re interested in, just compile these priors to determine its prior probability.

When I first heard of this principle, my reaction was “…WHY???” For whatever reason, it just made no sense to me as a procedure for setting priors. But it must be said that this principle beautifully solves all of the problems I’ve brought up in this post. Remember that the improbability being wielded in every case was the improbability of existing in a world where there are many other future observers like you. The self indication assumption tilts our priors in these worlds exactly in the opposite direction, perfectly canceling out the anthropic update.

So, for instance, let’s take the very first thought experiment in this post. Compare a theory of the future in which 200 billion people exist total to a theory in which 2 trillion people exist total. We said that our birth order gives us a likelihood ratio of 10:1 in favor of the first theory. But now the self indication assumption tells us that our prior odds should be 1:10, in favor of the second theory! Putting these two anthropic principles together, we get 1:1 odds. The two theories are equally likely! This seems refreshingly sane.
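In odds form, with both sets of odds written for the 200-billion theory against the 2-trillion theory, the two effects cancel exactly:

\text{SIA prior odds} \cdot \text{birth-order likelihood ratio} = \frac{1}{10} \cdot \frac{10}{1} = \frac{1}{1}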

As far as I currently know, this is the only way out of the absurd results of the thought experiments I’ve presented in this post. The self-indication assumption seems really weird and unjustified to me, but I think a really good argument for it is just that it restores sanity to rationality in the face of anthropic craziness.

Some final notes:

Applying the self-indication assumption to the sleeping beauty problem (discussed here) makes you a thirder instead of a halfer. I previously defended the halfer position on the grounds that (1) the prior on Heads and Tails should be 50/50 and (2) Sleeping Beauty has no information to update on. The self-indication assumption leads to the denial of (1). Since there are different numbers of observers like you if the coin lands Heads versus if it lands Tails, the prior odds should not be 1:1. Instead they should be 2:1 in favor of Tails, reflecting the fact that there are two possible “you”s (two awakenings) if the coin lands Tails, and only one if the coin lands Heads.

In addition, while I’ve presented the thought experiments that show the positives of the self-indication assumption, there are cases where the self-indication assumption gives very bizarre answers. I won’t go into them now, but I do want to plant a flag here to indicate that the self-indication assumption is by no means uncontroversial.

Sleeping Beauty Problem

I’ve been talking a lot about anthropic reasoning, so it’s only fair that I present what’s probably the most well-known thought experiment in this area: the sleeping beauty problem. Here’s a description of the problem from Wiki:

Sleeping Beauty volunteers to undergo the following experiment and is told all of the following details: On Sunday she will be put to sleep. Once or twice, during the experiment, Beauty will be awakened, interviewed, and put back to sleep with an amnesia-inducing drug that makes her forget that awakening. A fair coin will be tossed to determine which experimental procedure to undertake:

  • If the coin comes up heads, Beauty will be awakened and interviewed on Monday only.
  • If the coin comes up tails, she will be awakened and interviewed on Monday and Tuesday.

In either case, she will be awakened on Wednesday without interview and the experiment ends.

Any time Sleeping Beauty is awakened and interviewed she will not be able to tell which day it is or whether she has been awakened before. During the interview Beauty is asked: “What is your credence now for the proposition that the coin landed heads?”

There are two popular positions: the thirder position and the halfer position.

Thirders say: “Sleeping Beauty knows that she is in one of three situations: {Monday & Heads}, {Monday & Tails}, or {Tuesday & Tails}. All three of these situations are equally compatible with her experience (she can’t distinguish between them from the inside), so she should be indifferent about which one she is in. Thus there is a 1/3 chance of each, implying that there is a 1/3 chance of Heads and a 2/3 chance of Tails.”

Halfers say: “The coin is fair, so there is a 1/2 chance of Heads and Tails. When Sleeping Beauty wakes up, she gets no information that she didn’t have before (she would be woken up in either scenario). Since she has no new information, there is no reason to update her credences. So there is still a 1/2 chance of Heads and a 1/2 chance of Tails.”

I think that the Halfers are right. The anthropic information she could update on is the fact W = “I have been awakened.” We want to see what happens when we update our prior odds with respect to W. Using Bayes rule we get…

\frac {P(H)} {P(T)} = \frac {1/2} {1/2} = 1  \\~\\  \frac {P(H | W)} {P(T | W)} = \frac {P(W | H)} {P(W | T)} \cdot \frac {P(H)} {P(T)} = \frac {1} {1} \cdot \frac {1/2} {1/2} = 1 \ \  \\~\\  So \ P(H | W) = \frac{1}{2}, \ P(T | W) = \frac{1}{2}

The important feature of this calculation is that the likelihood ratio is 1. This is because both the theory that the coin landed Heads, and the theory that the coin landed Tails, predict with 100% confidence that Sleeping Beauty will be woken up. The fact that Sleeping Beauty is woken up twice if the coin comes up Tails and only once if the coin comes up Heads is, apparently, irrelevant to Bayes’ theorem.

However, Thirders also have a very strong response up their sleeves: “Let’s imagine that every time Sleeping Beauty is right, she gets $1. Now, suppose that Sleeping Beauty always says that the coin landed Tails. Now if she is right, she gets $2… one dollar for each day that she is woken up. What if she always says that the coin lands Heads? Then if she is right, she only gets $1. In other words, if the setup is rerun some large amount of times, the Sleeping Beauty that always says Tails gets twice as much money as the Sleeping Beauty that says Heads. If Sleeping Beauty is indifferent between Heads and Tails, as you Halfers suggest, then she would not have any preference about which one to say. But she would be wrong! She is better off by thinking Tails is more likely… in particular, she should think that Tails is two times more likely than Heads!”

This is a response along the lines of “rationality should not function as a handicap.” I am generally very fond of these arguments, but am uncomfortable with what it implies here. If the above reasoning is correct, then Bayes’ theorem tells us to take a position that leaves us worse off. And if this is true, then it seems we’ve found a flaw in using Bayes’ theorem as a guide to rational belief-formation!

But maybe this is too hasty. Is it really true that an expected value calculation using 1/2 probabilities will result in being indifferent between saying that the coin will land Heads and saying that it will land Tails?

Plausibly not. If the coin lands Tails, then you have twice as many opportunities to make money. In addition, since your qualitative experience is identical on both of these opportunities, you should expect that whatever decision process you perform on Monday will be identical to the decision process on Tuesday. Thus if Sleeping Beauty is a timeless decision theorist, she will see her decision on both days as a single decision. What will she calculate?

Expected value of saying Heads = (50% chance of Heads) · ($1 gain for being right on Monday) + (50% chance of Tails) · $0 = $0.50

Expected value of saying Tails = (50% chance of Heads) · $0 + (50% chance of Tails) · ($2 gain for being right on both Monday and Tuesday) = $1

So the expected value of saying Tails is still higher even if you think that the probabilities of Heads and Tails are equal, provided that you know about subjunctive dependence and timeless decision theory! The halfer ends up making exactly the bet that the Thirder recommends, so assigning 1/2 credences doesn’t function as a handicap after all.

Infinities in the anthropic dice killer thought experiment

It’s time for yet another post about the anthropic dice killer thought experiment. 😛

In this post, I’ll point out some features of the thought experiment that have gone unmentioned in this blog thus far. Perhaps it is in these features that we can figure out how to think about this the right way.

First of all, there are lots of hidden infinities in the thought experiment. And as we’ve seen before, where there are infinities, things start getting really wacky. Perhaps some of the strangeness of the puzzle can be chalked up to these infinities.

For instance, we stipulated that the population from which people are being kidnapped is infinite. This was to allow for the game to go on for arbitrarily many rounds, but it leads to some trickiness. As we saw in the last post, it becomes important to calculate the probability of a particular individual being kidnapped if randomly drawn from the population. But… what is this probability if the population is infinite? The probability of selecting a particular person from an infinite population is just like the probability of picking 5 if randomly selecting from all natural numbers: zero!

Things get a little conceptually tricky here. Imagine that you’re randomly selecting a real number between 0 and 1. The probability that you select any particular number is zero. But at the same time, you will end up selecting some number. Whichever number you end up selecting is a number that you would have said had a 0% chance of being selected! For situations like these, the term “almost never” is used. Rather than saying that any particular number is impossible, you say that it will “almost never” be picked. While this linguistic trick might make you feel less uneasy about the situation, there still seems to be some remaining confusion to be resolved here.

So in the case of our thought experiment, no matter how many rounds the game ends up going on, you have a 0% chance of being kidnapped. At the same time, by stipulation you have been kidnapped. Making sense of this is only the first puzzle. The second one is to figure out if it makes sense to talk about some theories making it more likely than others that you’ll be kidnapped (is \frac{11}{\infty} smaller than \frac{111}{\infty} ?)

An even more troubling infinity is in the expected number of people that are kidnapped. No matter how many rounds end up being played, there are always only a finite number of people that are ever kidnapped. But let’s calculate the expected number of people that play the game.

Number \  by \ n^{th} \ round = \frac{10^n - 1}{9}  \\~\\  \sum\limits_{n=1}^{\infty} { \frac{35^{n-1}}{36^n} \cdot \frac{10^n - 1}{9} }

But wait, this sum diverges! To see this, let’s just for a moment consider the expected number of people in the last round:

Number \  in \ n^{th} \ round = 10^{n-1}  \\~\\  \sum\limits_{n=1}^{\infty} { \frac{35^{n-1}}{36^n} \cdot 10^{n-1} }  = \frac{1}{36} \sum\limits_{n=1}^{\infty} { \left( \frac{350}{36} \right)^{n-1} }

Since \frac{350}{36} > 1 , this sum diverges. So on average there are an infinite number of people on the last round (even though the last round always contains a finite number of people). Correspondingly, the expected number of people kidnapped is infinite.

Why might these infinities matter? Well, one reason is that there is a well known problem with playing betting games against sources with infinite resources. Consider the Martingale betting system:

A gambler makes a bet of $1 at some odds. If they win, then good for them! Otherwise, if they lose, they bet $2 on the same odds. If they lose this time, they double down again, betting $4. And so on until eventually they win. The outcome of this is that by the time they win, they have lost \$(1 + 2 + 4 + ... + 2^n) and gained \$ 2^{n+1} . This is a net gain of $1. In other words, no matter what the odds they are betting on, this betting system guarantees a gain of $1 with probability 100%.

However, this guaranteed $1 only applies if the gambler can continue doubling down arbitrarily long. If they have a finite amount of money, then at some point they can no longer double down, and they suffer an enormous loss. For a gambler with finite resources, they stand a very good chance of gaining $1 and a very tiny chance of losing massively. If you calculate the expected gain, it turns out to be no better than what you expect from any ordinary betting system.
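
Here's a rough simulation of the Martingale system with a finite bankroll, betting on a fair coin flip. The $1000 bankroll and the number of trials are arbitrary choices for illustration:

```python
import random

def martingale_session(bankroll, win_probability=0.5):
    """Play the Martingale system (double after every loss) until we win once
    or can no longer afford the next bet. Returns the net gain or loss."""
    wealth = bankroll
    bet = 1
    while bet <= wealth:
        if random.random() < win_probability:
            return wealth + bet - bankroll   # won: net +$1 overall
        wealth -= bet
        bet *= 2
    return wealth - bankroll                 # busted before ever winning

trials = 100_000
average = sum(martingale_session(bankroll=1000) for _ in range(trials)) / trials

# With a fair coin the true expected value is exactly $0; the simulation hovers near it.
print(f"Average net gain with a $1000 bankroll: ${average:.3f}")
```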

Summing up: With finite resources, continually doubling down gives no advantage on average. But with infinite resources, continually doubling down gives a guaranteed profit. Hopefully you see the similarity to the dice killer thought experiment. With an infinite population to draw from, the killer can keep “doubling down” (actually “decupling” down) until they finally get their “payout”: killing all of their current captives. On the other hand, with a finite population, the killer eventually loses the ability to get a new group of 10x the population of the previous one and lets everybody free. In this case, exactly like the Martingale system, the odds for a kidnappee end up coming out to the prior odds of 1/36.

What this indicates is that at least some of the weirdness of the dice killer scenario can be chalked up to the exploitability of infinities by systems like the Martingale system. If you have been kidnapped by the dice killer, you should think that your odds are 90% only if you know you are drawn from an infinite population. Otherwise, your odds should come out to 1/36.

But now consider the following: If you are a casino owner, should you allow into your casino a person with infinite money? Clearly not! It doesn’t matter how much of a bias the games in the casino give in favor of the house. An infinitely wealthy person can always exploit this infinity to give themselves an advantage.

But what about allowing a person with infinite money into your casino to place a single bet? In this case, I think that the answer is yes, you should allow them. After all, with only a finite number of bets, the odds still come out in favor of the house. This is actually analogous to the original dice killer puzzle! You are only selected in one round, and know that you will not be selected at any other time. So perhaps the infinity does not save us here.

One final point. It looks like a lot of the weirdness of this thought experiment is the same type of weirdness as you get from infinitely wealthy people using the Martingale betting system. But now we can ask: Is it possible to construct a variant of the dice killer thought experiment in which the anthropic calculation differs from the non-anthropic calculation, AND the expected number of people kidnapped is finite? It doesn’t seem obvious to me that this is impossible. Since the expected number of captives takes the form of an infinite sum in which the number of people by the Nth round gets multiplied by roughly (\frac{35}{36})^N , all that is required is that the number of people by the Nth round grow more slowly than (\frac{36}{35})^N (say, geometrically with some smaller ratio). Then the anthropic calculation should still give a different answer from the non-anthropic calculation, and we can place the chance of escape in between these two. Now we have a finite expected number of captives, but a reversal in decision depending on whether you update on anthropic evidence or not. Perhaps I’ll explore this more in future posts.
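
Here's a rough numeric sketch of what such a variant could look like. The 1.02 growth factor per round is an arbitrary choice (anything below 36/35 keeps the sum convergent), and round sizes are treated as real numbers purely for illustration:

```python
# Dice killer variant with slow growth: round n contains growth**(n-1) "people".
# Compare (a) the expected total number of captives (now finite) and
# (b) the prior-weighted average fraction of captives in the last round,
#     which is the quantity the anthropic argument cares about, against 1/36.

growth = 1.02          # arbitrary; must be below 36/35 for the expected total to converge
max_rounds = 5000      # truncation; the tail is negligible by this point

def prior_end(n):
    return (35 / 36) ** (n - 1) * (1 / 36)

expected_total = sum(
    prior_end(n) * (growth ** n - 1) / (growth - 1) for n in range(1, max_rounds + 1)
)
avg_last_round_fraction = sum(
    prior_end(n) * growth ** (n - 1) * (growth - 1) / (growth ** n - 1)
    for n in range(1, max_rounds + 1)
)

print("Expected number of captives:", expected_total)
print("Average fraction in last round:", avg_last_round_fraction, "vs prior 1/36 =", 1 / 36)
```

Since the n = 1 term alone contributes 1/36, this prior-weighted average necessarily comes out above 1/36, even though the expected number of captives is now finite.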

Not a solution to the anthropic dice killer puzzle

I recently came up with what I thought was a solution to the dice killer puzzle. It turns out that I was wrong, but in the process of figuring this out I discovered a few subtleties in the puzzle that I had missed first time around.

First I’ll repost the puzzle here:

A mad killer has locked you in a room. You are trapped and alone, with only your knowledge of your situation to help you out.

One piece of information that you have is that you are aware of the maniacal schemes of your captor. His plans began by capturing one random person. He then rolled a pair of dice to determine their fate. If the dice landed snake eyes (both 1), then the captive would be killed. If not, then they would be let free.

But if they are let free, the killer will search for new victims, and this time bring back ten new people and lock them alone in rooms. He will then determine their fate just as before, with a pair of dice. Snake eyes means they die, otherwise they will be let free and he will search for new victims.

His murder spree will continue until the first time he rolls snake eyes. Then he will kill the group that he currently has imprisoned and retire from the serial-killer life.

Now. You become aware of a risky escape route out of the room you are locked in. The chances of surviving this escape are only 50%. Your choices are thus either (1) to traverse the escape route with a 50% chance of survival or (2) to just wait for the killer to roll his dice, and hope that it doesn’t land snake eyes.

What should you do?

As you’ll recall, there are two conflicting estimates of the probability that you will die: 1/36 and above 90%. Briefly, the arguments for each are…

Argument 1  The probability of the dice landing snake eyes is 1/36. If the dice land snake eyes, you die. So the probability that you die is 1/36.

Argument 2  The probability that you are in the last round is above 90%. Everybody in the last round dies. So the probability that you die is above 90%.

The puzzle is to explain what is wrong with the second argument, given its unintuitive consequences. So, here’s an attempt at a resolution!

Imagine that you find out that you’re in the fourth round with 999 other people. The probability that you’re interested in is the probability that the fourth round is the last round (which is equivalent to the fourth round being the round in which you get snake-eyes and thus die). To calculate this, we want to consider all possible worlds (i.e. all possible number of rounds that the game might go for) and calculate the probability weight for each.

In other words, we want to be able to calculate P(Game ends on the Nth round) for every N. We can calculate this a priori by just considering the conditions for the game ending on the Nth round. This happens if the dice roll something other than snake eyes N-1 times and then snake eyes once, on the final round. Thus the probability should be:

P(\text{Game ends on the } N^{th} \text{ round}) = \left( \frac{35}{36} \right)^{N-1} \cdot \frac{1}{36}

Now, to calculate the probability that the game ends on the fourth round, we just plug in N = 4 and we’re done!
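
Just as a quick sanity check, here's that prior in code (nothing beyond the formula above):

```python
def prior_game_ends_on(n):
    """Prior probability that the killer first rolls snake eyes on round n."""
    return (35 / 36) ** (n - 1) * (1 / 36)

print(prior_game_ends_on(4))                                  # about 0.0255
print(sum(prior_game_ends_on(n) for n in range(1, 10_000)))   # very close to 1
```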

But hold on. There’s an obvious problem with this approach. If you know that you’ve been kidnapped on the fourth round, then you should have zero credence that the game ended on the third, second, or first rounds. But the probability calculation above gives a non-zero credence to each of these scenarios! What’s gone wrong?

Answer: While the probability above is the right prior probability for the game progressing to the Nth round, what we actually want is the posterior probability, conditioned on the information that you have about your own kidnapping.

In other words, we’re not interested in the prior probability P(Game ends on the Nth round). We’re interested in the conditional probability P(Game ends on the Nth round | I was kidnapped in the fourth round). To calculate this requires Bayes’ rule.

P(N \text{ total rounds} \mid \text{You are in the fourth round}) = \frac{P(\text{You are in the fourth round} \mid N \text{ total rounds}) \cdot P(N \text{ total rounds})}{P(\text{You are in the fourth round})}

The top term P(You are in the fourth round | N total rounds) is zero whenever N is less than four, which is a good sign. But what happens when N is ≥ 4? Does the probability grow with N or shrink?

Intuitively, we might think that if there are a very large number of rounds, then it is very unlikely that we are in the fourth one. Taking into account the 10x growth in the number of people each round, the theory that there are N rounds (for any N > 4) strongly predicts that you are in the Nth round, not the fourth. The larger N is, the more strongly it predicts that you are not in the fourth round. In other words, the update on your being in the fourth round strongly favors possible worlds in which the fourth round is the last one.

But this is not the whole story! There’s another update to be considered. Remember that in this setup, you exist as a member of a boundless population and are at some point kidnapped. We can ask the question: How likely is it that you would have been kidnapped if there were N rounds?  Clearly, the more rounds there are before the game ends, the more people are kidnapped, and so the higher chance you have of being kidnapped in the first place! This means that we should expect it to be very likely that the fourth round is not the last round, because worlds in which the fourth round is not the last one contain many more people, thus making it more likely that you would have been kidnapped at all.

In other words, we can break our update into two components: (1) that you were kidnapped, and (2) that it was in the fourth round that you were kidnapped. The first of these updates strongly favors theories in which you are not in the last round. The second strongly favors theories in which you are in the last round. Perhaps, if we’re lucky, these two updates cancel out, leaving us with only the prior probability based on the objective chance of the dice rolling snake eyes (1/36)!

Recapping: If we know which round we are in, then when we update on this information, the probability that this round is the last one is just equal to the objective chance that the dice roll lands snake eyes (1/36). Since this should be true no matter what particular round we happen to be in, we should be able to preemptively update on being in the Nth round (for some N) and bring our credence to 1/36.
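
Here's a sketch of the calculation this line of thought suggests, under the simplifying (and hypothetical) assumption that victims are drawn uniformly from a large but finite population; both components of the update are applied explicitly. The population size and the truncation point of the sum are arbitrary choices:

```python
# Posterior over "the game ends on round N" for someone who knows they were
# kidnapped in round 4, assuming victims are drawn from a finite population.
# P(kidnapped, and in round 4 | N rounds) = (size of round 4) / POPULATION for N >= 4, else 0.

POPULATION = 10 ** 12   # hypothetical finite population size
MAX_ROUNDS = 500        # truncation point for the sum over N

def prior(n):
    return (35 / 36) ** (n - 1) * (1 / 36)

def round_size(k):
    return 10 ** (k - 1)

def likelihood(n):
    """P(I was kidnapped, and specifically in round 4 | game lasts n rounds)."""
    return round_size(4) / POPULATION if n >= 4 else 0.0

unnormalized = {n: prior(n) * likelihood(n) for n in range(1, MAX_ROUNDS + 1)}
total = sum(unnormalized.values())
posterior_round_4_is_last = unnormalized[4] / total
print(posterior_round_4_is_last)   # comes out to (essentially) 1/36, as the argument above suggests
```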

This is the line of thought that I had a couple of days ago, which I thought pointed the way to a solution to the anthropic dice killer puzzle. But unfortunately… I was wrong. It turns out that even when we consider both of these updates, we still end up with a probability > 90% of being in the last round.

Here’s an intuitive way to think about why this is the case.

In the solution I wrote up in my initial post on the anthropic dice killer thought experiment, I gave the following calculation:

P(\text{death}) = \sum\limits_{N=1}^{\infty} P(\text{Game ends on the } N^{th} \text{ round}) \cdot (\text{fraction that die if it ends on round } N) = \sum\limits_{N=1}^{\infty} \frac{35^{N-1}}{36^N} \cdot \frac{9 \cdot 10^{N-1}}{10^N - 1}

Basically, we look at the fraction of people that die if the game ends on the Nth round, calculate the probability of the game ending on the Nth round, and then average the fraction over all possible N. This gives us the average fraction of people that die in the last round.
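
In code, that calculation looks something like the following. Note that it uses the prior probabilities of the game ending on each round, which is exactly the step identified as the mistake below:

```python
# Average fraction of captives who die, weighting each possible game length N
# by the *prior* probability that the game ends on round N.

def prior(n):
    return (35 / 36) ** (n - 1) * (1 / 36)

def fraction_dying(n):
    """If the game ends on round n: (people in round n) / (total kidnapped) = 9*10^(n-1) / (10^n - 1)."""
    return 9 * 10 ** (n - 1) / (10 ** n - 1)

average = sum(prior(n) * fraction_dying(n) for n in range(1, 1000))
print(average)   # a little over 0.90 (about 0.903)
```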

We now know that this calculation was wrong. The place where I went wrong was in calculating the chance of getting snake eyes in the nth round. The probability I wrote was the prior probability, where what we want instead is the posterior probability after performing an anthropic update on the fact of your own kidnapping.

So maybe if we plug in the correct values for these probabilities, we’ll end up getting a saner answer!

Unfortunately, no. The fraction of people that die starts at 100% and then gradually decreases, converging to 90% in the limit (the limit of \frac{1000...}{1111...} is .9). This means that no matter what probabilities we plug in there, the average fraction of people that die will be greater than 90%. (If every possible value of a quantity is above 90%, then no weighted average of those values can come out below 90%.)

\sum\limits_{N=1}^{\infty} P(N) \cdot \frac{9 \cdot 10^{N-1}}{10^N - 1} > \sum\limits_{N=1}^{\infty} P(N) \cdot \frac{9}{10} = \frac{9}{10} \quad \text{for any probability distribution } P(N)

This means that without even calculating the precise posterior probabilities, we can confidently say that the average probability of death must be greater than 90%. And therefore our proposed solution fails, and the mystery remains.

It’s worth noting that even if our calculation had come out with the conclusion that 1/36 was the actual average chance of death, we would still have a little explaining to do. Namely, it actually is the case that the average person does better by trying to escape (i.e. acting as if the probability of their death is greater than 90%) than by staying around (i.e. acting as if the probability of their death is 1/36).

This is something that we can say with really high confidence: accepting the apparent anthropic calculation of 90% leaves you better off on average than rejecting it. On its own, this is a very powerful argument for accepting 90% as the answer. The rational course of action should not be one that causes us to lose where winning is an option.

Pushing anti-anthropic intuitions

A stranger comes up to you and offers to play the following game with you: “I will roll a pair of dice. If they land snake eyes (i.e. they both land 1), you give me one dollar. Otherwise, if they land anything else, I give you a dollar.”

Do you play this game?

Here’s an intuitive response: Yes, of course you should! You have a 35/36 chance of gaining $1, and only a 1/36 chance of losing $1. You’d have to be quite risk averse to refuse those odds.

What if the stranger tells you that they are giving this same bet to many other people? Should that change your calculation?

Intuitively: No, of course not! It doesn’t matter what else the stranger is doing with other people.

What if they tell you that they’ve given this offer to people in the past, and might give the offer to others in the future? Should that change anything?

Once again, it seems intuitively not to matter. The offers given to others simply have nothing to do with you. What matters are your possible outcomes and the probabilities of each of these outcomes. And what other people are doing has nothing to do with either of these.

… Right?

Now imagine that the stranger is playing the game in the following way: First they find one person and offer to play the game with them. If the dice land snake eyes, then they collect a dollar and stop playing the game. Otherwise, they find ten new people and offer to play the game with them. Same as before: snake eyes, the stranger collects $1 from each and stops playing; otherwise, they move on to 100 new people. Et cetera forever.

We now ask the question: How does the average person who is given the offer do if they take it? Well, no matter how many rounds of offers the stranger gives, at least 90% of the people who receive an offer end up in the last round. That means that at least 90% of people end up handing over $1 and at most 10% gain $1. Accepting is clearly net negative on average!

Think about it this way: Imagine a population of individuals who all take the offer, and compare them to a population that all reject the offer. Which population does better on average?

For the population that takes the offer, the average person loses money: the expected gain per person is at most 10% (+$1) + 90% (-$1) = -$0.80. For the population that rejects the offer, nobody gains or loses anything: the average outcome is exactly $0. And $0 is better than -$0.80, so the strategy of rejecting the offer is better, on average!
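
Here's a quick Monte Carlo sketch of this comparison, simulating the stranger's game to completion many times and averaging over everyone who accepts (the number of simulated games is an arbitrary choice):

```python
import random

def play_game():
    """Run the stranger's game to completion: keep decupling the group until the
    dice land snake eyes. Returns (number of winners, number of losers)."""
    winners, group = 0, 1
    while random.randint(1, 36) != 1:   # 1/36 chance of snake eyes each round
        winners += group                # this whole group gains $1 each
        group *= 10
    return winners, group               # the final group hands over $1 each

total_gain, total_people = 0, 0
for _ in range(100_000):
    winners, losers = play_game()
    total_gain += winners - losers
    total_people += winners + losers

# The per-person average is always at most -$0.80, since every completed game
# has over 90% of its participants in the final, losing round.
print("Average gain per person who accepts:", total_gain / total_people)
```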

This thought experiment is very closely related to the dice killer thought experiment. I think of it as a variant that pushes our anti-anthropic-reasoning intuitions. It just seems really wrong to me that if somebody comes up to you and offers you this deal that has a 35/36 chance of paying out you should reject it. The details of who else is being offered the deal seem totally irrelevant.

But of course, all of the previous arguments I’ve made for anthropic reasoning apply here as well. And it is just true that the average person that rejects the offer does better than the average person that accepts it. Perhaps this is just another bullet that we have to bite in our attempt to formalize rationality!

A closer look at anthropic tests for consciousness

(This post is the culmination of my last week of posts on anthropics and conservation of expected evidence.)

In this post, I described how anthropic reasoning can apparently give you a way to update on theories of consciousness. This is already weird enough, but I want to make things a little weirder. I want to present an argument that in fact anthropic reasoning implies that we should be functionalists about consciousness.

But first, a brief recap (for more details see the post linked above):

P(\text{snake eyes} \mid F) \approx 0.9, \qquad P(\text{snake eyes} \mid S) \approx 0.1

Thus…

\frac{P(F \mid \text{snake eyes})}{P(S \mid \text{snake eyes})} = 9 \cdot \frac{P(F)}{P(S)}, \qquad \frac{P(S \mid \text{not snake eyes})}{P(F \mid \text{not snake eyes})} = 9 \cdot \frac{P(S)}{P(F)}

Whenever this experiment is run, roughly 90% of experimental subjects observe snake eyes, and roughly 10% observe not snake eyes. What this means is that 90% of the people update in favor of functionalism (by a factor of 9), and only 10% of people update in favor of substrate dependence theory (also by a factor of 9).

Now suppose that we have a large population that starts out completely agnostic on the question of functionalism vs. substrate dependence. That is, the prior ratio for each individual is 1:

\frac{P(F)}{P(S)} = 1

Now imagine that we run arbitrarily many dice-killer experimental setups on the population. We would see an upwards drift in the average beliefs of the population towards functionalism. And in the limit of infinite experiments, we would see complete convergence towards functionalism as the correct theory of consciousness.
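
Here's a little simulation of that drift, using the 90/10 split and the factor-of-9 updates described above (the population size and number of trials are arbitrary):

```python
import random

# Each person starts agnostic (odds F:S = 1). In each dice-killer trial, roughly
# 90% of subjects observe snake eyes and multiply their odds in favor of
# functionalism by 9; the other 10% divide their odds by 9.

population = 10_000
trials = 50
odds = [1.0] * population   # odds of functionalism : substrate dependence

for t in range(1, trials + 1):
    for i in range(population):
        if random.random() < 0.9:
            odds[i] *= 9     # observed snake eyes: update toward functionalism
        else:
            odds[i] /= 9     # observed not-snake-eyes: update toward substrate dependence
    if t in (1, 5, 10, 25, 50):
        mean_credence = sum(o / (1 + o) for o in odds) / population
        print(f"After {t} trials, average credence in functionalism: {mean_credence:.3f}")
```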

Now, the only remaining ingredient is what I’ve been going on about the past two days: if you can predict beforehand that a piece of evidence is going to make you on average more functionalist, then you should preemptively update in favor of functionalism.

What we end up with is the conclusion that, by considering the counterfactual infinity of experimental results we could receive, we should become arbitrarily confident that functionalism is correct.

To be clear, the argument is the following:

  1. If we were to be members of a population that underwent arbitrarily many dice-killer trials, we would converge towards functionalism.
  2. Conservation of expected evidence: if you can predict beforehand which direction some observation would move you, then you should pre-emptively adjust your beliefs in that direction.
  3. Thus, we should preemptively converge towards functionalism.

Premise 1 follows from a basic application of anthropic reasoning. We could deny it, but doing so amounts to denying the self-sampling assumption and ensuring that you will lose in anthropic games.

Premise 2 follows from the axioms of probability theory. It is more or less the statement that you should update your beliefs with evidence, even if this evidence is counterfactual information about the possible results of future experiments.

(If this sounds unintuitive to you at all, consider the following thought experiment: We have two theories of cosmology, one in which 99% of people live in Region A and 1% in Region B, and the other in which 1% live in Region A and 99% in Region B. We now ask where we expect to find ourselves. If we expect to find ourselves in Region A, then we must have higher credence in the first theory than the second. And if we initially did not have this higher credence, then considering the counterfactual question “Where would I find myself if I were to look at which region I am in?” should cause us to update in favor of the first theory.)
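
To spell out the arithmetic behind that parenthetical: writing p for your credence in the first theory (and 1 - p for the second),

P(\text{I am in Region A}) = 0.99 \, p + 0.01 \, (1 - p) = 0.01 + 0.98 \, p

which is above 1/2 exactly when p > 1/2. So expecting to find yourself in Region A just is having higher credence in the first theory.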

Altogether, this argument looks really bulletproof to me. And yet its conclusion seems very wrong.

Can we really conclude with arbitrarily high certainty that functionalism is correct by just going through this sort of armchair reasoning from possible experimental results that we will never do? Should we now be hardcore functionalists?

I’m not quite sure yet what the right way to think about this is. But here is one objection I’ve thought of.

We have only considered one possible version of the dice killer thought experiment (in which the experimenter starts off with 1 human, then chooses 1 human and 9 androids, then 1 human and 99 androids, and so on). In this version, observing snake eyes was evidence for functionalism over substrate dependence theory, which is what causes the population-wide drift towards functionalism.

We can ask, however, if we can construct a variant of the dice killer thought experiment in which snake eyes counts as evidence for substrate dependence theory over functionalism. If so, then we could construct an experimental setup that we can predict beforehand will end up with us converging with arbitrary certainty to substrate dependence theory!

Let’s see how this might be done. We’ll imagine the set of all variants on the thought experiment (that is, the set of all choices the dice killer could make about how many humans and androids to kidnap in each round.)

h_n = \text{number of humans kidnapped in round } n, \qquad a_n = \text{number of androids kidnapped in round } n

For ease of notation, we’ll abbreviate functionalism and substrate dependence theory as F and S respectively.


And we’ll also introduce a convenient notation for the total number of humans and the total number of androids ever kidnapped by round N.

H_N = \sum\limits_{n=1}^{N} h_n, \qquad A_N = \sum\limits_{n=1}^{N} a_n

Now, we want to calculate the probability of snake eyes given functionalism in this general setup, and compare it to the probability of snake eyes given substrate dependence theory. The first step will be to consider the probability of snake eyes if  the experiment happens to end on the nth round, for some n. This is just the number of individuals in the last round divided by the total number of kidnapped individuals.

P(\text{snake eyes} \mid F, \text{ game ends on round } n) = \frac{h_n + a_n}{H_n + A_n}, \qquad P(\text{snake eyes} \mid S, \text{ game ends on round } n) = \frac{h_n}{H_n}

Now, we calculate the average probability of snake eyes (the average fraction of individuals in the last round).

P(\text{snake eyes} \mid F) = \sum\limits_{n=1}^{\infty} \frac{35^{n-1}}{36^n} \cdot \frac{h_n + a_n}{H_n + A_n}, \qquad P(\text{snake eyes} \mid S) = \sum\limits_{n=1}^{\infty} \frac{35^{n-1}}{36^n} \cdot \frac{h_n}{H_n}

The question is thus if we can find a pair of sequences

(h_n)_{n=1}^{\infty}, \qquad (a_n)_{n=1}^{\infty}

such that snake eyes comes out more likely under substrate dependence theory than under functionalism:

\sum\limits_{n=1}^{\infty} \frac{35^{n-1}}{36^n} \cdot \frac{h_n}{H_n} > \sum\limits_{n=1}^{\infty} \frac{35^{n-1}}{36^n} \cdot \frac{h_n + a_n}{H_n + A_n}

It seems hard to imagine that there are no such pairs of sequences that satisfy this inequality, but thus far I haven’t been able to find an example. For now, I’ll leave it as an exercise for the reader!
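
For anyone who wants to poke at this numerically, here's a rough sketch of how a candidate pair of sequences can be tested. The truncation point and the example schedule are arbitrary placeholders, not a proposed answer:

```python
# Compare P(snake eyes | functionalism) and P(snake eyes | substrate dependence)
# for a candidate kidnapping schedule, truncating the sums after many rounds.

def prior_end(n):
    return (35 / 36) ** (n - 1) * (1 / 36)

def snake_eyes_probabilities(humans, androids):
    """humans[k], androids[k]: numbers kidnapped in round k+1 (0-indexed lists)."""
    H = A = 0
    p_functionalism = p_substrate = 0.0
    for n, (h, a) in enumerate(zip(humans, androids), start=1):
        H, A = H + h, A + a
        p_functionalism += prior_end(n) * (h + a) / (H + A)
        p_substrate += prior_end(n) * h / H
    return p_functionalism, p_substrate

# Example candidate schedule: one human per round, android counts growing tenfold.
rounds = 200
humans = [1] * rounds
androids = [9 * 10 ** k for k in range(rounds)]
pF, pS = snake_eyes_probabilities(humans, androids)
print(pF, pS, "snake eyes favors S" if pS > pF else "snake eyes favors F")
```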

If there are no such pairs of sequences, then it is tempting to take this as extremely strong evidence for functionalism. But I am concerned about this whole line of reasoning. What if there are a few such pairs of sequences? What if there are far more in which functionalism is favored than those in which substrate dependence is favored? What if there are an infinity of each?

While I buy each step of the argument, it seems wrong to say that the right thing to do is to consider the infinite set of all possible anthropic experiments you could do, and then somehow average over the results of each to determine the direction in which we should update our theories of consciousness. Indeed, I suspect that any such averaging procedure would be vulnerable to arbitrariness in the way that the experiments are framed, such that different framings give different results.

At this point, I’m pretty convinced that I’m making some fundamental mistake here, but I’m not sure exactly where this mistake is. Any help from readers would be greatly appreciated. 🙂