Bayesianism does great when the true model of the world is included in the set of models that we are using. It can run into issues, however, when the true model starts with zero prior probability.

We’d hope that even in these cases, the Bayesian agent ends up doing as well as possible, given their limitations. This paper presents lots of examples of how a Bayesian thinker can go horribly wrong as a result of accidentally excluding the true model from the set of models they are considering.

Here’s the setup: a fair coin is flipped repeatedly while a Bayesian agent watches, trying to infer the coin’s bias. This agent starts off with the correct credence distribution over outcomes: a 50% credence that the coin lands heads and a 50% credence that it lands tails.

However, this agent only has two theories available to them:

T_{1}: The coin lands heads 80% of the time.

T_{2}: The coin lands heads 20% of the time.

Even though the Bayesian doesn’t have access to the true model of reality, they are still able to correctly forecast a 50% chance of the coin landing heads by evenly splitting their credences in these two theories. Given this, we’d hope that they wouldn’t be *too* handicapped and in the long run would be able to do pretty well at predicting the next flip.

Here’s the punchline, before diving into the math: **the Bayesian doesn’t do this.** In fact, their behavior becomes more and more unreasonable the more evidence they get.

They end up spending almost all of their time being virtually certain that the coin is biased, and occasionally flip-flopping in their belief about the direction of the bias. As a result of this, their forecast will *almost always be very wrong*. Not only will it fail to converge to a realistic forecast, but in fact, it will get further and further away from the true value on average. And remember, this is the case *even though convergence is possible!*

Alright, so let’s see why this is true.

First of all, our agent starts out thinking that T_{1} and T_{2} are equally likely. This gives them an initially correct forecast:

P(T_{1}) = 50%

P(T_{2}) = 50%

P(H) = P(H | T_{1}) · P(T_{1}) + P(H | T_{2}) · P(T_{2})

= 80% · 50% + 20% · 50% = 50%

So even though the Bayesian doesn’t have the correct model in their model set, they are able to distribute their credences in a way that will produce the correct forecast. If they’re smart enough, then they should just stay near this distribution of credences in the long run, and in the limit of infinite evidence converge to it. So do they?
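Concretely, the forecast is just the credence-weighted average of the two theories’ predictions. A minimal sketch (variable names are mine):

```python
# Each theory's prediction for heads, and the agent's credence in each theory.
p_heads_given_t1 = 0.8
p_heads_given_t2 = 0.2
credence_t1 = 0.5
credence_t2 = 0.5

# Law of total probability: P(H) = P(H|T1) P(T1) + P(H|T2) P(T2)
forecast = p_heads_given_t1 * credence_t1 + p_heads_given_t2 * credence_t2
print(forecast)  # 0.5 (up to float rounding)
```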

Nope! If they observe n heads and m tails, then the likelihood ratio between the two theories grows *exponentially* in n – m. This means that the credences will almost certainly end up highly lopsided.

In what follows, I’ll write the difference between the number of heads and the number of tails as z.

z = n – m

P(n, m | T_{1}) = 0.8^{n} · 0.2^{m}

P(n, m | T_{2}) = 0.2^{n} · 0.8^{m}

L(n, m | T_{1}) = P(n, m | T_{1}) / P(n, m | T_{2}) = (0.8/0.2)^{n} · (0.2/0.8)^{m} = 4^{z}

L(n, m | T_{2}) = 1 / L(n, m | T_{1}) = 4^{-z}

P(T_{1} | n, m) = 4^{z} / (4^{z} + 1)

P(T_{2} | n, m) = 1 / (4^{z} + 1)

Notice that the final credences *only* depend on z. It doesn’t matter if you’ve done 100 trials or 1 trillion, all that matters is how many more heads than tails there are.
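This z-only dependence is easy to check numerically by computing the posterior from the raw likelihoods for two datasets with the same z but wildly different trial counts. A quick sketch (the function name is mine):

```python
def posterior_t1(n, m):
    """Posterior credence in T1 after n heads and m tails, from 50/50 priors."""
    l1 = 0.8 ** n * 0.2 ** m   # P(n, m | T1)
    l2 = 0.2 ** n * 0.8 ** m   # P(n, m | T2)
    return l1 / (l1 + l2)

# Same z = n - m = 3, very different trial counts -> same posterior.
print(posterior_t1(7, 4))    # 4^3 / (4^3 + 1) = 64/65 ≈ 0.9846
print(posterior_t1(70, 67))  # matches, despite ~12x more trials
```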

Also notice that the final credences are exponential in z. This means that for positive z, P(T_{1} | n, m) goes to 100% exponentially quickly, and for negative z, P(T_{2} | n, m) does the same.

| z | 0 | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|---|---|
| P(T_{1} \| z) | 50% | 80% | 94.12% | 98.46% | 99.61% | 99.90% | 99.98% |
| P(T_{2} \| z) | 50% | 20% | 5.88% | 1.54% | 0.39% | 0.10% | 0.02% |
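These entries come straight from the closed form; a quick sketch to regenerate them:

```python
# Scan z from 0 to 6 and print the closed-form credences in each theory.
for z in range(7):
    p1 = 4 ** z / (4 ** z + 1)   # P(T1 | z)
    print(f"z={z}: P(T1|z) = {p1:.2%}, P(T2|z) = {1 - p1:.2%}")
```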

The Bayesian agent is *almost always* virtually certain of the truth of one of their two theories. But which theory they think is true constantly flip-flops, resulting in a belief system that vacillates helplessly between two suboptimal extremes. This is clearly undesirable behavior for a supposed model of epistemic rationality.

In addition, as the number of coin tosses increases, it becomes less and less likely that z is exactly 0. After N tosses, the typical magnitude of z is on the order of √N. This means that the more evidence they receive, the further on average they will be from the ideal distribution.
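A short simulation makes both pathologies visible: z wanders away from 0 like a random walk, and the agent spends the vast majority of steps nearly certain of one of the two false theories. (The seed, step count, and 99% threshold are my choices for illustration.)

```python
import math
import random

random.seed(0)          # arbitrary seed for reproducibility
N = 100_000             # number of fair-coin flips
z = 0                   # heads minus tails so far
confident_steps = 0     # steps where one theory gets > 99% credence

for _ in range(N):
    z += 1 if random.random() < 0.5 else -1
    # P(T1 | z) = 4^z / (4^z + 1), computed in an overflow-safe way.
    if z >= 0:
        p1 = 1.0 / (1.0 + 4.0 ** (-z))
    else:
        p1 = 4.0 ** z / (4.0 ** z + 1.0)
    if max(p1, 1.0 - p1) > 0.99:
        confident_steps += 1

print(f"final |z|: {abs(z)}  (sqrt(N) is about {math.sqrt(N):.0f})")
print(f"fraction of steps with >99% credence in one theory: {confident_steps / N:.1%}")
```

Since 4^4 = 256, any |z| ≥ 4 already pushes one theory above 99% credence, and a fair-coin walk spends almost all of its time at least that far from the origin.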

Sure, you can object that in this case, it would be dead obvious to just include a T_{3}: “The coin lands heads 50% of the time.” But that misses the point.

The Bayesian agent *had a way out*: they could have noticed after a long time that their beliefs kept swinging from extreme confidence in T_{1} to extreme confidence in T_{2}, doing the opposite of converging to reality. They could have noticed that an even distribution of credences would let them predict the data much better. And if they had done so, they would have ended up *always giving an accurate forecast* of the next outcome.

But they didn’t, and they didn’t because the model that exactly fit reality was not in their model set. Their epistemic system didn’t allow them the flexibility needed to realize that they needed to learn from their failures and rethink their priors.

Reality is very messy and complicated and rarely adheres exactly to the nice simple models we construct. It doesn’t seem crazily implausible that we might end up accidentally excluding the true model from our set of possible models, and this example demonstrates a way that Bayesian reasoning can lead you astray in exactly these circumstances.