Yesterday I described the two-envelopes paradox and laid out its solution. Yay! Problem solved.

Except that it’s not. Because I said that the root of the problem was an improper prior, and that when we instead use a proper prior – any proper prior – we get the right result. But we can propose a variant of the two envelopes problem that specifies a proper prior and *still mandates infinite switching*.

Here it is:

In front of you are two envelopes, each containing some unknown amount of money. You know that one of the envelopes contains twice as much money as the other, but you don’t know which is which, and you can only take one of the two.

In addition, you know that the envelopes were stocked by a mad genius according to the following procedure: He randomly selected an integer n ≥ 0 with probability ⅓ (⅔)^{n}, then stocked the smaller envelope with $2^{n} and the larger with twice that amount.

You have picked up one of the envelopes and are now considering if you should switch your choice.

Let’s verify quickly that the mad genius’s procedure for selecting the amount of money makes sense:

Total probability = ∑_{n} ⅓ (⅔)^{n} = ⅓ · 3 = 1
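If you’d rather check this numerically than via the geometric series formula, here’s a quick sketch (plain Python; the cutoff of 1,000 terms is an arbitrary choice of mine – the tail beyond it is vanishingly small):

```python
# Numerically verify that the mad genius's prior is normalized:
# P(n) = (1/3) * (2/3)**n for n = 0, 1, 2, ...
total = sum((1/3) * (2/3)**n for n in range(1000))
print(total)  # very close to 1; the geometric series gives (1/3) / (1 - 2/3) = 1
```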

Okay, good. Now we can calculate the expected value.

You know that the envelope that you’re holding contains one of the following amounts of money: $1, $2, $4, $8, …

First let’s consider the case in which it contains $1. If so, then you know that your envelope must be the smaller of the two, since there is no $0.50 envelope. So if your envelope contains $1, then you are sure to gain $1 by switching.

Now let’s consider every other case (n ≥ 1). If the amount you’re holding is $2^{n}, there are two ways this could have happened: the genius selected n, making yours the smaller envelope of the pair ($2^{n}, $2^{n+1}), which has probability ⅓ (⅔)^{n}; or he selected n – 1, making yours the larger envelope of the pair ($2^{n–1}, $2^{n}), which has probability ⅓ (⅔)^{n–1}. You are $2^{n} better off if you have the smaller envelope and switch, and $2^{n–1} worse off if you initially had the larger envelope and switch.

So your change in expected value by switching instead of staying is (up to a normalizing constant, which doesn’t affect the sign):

∆EU = $ ⅓ (⅔)^{n} · 2^{n} – $ ⅓ (⅔)^{n–1} · 2^{n–1}

= $ ⅓ (1⅓)^{n} – $ ⅓ (1⅓)^{n–1}

= $ ⅓ (1⅓)^{n–1} (1⅓ – 1)

= $ ⅑ (1⅓)^{n–1} > 0
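Here’s a small numerical sketch of the same calculation (the function name `expected_gain` is mine). Given that you hold $2^{n} with n ≥ 1, the two live possibilities are that the genius selected n – so yours is the smaller envelope of the pair ($2^{n}, $2^{n+1}), with prior probability ⅓ (⅔)^{n} – or that he selected n – 1, so yours is the larger envelope of the pair ($2^{n–1}, $2^{n}), with prior probability ⅓ (⅔)^{n–1}. Conditioning on the observed amount, the expected gain from switching comes out positive for every n:

```python
# Conditional expected gain from switching, given that you hold $2**n (n >= 1).
# The n = 0 case (holding $1) is a guaranteed gain of $1 and is handled in the text.
def expected_gain(n):
    p_smaller = (1/3) * (2/3)**n        # genius picked n: pair (2^n, 2^(n+1))
    p_larger  = (1/3) * (2/3)**(n - 1)  # genius picked n-1: pair (2^(n-1), 2^n)
    total = p_smaller + p_larger        # normalize to condition on the held amount
    gain = 2**n                         # switching up: smaller -> larger
    loss = 2**(n - 1)                   # switching down: larger -> smaller
    return (p_smaller * gain - p_larger * loss) / total

for n in range(1, 6):
    print(n, expected_gain(n))  # works out to 2**(n-1) / 5, positive for every n
```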

So if you are holding $1, you are better off switching. And if you are holding more than $1, you are better off switching. In other words, switching is *always better* than staying, regardless of how much money you are holding.

And yet this exact same argument applies once you’ve switched envelopes, so you are led to an infinite process of switching envelopes back and forth. Your decision theory tells you that as you’re doing this, your expected value is exponentially growing, so it’s worth it to you to keep on switching ad infinitum – it’s not often that you get a chance to generate exponentially large amounts of money!

The problem this time can’t be the prior – we are explicitly given the prior in the problem, and verified that it was normalized just in case.

So what’s going wrong?

***

…

*(once again, recommend that you sit down and try to figure this out for yourself before reading on)*

…

***

Recall that in my post yesterday, I claimed to have proven that no matter *what* your prior distribution over the amount of money in your envelope, you will always have a net zero expected value from switching. But apparently here we have a statement that contradicts that.

The reason is that my proof yesterday applied only to *continuous* prior distributions over all real numbers, not to discrete distributions like the one in this variant. And apparently for discrete distributions, it is no longer the case that your expected value is zero.

The best solution to this problem that I’ve come across is the following: This problem involves comparing infinite utilities, and decision theory can’t handle infinities.

There’s a long and fascinating precedent for this claim, starting with problems like the Saint Petersburg paradox, where an infinite expected value leads you to bet arbitrarily large amounts of money on arbitrarily unlikely scenarios, and including weird issues in Boltzmann brain scenarios. Discussions of Pascal’s wager also end up confronting this difficulty – comparing different levels of infinite expected utility leads you into big trouble.

And in this variant of the problem, both your expected utility for switching and your expected utility for staying are infinite. Both involve summing terms of the form (⅔)^{n} (the probability) times 2^{n} (the amount) – that is, (1⅓)^{n} – and this sum diverges.
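A quick numerical illustration of the divergence (`partial_sum` is my own name for the truncated series): the partial sums of ∑ ⅓ (⅔)^{n} · 2^{n} = ∑ ⅓ (1⅓)^{n} just keep growing.

```python
# Partial sums of sum_n (1/3) * (2/3)**n * 2**n, which (up to constant factors)
# is the expected amount of money in an envelope. Each term equals (1/3)*(4/3)**n,
# so the partial sums grow without bound.
def partial_sum(N):
    return sum((1/3) * (2/3)**n * 2**n for n in range(N))

for N in (10, 50, 100):
    print(N, partial_sum(N))  # grows roughly like (4/3)**N -- the series diverges
```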

This is fairly unsatisfying to me, but perhaps it’s the same dissatisfaction that I feel when confronting problems like Pascal’s wager – a mistaken feeling that decision theory *should* be able to handle these problems, ultimately rooted in a failure to internalize the hidden infinities in the problem.