I’ve recently been writing a lot about how infinities screw up decision theory and ethics. In his paper Infinite Ethics, Nick Bostrom talks about attempts to apply nonstandard analysis to try to address the problems of infinities. I’ll just briefly describe nonstandard analysis in this post, as well as the types of solutions that he sketches out.
Hyperreals
So first of all, what is nonstandard analysis? It is a mathematical formalism that extends the ordinary number system to include infinitely large numbers and infinitely small numbers. It doesn’t do so just by adding the symbol ∞ to the set of real numbers ℝ. Instead, it adds infinitely many different infinitely large numbers, as well as infinitely many infinitesimal numbers, and proceeds to extend the ordinary operations of addition and multiplication to these numbers. The new number system is called the hyperreals.
So what actually are hyperreals? How does one do calculations with them?
A hyperreal number can be represented as an infinite sequence of real numbers. Here are some examples:
(3, 3, 3, 3, 3, …)
(1, 2, 3, 4, 5, …)
(1, ½, ⅓, ¼, ⅕, …)
It turns out that the first of these examples is just the hyperreal version of the ordinary number 3, the second is an infinitely large hyperreal, and the third is an infinitesimal hyperreal. Weirded out yet? Don’t worry, we’ll explain how to make sense of this in a moment.
So every ordinary real number is associated with a hyperreal in the following way:
N = (N, N, N, N, N, …)
What if we just switch the first number in this sequence? For instance:
N’ = (1, N, N, N, N, …)
It turns out that this change doesn’t change the value of the hyperreal. In other words:
N = N’
(N, N, N, N, N, …) = (1, N, N, N, N, …)
In general, if you take any hyperreal number and change a finite number of the entries in its sequence, you end up with the same number you started with. So, for example,
3 = (3, 3, 3, 3, 3, …)
= (1, 5, 99, 3, 3, 3, …)
= (3, 3, 0, 0, 0, 0, 3, 3, …)
The general rule for when two hyperreals are equal relies on the concept of a free ultrafilter, which is a little above the level that I want this post to be at. Intuitively, however, the idea is that for two hyperreals to be equal, the set of places at which their sequences differ must be either finite or one of certain special kinds of infinite sets (I’ll leave these “special kinds” vague for exposition purposes).
Adding and multiplying hyperreals is super simple:
(a1, a2, a3, …) + (b1, b2, b3, …) = (a1 + b1, a2 + b2, a3 + b3, …)
(a1, a2, a3, …) · (b1, b2, b3, …) = (a1 · b1, a2 · b2, a3 · b3, …)
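As a quick illustration, here’s a toy sketch of these rules in Python (my own construction, not any standard library): a hyperreal is modeled as the function n ↦ aₙ giving its sequence. This ignores the ultrafilter entirely, so it captures the componentwise arithmetic but not the equality relation.

```python
from fractions import Fraction

# Toy model of a hyperreal: the function n -> a_n giving its sequence,
# with indices starting at 1. Arithmetic is componentwise, as above.

def const(c):
    """The hyperreal image of an ordinary real number c: (c, c, c, ...)."""
    return lambda n: c

def add(a, b):
    return lambda n: a(n) + b(n)

def mul(a, b):
    return lambda n: a(n) * b(n)

def first_terms(a, k=5):
    """The first k entries of a hyperreal's sequence."""
    return [a(n) for n in range(1, k + 1)]

omega = lambda n: n               # (1, 2, 3, 4, ...)
eps   = lambda n: Fraction(1, n)  # (1, 1/2, 1/3, ...)

print(first_terms(add(const(3), omega)))        # [4, 5, 6, 7, 8]
print(first_terms(mul(omega, eps)) == [1] * 5)  # True: omega * eps = 1
```

Note that multiplying (1, 2, 3, …) by (1, ½, ⅓, …) gives (1, 1, 1, …) = 1, a first hint that the infinitesimal behaves exactly like the reciprocal of the infinite number.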
Here’s something that should be puzzling:
A = (0, 1, 0, 1, 0, 1, …)
B = (1, 0, 1, 0, 1, 0, …)
A · B = ?
Apparently, the answer is that A · B = (0, 0, 0, …) = 0. Since the hyperreals form a field, this means that at least one of A or B must also be 0. But both of them differ from 0 in an infinity of places! Subtleties like this are why we need to introduce the idea of a free ultrafilter, which allows certain types of equivalences between infinitely differing sequences.
Anyway, let’s go on to the last property of hyperreals I’ll discuss:
(a1, a2, a3, …) < (b1, b2, b3, …)
if
an ≥ bn for only a finite number of values of n
(This again has the same weird infinite exceptions as before, which we’ll ignore for now.)
Now at last we can see why (1, 2, 3, 4, …) is an infinite number:
Choose any real number N
N = (N, N, N, N, …)
ω = (1, 2, 3, 4, …)
So ωn ≤ Nn for n = 1, 2, 3, …, floor(N)
and ωn > Nn for all other n
This means that ω is larger than N, because there are only a finite number of members of the sequence for which Nn is greater than or equal to ωn. And since this is true for any real number N, ω must be larger than every real number! In other words, you can now give an answer to somebody who asks you to name a number that is bigger than every real number!
ε = (1, ½, ⅓, ¼, ⅕, …) is an infinitesimal hyperreal for a similar reason:
Choose any real number N > 0
N = (N, N, N, N, …)
ε = (1, ½, ⅓, ¼, ⅕, …)
So εn ≥ Nn for n = 1, 2, …, floor(1/N)
and εn < Nn for all other n
Once again, ε is at least as large as N in only a finite number of places, and is smaller in the infinitely many others. So ε is smaller than every real number greater than 0.
In addition, ε is larger than the sequence (0, 0, 0, …) at every index, so ε is larger than 0. A number that is greater than 0 but smaller than every positive real is an infinitesimal.
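To make these comparisons concrete, here’s a small sketch (with a made-up helper name, exceptions_ge) that lists the finite “exception set” explicitly: the finitely many indices at which the wrong inequality holds. We obviously can’t inspect infinitely many indices, but we can see the exceptions run out.

```python
from fractions import Fraction

# Comparison rule (ignoring ultrafilter subtleties): a < b when
# a_n >= b_n holds at only finitely many indices n.

def exceptions_ge(a, b, upto=10_000):
    """Indices n <= upto at which a_n >= b_n."""
    return [n for n in range(1, upto + 1) if a(n) >= b(n)]

omega  = lambda n: n                # (1, 2, 3, ...)
eps    = lambda n: Fraction(1, n)   # (1, 1/2, 1/3, ...)
real_N = lambda n: 7.3              # the hyperreal image of 7.3

# 7.3 >= omega_n only at n = 1..7, so omega > 7.3:
print(exceptions_ge(real_N, omega))          # [1, 2, 3, 4, 5, 6, 7]

# eps_n >= 1/4 only at n = 1..4, so eps < 1/4:
print(exceptions_ge(eps, lambda n: 0.25))    # [1, 2, 3, 4]
```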
Okay, done introducing hyperreals! Let’s now see how this extended number system can help us with our decision theory problems.
Saint Petersburg Paradox
One standard example of weird infinities in decision theory is the St Petersburg Paradox, which I haven’t talked about yet on this blog. I’ll use this thought experiment as a template for the discussion. Briefly, then, imagine a game that works as follows:
Game 1
Round 1: Flip a coin.
If it lands H, then you get $2 and the game ends.
If it lands T, then you move on to Round 2.
Round 2: Flip the coin again.
If it lands H, then you get $4 and the game ends.
If T, then move on to Round 3.
Round 3: Flip a coin.
If it lands H, then you get $8 and the game ends.
If it lands T, then you move on to Round 4.
(et cetera to infinity)
This game looks pretty nice! You are guaranteed at least $2, and your payout doubles every time the coin lands T. The question is, how nice really is the game? What’s the maximum amount that you should be willing to pay to play?
Here we run into a problem. To calculate this, we want to know what the expected value of the game is – how much you make on average. We do this by adding up the product of each outcome and the probability of that outcome:
EV = ½ · $2 + ¼ · $4 + ⅛ · $8 + …
= $1 + $1 + $1 + …
= ∞
Apparently, the expected payout of this game is infinite! This means that in order to make a profit, you should be willing to give literally all of your money to play a single game! This should seem wrong… If you pay $1,000,000 to play the game, then the only way that you make a profit is if the coin lands tails at least nineteen times in a row before finally landing heads. Does it really make sense to risk all of this money on such a tiny chance?
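The divergence is easy to sanity-check numerically: round n pays $2^n with probability 1/2^n, so each round adds exactly $1 to the expectation, and the truncated expected value after N rounds is exactly $N. A minimal sketch:

```python
# Truncated expected value of the St Petersburg game: each round n
# contributes (2**n) * (1/2**n) = $1, so the sum grows without bound.

def truncated_ev(rounds):
    return sum((2 ** n) * 0.5 ** n for n in range(1, rounds + 1))

print([truncated_ev(N) for N in (1, 2, 3, 10, 100)])
# [1.0, 2.0, 3.0, 10.0, 100.0]
```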
The response to this is that while the chance that this happens is of course tiny, the payout if it does happen is enormous – you stand to double, quadruple, octuple, (et cetera) your money. In this case, the paradox seems to really be a result of the failure of our brains to intuitively comprehend exponential growth.
There’s an even stronger reason to be unhappy with the St Petersburg Paradox. Say that instead of starting with a payout of $2 and doubling each time from there, you had started with a payout of $2000 and doubled from there.
Game 2
Round 1: Flip a coin.
If it lands H, then you get $2000 and the game ends.
If it lands T, then you move on to Round 2.
Round 2: Flip the coin again.
If it lands H, then you get $4000 and the game ends.
If T, then move on to Round 3.
Round 3: Flip a coin.
If it lands H, then you get $8000 and the game ends.
If it lands T, then you move on to Round 4.
(et cetera to infinity)
This alternative game must be better than the initial game – after all, no matter how many times the coin lands T before finally landing H, your payout is 1000 times larger than it would have been previously. So if you’re playing the first of the two games, then you should always wish that you were playing the second, no matter how many times the coin ends up landing T.
But the expected value comparison doesn’t grant you this! Both games have an infinite expected value, and infinity is infinity. We can’t have one infinity being larger than another infinity, right?
Enter the hyperreals! We’ll turn the expected value of the first game into a hyperreal as follows:
EV1 = ½ · $2 = $1
EV2 = ½ · $2 + ¼ · $4 = $1 + $1 = $2
EV3 = ½ · $2 + ¼ · $4 + ⅛ · $8 = $1 + $1 + $1 = $3
EV = (EV1, EV2, EV3, …)
= $(1, 2, 3, …)
Now we can compare it to the second game:
Game 1: $(1, 2, 3, …) = ω
Game 2: $(1000, 2000, 3000, …) = $1000 · ω
So hyperreals allow us to compare infinities, and justify why Game 2 has a 1000 times larger expected value than Game 1!
Let me give another nice result of this type of analysis. Imagine Game 1′, which is identical to Game 1 except for the first payout, which is $4 instead of $2. We can calculate the expected values:
Game 1: $(1, 2, 3, …) = ω
Game 1′: $(2, 3, 4, …) = $1 + ω
The result is that Game 1′ gives us an expected increase of just $1. And this makes perfect sense! After all, the only difference between the games is if they end in the first round, which happens with probability ½. And in this case, you get $4 instead of $2. The expected difference between the games should therefore be ½ · $2 = $1! Yay hyperreals!
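All three comparisons can be checked numerically by computing the sequences of truncated expected values that we’re treating as hyperreals (ev_sequence is a helper name I’m making up here):

```python
# Truncated expected values for Games 1, 2, and 1'. These sequences are
# exactly the hyperreals being compared: omega, 1000*omega, and 1+omega.

def ev_sequence(payoff, length=6):
    evs, total = [], 0.0
    for n in range(1, length + 1):
        total += payoff(n) * 0.5 ** n   # game ends at round n w.p. 1/2^n
        evs.append(total)
    return evs

game1      = lambda n: 2 ** n                    # -> (1, 2, 3, ...)
game2      = lambda n: 1000 * 2 ** n             # -> (1000, 2000, ...)
game1prime = lambda n: 4 if n == 1 else 2 ** n   # -> (2, 3, 4, ...)

print(ev_sequence(game1))        # [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
print(ev_sequence(game2))        # [1000.0, 2000.0, ..., 6000.0]
print(ev_sequence(game1prime))   # [2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
```

At every index, Game 2’s sequence is 1000 times Game 1’s, and Game 1′’s sequence is Game 1’s plus 1, mirroring the hyperreal identities above.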
Of course, this analysis still ends up concluding that the St Petersburg game does have an infinite expected payout. Personally, I’m (sorta) okay with biting this bullet and accepting that if your goal is to maximize money, then you should in principle give any arbitrary amount to play the game.
But what I really want to talk about are variants of the St Petersburg paradox where things get even crazier.
Getting freaky
For instance, suppose that instead of the initial game setup, we have the following setup:
Game 3
Round 1: Flip a coin.
If it lands H, then you get $2 and the game ends.
If it lands T, then you move on to Round 2.
Round 2: Flip the coin again.
If it lands H, then you pay $4 and the game ends.
If T, then move on to Round 3.
Round 3: Flip the coin again.
If it lands H, then you get $8 and the game ends.
If it lands T, then you move on to Round 4.
Round 4: Flip the coin again.
If it lands H, then you pay $16 and the game ends.
If it lands T, then you move on to Round 5.
(et cetera to infinity)
The only difference now is that if the coin lands H on any even round, then instead of getting money that round, you have to pay that money back to the dealer! Clearly this is a less fun game than the last one. How much less fun?
Here things get really weird. If we only looked at the odd rounds, then the expected value is ∞.
EV = ½ · $2 + ⅛ · $8 + …
= $1 + $1 + …
= ∞
But if we look at the even rounds, then we get an expected value of -∞!
EV = ¼ · -$4 + 1/16 · -$16 + …
= -$1 + -$1 + …
= -∞
We find the total expected value by adding together these two. But can we add ∞ to -∞? Not with ordinary numbers! Let’s convert our numbers to hyperreals instead, and see what happens.
EV = $(1, 0, 1, 0, …)
This time, our result is a bit less intuitive than before. As a result of the ultrafilter business we’ve been avoiding talking about (specifically, supposing the ultrafilter contains the set of odd-indexed positions), we can use the following two equalities:
(1, 0, 1, 0, …) = 1
(-1, 0, -1, 0, …) = -1
This means that the expected value of Game 3 is $1. In addition, if Game 3 had started with you having to pay $2 for the first round rather than getting $2, then the expected value would be -$1.
So hyperreal decision theory recommends that you play the game, but only buy in if it costs you less than $1.
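Computing Game 3’s truncated expected values directly: heads on round n pays +$2^n on odd rounds and -$2^n on even rounds, each reached with probability 1/2^n, so the rounds contribute +$1, -$1, +$1, … and the partial sums alternate between $1 and $0.

```python
# Game 3's truncated expected values: the rounds contribute
# +1, -1, +1, -1, ..., so the partial sums alternate 1, 0, 1, 0, ...

def ev_sequence(length=8):
    evs, total = [], 0.0
    for n in range(1, length + 1):
        payoff = 2 ** n if n % 2 == 1 else -(2 ** n)
        total += payoff * 0.5 ** n
        evs.append(total)
    return evs

print(ev_sequence())   # [1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0]
```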
Now, the last thought experiment I’ll present is the weirdest of them.
Game 4
Round 1: Flip a coin.
If it lands H, then you pay $2 and the game ends.
If it lands T, then you move on to Round 2.
Round 2: Flip the coin again.
If it lands H, then you get $2 and the game ends.
If T, then move on to Round 3.
Round 3: Flip the coin again.
If it lands H, then you pay $2.67 and the game ends.
If it lands T, then you move on to Round 4.
Round 4: Flip the coin again.
If it lands H, then you get $4 and the game ends.
If it lands T, then you move on to Round 5.
Round 5: Flip the coin again.
If it lands H, then you pay $6.40 and the game ends.
If it lands T, then you move on to Round 6.
(et cetera to infinity)
The pattern is that the payoff on the nth round is (-2)^n / n. Since the game ends on round n with probability 1/2^n, the expected value of the nth round is (-1)^n / n. This sum converges as follows:
∑n=1∞ (-1)^n / n = -ln(2) ≈ -0.69
But by Riemann’s rearrangement theorem, it turns out that by rearranging the terms of this sum, we can make it add up to any amount that we want! (This follows from the fact that the series converges only conditionally – the sum of the absolute values of the terms is infinite.)
This means that not only is the expected value for this game undefined, but it can be justified having every possible value. Not only do we not know the expected value of the game, but we don’t know whether it’s a positive game or a negative game. We can’t even figure out if it’s a finite game or an infinite game!
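The rearrangement phenomenon can be demonstrated directly: a greedy reordering of the terms (-1)^n / n – positives +1/2, +1/4, … and negatives -1, -1/3, … – steers the partial sums toward any target we choose. This greedy scheme is the standard construction used to prove the rearrangement theorem; the function name below is my own.

```python
# Greedy rearrangement of the terms of sum (-1)^n / n: whenever the
# running total is below the target, spend the next unused positive
# term; otherwise spend the next unused negative term.

def rearranged_sum(target, num_terms=100_000):
    total, k_pos, k_neg = 0.0, 1, 1
    for _ in range(num_terms):
        if total < target:
            total += 1 / (2 * k_pos)       # next positive term +1/(2k)
            k_pos += 1
        else:
            total -= 1 / (2 * k_neg - 1)   # next negative term -1/(2k-1)
            k_neg += 1
    return total

print(rearranged_sum(3.0))    # within ~0.01 of 3.0
print(rearranged_sum(-2.0))   # within ~0.01 of -2.0
```

Same terms, different order, totally different "sum" – exactly the pathology that makes Game 4’s expected value undefined in standard analysis.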
Let’s apply hyperreal numbers.
EV1 = -$1
EV2 = $(-1 + ½) = -$0.50
EV3 = $(-1 + ½ – ⅓) = -$0.83
EV4 = $(-1 + ½ – ⅓ + ¼) = -$0.58
So EV = $(-1.00, -0.50, -0.83, -0.58, …)
Since this series converges to -ln(2) ≈ -$0.69, alternating between values above and below it, the expected value is -$0.69 plus a particular infinitesimal ε (whose sign depends on the ultrafilter). So we get a precisely defined expectation value! One could imagine empirically estimating this value by running large numbers of simulations.
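The oscillating convergence is easy to verify numerically: the partial sums straddle -ln(2), approaching it from above and below.

```python
import math

# Game 4's truncated expected values: round n contributes (-1)**n / n,
# so the partial sums oscillate ever more tightly around -ln(2).

def ev_sequence(length):
    evs, total = [], 0.0
    for n in range(1, length + 1):
        total += (-1) ** n / n
        evs.append(total)
    return evs

print([round(e, 2) for e in ev_sequence(6)])
# [-1.0, -0.5, -0.83, -0.58, -0.78, -0.62]
print(-math.log(2))   # -0.693...
```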
A weirdness about all of this is that the order in which you count up your expected value is extremely important. This is a general property of infinite summation, and seems like a requirement for consistent reasoning about infinities.
We’ve seen that hyperreal numbers can be helpful in providing a way to compare different infinities. But hyperreal numbers are only the first step into the weird realm of the infinite. The surreal number system is a much more powerful system that contains a copy of the hyperreals. In a future post, I’ll talk about the highly surreal decision theory that results from applying these numbers.
I’m confused about Game 4.
The application of hyperreals looks equivalent to taking the limit of partial sums (as usual for infinite sums), and still looks sensitive to the ordering of the terms.
AFAICT, the expected value really is undefined, but the sample means still converge to -ln(2). This isn’t the case for e.g. Cauchy, for which a sample mean of any sample size is distributed as another Cauchy with the same parameters. (Why is this distribution different? Why is this ordering of terms special?)
Wiki makes these claims about exactly the distribution for Game 4, but with insufficient explanation: https://en.wikipedia.org/wiki/Law_of_large_numbers#Differences_between_the_weak_law_and_the_strong_law
Hyperreal numbers are different from limits of partial sums because they allow us to make careful comparisons between different partial sums whose limits go to infinity or to 0. Consider, for instance, the sequences (a_N) = (sum from n=1 to N of n) and (b_N) = (sum from n=1 to N of 2n). The limits of these sequences are the same in standard analysis: a_∞ = b_∞ = ∞, and there is no way to make sense of the claim that b_∞ is twice as large an infinity as a_∞.
If we convert these limits into hyperreals, then we do have a rigorous way to talk about these two sequences being different. We are able to say that one infinity is larger than another by 2. More generally, we can talk about one infinity being equal to the square root of another, or precisely define the infinitesimal quantity that results from dividing 3 by ∞, et cetera. None of this is even coherent if we’re talking purely about limits of partial sums.
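As a concrete check of the componentwise comparison (plain Python, no hyperreal machinery needed):

```python
# The two divergent sums compared componentwise: as limits both are just
# "infinity", but as sequences b_N is exactly twice a_N at every index.

a = [sum(range(1, N + 1)) for N in range(1, 8)]          # a_N = 1 + 2 + ... + N
b = [sum(2 * n for n in range(1, N + 1)) for N in range(1, 8)]

print(a)                              # [1, 3, 6, 10, 15, 21, 28]
print(b)                              # [2, 6, 12, 20, 30, 42, 56]
print([y / x for x, y in zip(a, b)])  # 2.0 at every index
```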
Bostrom talks about “infinitarian paralysis”, where a standard decision theorist is unable to distinguish between clearly different games when they have infinite expected values. One way to think about hyperreals is that they allow a precise method to break the paralysis and still make choices in the face of infinities. I think I’d also make the further claim that using this method will make you systematically better off than just using ordinary decision theory.
***
On ordering: by construction, you cannot in general infinitely reorder the numbers in the sequence for a given hyperreal number and end up with the same number. This is again another difference between hyperreals and limits of partial sums – infinite reordering of terms is usually assumed to be harmless when talking about sums, but is by definition NOT valid for hyperreals. So all of the hyperreals that I used in this post are precisely defined, exactly because infinite reordering is not an allowed operation in the formalism.
This is maybe unintuitive, but I think it’s a really important feature of the system. Most of the paradoxes and problems of infinite sums arise from this idea that you can just reorder all the terms in your sum and not change anything significant. I think of this as basically an intuition we carry over from finite sums that we’re better off without, and that prevents us from being able to coherently discuss infinities.
Finally! (sorry for the long reply) One seemingly ad-hoc aspect of all of this is the specific way that we convert our expected values to hyperreals. We basically took the simplest possible method (just make the hyperreal sequence look exactly like the sequence you would have been summing up previously). Once we have made this choice, then we have a consistent system for comparing different infinities and different expected values.
If instead we had decided to do something else, like taking the original sequence of sums, switching the positions of every pair of elements, and then using that for the hyperreal sequence, then we would have gotten different answers. I’m not sure exactly what to say about this, besides that once you choose a single procedure and apply it uniformly, you get a consistent framework that removes many of the standard problems with infinities. The procedure we used has some obvious benefits over other more complicated procedures, but in the end I’m not sure how exactly to defend it besides just by treating it as axiomatic.
An addendum on the broad overview of why hyperreals are useful in a way that ordinary number systems are not:
Somebody that is a hedonic utilitarian and wants to maximize total happiness is faced with real decision theoretic difficulties in an infinite universe – all actions they could take make only a finite difference to the total happiness of the universe. Hyperreals are one way to solve this problem.
Somebody that (1) is considering two actions, one of which gives them +1 utils and the other of which is neutral, and (2) expects to live a future infinity of happy moments, faces the exact same difficulties with making sense of why Action 1 is better than Action 2. Using hyperreals, they are allowed to say that Action 1 makes the universe exactly +1 util better than Action 2.