Before I read this paper, I thought that the continuum hypothesis probably did not have a truth value, and that ZFC + CH and ZFC + ¬CH are systems that equally well capture what it means to be a set. Now I think that the continuum hypothesis is *actually false*, and that ZFC + CH would not be an adequate representation of our intuitive notion of sets.

The argument in the paper is not a formal proof of ¬CH. Instead, it’s a refutation of the following form: “The continuum hypothesis is equivalent in ZFC to this statement which is obviously false. So the continuum hypothesis must be false.”

So what is this obviously false statement? It is the claim that an event that happens with 100% probability is impossible. More exactly: for any function that assigns to each real number a countable set of real numbers, the claim is that it’s impossible for there to exist two real numbers x and y such that x is not in the countable set assigned to y and y is not in the countable set assigned to x.

Why is this obviously false? Well, imagine throwing darts at the real line, randomly selecting a real each time. If we throw one dart, there is a probability of zero that we hit a natural number. Why? Because the naturals are a countable subset of the uncountable reals, and every countable set of reals has measure zero. By the same token, there is a probability of zero that our dart hits a rational number, as the rationals are also only countable.

Now, if we throw a second dart, the probability that we hit a real number that’s some rational distance away from the first real number is also zero. Again, this is just because we’re considering a countable set of candidates from an uncountable set of possibilities. So by the same token, if we assign to the first dart ANY countable set of real numbers, then there is probability zero that the second dart lands within that countable set. And if we also assign a countable set of real numbers to the second dart, there is also probability zero that the first dart happened to fall within the countable set assigned to the second.
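Here’s a quick numerical illustration of the dart picture (my own sketch, not from the paper): sample pairs of “darts” uniformly from [0, 1) and check whether their difference lands exactly on a small-denominator rational, i.e. whether the second dart falls in a finite stand-in for the countable set assigned to the first. A measure-zero target is never hit in practice.

```python
import random
from fractions import Fraction

random.seed(0)

# Finite stand-in for "a countable set assigned to x": all rationals p/q
# with 1 <= q <= 50 and |p/q| <= 1, viewed as offsets from x.
candidates = {float(Fraction(p, q))
              for q in range(1, 51)
              for p in range(-q, q + 1)}

hits = 0
for _ in range(100_000):
    x, y = random.random(), random.random()
    if (y - x) in candidates:  # did dart y land a "small" rational distance from x?
        hits += 1

print(hits)  # expect 0: a probability-zero event doesn't show up in simulation
```

Of course a finite simulation can’t distinguish “probability zero” from “impossible”; that distinction is exactly what the rest of the argument turns on.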

So, to summarize in more general terms: for any function f that maps each real number to a countable set of real numbers, a random choice of x and y will give us x ∉ f(y) and y ∉ f(x) with 100% probability. Now, the statement that is equivalent to the falsity of the continuum hypothesis (i.e. to the existence of a cardinality strictly between |ω| and |𝒫(ω)|) is that for every such f it’s *possible* to find two real numbers x and y that are not in each other’s countable sets.

Notice how weak this is! By randomly choosing real numbers, we will end up with two that are not in each other’s countable sets with probability 100%. But we’re not even saying that the probability-100% event will always happen! We’re just saying that it’s POSSIBLE that it happens! I.e., imagine that by some miraculous coincidence the first two real numbers we choose are within each other’s countable sets (technically, only one of them needs to be in the other’s countable set). After all, probability-zero events *do* happen. Not a problem! Just pick two new real numbers! And if that fails, pick again!

Pretty cool, right? Well, it gets even cooler. We just saw that from the statement “for any function f that maps reals to countable sets of reals, there exist x and y such that x ∉ f(y) and y ∉ f(x)”, you can prove that |𝒫(ω)| > |ω_{1}| (where the cardinality of ω_{1} is by definition the next cardinality after |ω|). We can now consider THREE reals x, y, and z, and mappings from *pairs* of real numbers to countable sets of reals. The statement now becomes: for any such mapping, none of the three reals is in the countable set assigned to the other two. And this entails that |𝒫(ω)| > |ω_{2}|! In other words, we can prove that there are at least TWO cardinalities in between the naturals and the reals!

And for ANY n, if we take the statement that there exist n reals such that none is inside the countable set assigned to the other n−1, then we can use it to prove that |𝒫(ω)| > |ω_{n-1}|. Notice: whatever n we choose, there’s still probability 100% that the n reals don’t “overlap” in this way! So for any n, we’re again only claiming that it’s possible for an event with 100% probability to happen. And if we accept this, then it follows that there isn’t just a single cardinality between |ω| and |𝒫(ω)|. There are definitely at least two. And at least three. And so on for ANY finite n. Which means that there are infinitely many cardinalities between the naturals and the reals!
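For the record, here is one way to write the general pattern down symbolically (my notation, not the paper’s): write [ℝ]^{≤ℵ₀} for the collection of countable subsets of ℝ, and a hat for an omitted argument.

```latex
% A_n: the n-ary "dart" statement (my notation, not the paper's).
\[
  A_n:\quad
  \forall f\colon \mathbb{R}^{\,n-1}\to[\mathbb{R}]^{\le\aleph_0}
  \;\;\exists x_1,\dots,x_n\in\mathbb{R}\;\;
  \forall i\;\bigl(x_i \notin f(x_1,\dots,\widehat{x_i},\dots,x_n)\bigr)
\]
% and the implication discussed in the text:
\[
  A_n \;\Longrightarrow\; |\mathcal{P}(\omega)| > \aleph_{n-1}
\]
```

For n = 2 this is exactly the two-dart statement above, and the implication is the refutation of CH.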

Ok, this is a good point to stop reading if you’re satisfied just knowing what the result says and don’t want the technical details of how EXACTLY to prove it. But if you do, read on! (The proof is actually really short and simple, though it relies on some background knowledge about ordinals.)

I’ll prove the result for n = 2. Suppose that |𝒫(ω)| = ℵ_{1} = |ω_{1}|. Then by the axiom of choice, there’s some well-ordering of the reals of order type ω_{1}. Call this well-ordering ≤. Now define f(x) to be {y | y ≤ x}. Notice that since ω_{1} is the FIRST uncountable ordinal, every x has only countably many predecessors in this ordering, so f(x) is always a countable set, no matter what x is! And since ≤ is total, for any two reals, x ≤ y or y ≤ x, which entails that for any two reals, x ∈ f(y) or y ∈ f(x).

And we’re done! We’ve shown that if the continuum hypothesis is true, then there exists a function f such that for any two reals, x ∈ f(y) or y ∈ f(x). So if we think that there is no such function, then we must deny the continuum hypothesis.
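For reference, the same argument condensed into symbols (my paraphrase, writing ⪯ for the well-ordering):

```latex
% Compact restatement of the n = 2 proof (my paraphrase).
% Assume CH: |\mathcal{P}(\omega)| = \aleph_1. By AC, fix a well-ordering
% \preceq of \mathbb{R} of order type \omega_1, and set
\[
  f(x) = \{\, y \in \mathbb{R} : y \preceq x \,\}.
\]
% Since \omega_1 is the least uncountable ordinal, every proper initial
% segment of \preceq is countable, so f(x) is countable for every x.
% But \preceq is total, so
\[
  \forall x, y \in \mathbb{R}\quad x \in f(y) \;\lor\; y \in f(x),
\]
% contradicting the dart principle. Hence \neg CH.
```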

The random reals argument doesn’t work. It’s not true that for any function from an uncountable set to countable sets there exist an x and a y such that x isn’t in f(y) and y isn’t in f(x). The first uncountable ordinal is a counterexample. The problem is that the dart argument applies with equal force to ω_1 itself: throw darts at the elements of ω_1 (there are uncountably many), where each element is mapped to the countable set of its predecessors. The dart argument says two darts should each miss the other’s set, and yet for any x and y, either x ∈ f(y) or y ∈ f(x). Compare throwing darts at the elements of ω: landing in a fixed finite set out of an infinite set has probability zero, and yet with f(x) the set of predecessors of x, either x is in f(y) or y is in f(x).

This is a really good point. I think it’s wrong for subtle reasons. There’s an important disanalogy between (1) picking one of a finite set of elements out of a countably infinite set of possibilities, and (2) picking one of a countable set of elements out of a set of cardinality 2^ℵ0. In the first case we use an ordinary probability distribution, and in the second case we use a probability density function. In practical terms, this means that any probability distribution over ω has to assign positive probability to some finite sets, whereas a probability distribution over ℝ does not have to assign positive probability to any countable set.
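The disanalogy is easy to see concretely (my illustration, with the geometric distribution p(n) = 2^−(n+1) as an arbitrary example): any probability distribution over the naturals concentrates most of its mass on some finite set, since the masses must sum to 1.

```python
# For any pmf over the naturals, some finite prefix captures almost all
# the mass. Here, the geometric pmf p(n) = 2^-(n+1): we collect naturals
# until the accumulated probability exceeds 0.99.
def geometric_pmf(n):
    return 2.0 ** -(n + 1)

mass, prefix = 0.0, []
n = 0
while mass < 0.99:
    mass += geometric_pmf(n)
    prefix.append(n)
    n += 1

print(prefix, mass)  # [0, 1, 2, 3, 4, 5, 6] 0.9921875
```

So “the dart lands in a finite set” can have probability arbitrarily close to 1 over ω. Nothing of the sort is forced for countable subsets of ℝ under a continuous density, where every countable set gets probability zero.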

The probabilistic claim we need is just that “If an event has 100% probability, then it’s possible”, which I think we do have to accept. That claim entails that for any function from reals to countable sets of reals, there must be an x and a y such that x is not in f(y) and y is not in f(x). It DOESN’T entail that for any function from naturals to finite sets of naturals there must be an x and a y such that x is not in f(y) and y is not in f(x).

Now, does it entail that for any function from a set of size ℵ_1 to countable sets there must be an x and a y such that x is not in f(y) and y is not in f(x)? I think that depends on the continuum hypothesis. If CH is true, then ω_1 is the same size as the reals, so the claim does hold. If CH is false, then we have to ask whether there are probability distributions over sets of cardinality ℵ_1 that assign zero to all subsets of cardinality ℵ_0. And that’s a complicated question about a peculiar cardinality that we don’t know much about. So it’s not so clear that the claim should hold for ω_1.

Summarizing: the crucial claim this argument relies on is: “If an event has 100% probability, then it’s possible.” That premise applies whenever we can say that “x is not in f(y) and y is not in f(x)” has 100% probability. We know this holds for countable subsets of the reals; it fails for finite subsets of the naturals; and on measure-theoretic grounds it’s not so clear whether it holds for countable subsets of ω_1.

Probability arguments don’t work when there are non-measurable sets. If you can throw darts at the real line, you can determine the measure of any set by throwing darts and counting the fraction that land in it. The moment you admit a non-measurable set, you renounce probability arguments forever, and this is what causes the intuition issues with the axiom of choice.
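The dart-counting procedure the commenter describes is just Monte Carlo estimation, and it does recover the measure of any *measurable* set (my sketch, using the set [0, 0.3) ∪ [0.5, 0.6) of measure 0.4 as an arbitrary target); it has no analogue for a non-measurable set, which is the point of the objection.

```python
import random

random.seed(1)

def in_target(x):
    # Membership in [0, 0.3) ∪ [0.5, 0.6), a set of Lebesgue measure 0.4.
    return 0.0 <= x < 0.3 or 0.5 <= x < 0.6

# Throw darts at [0, 1) and count the fraction that land in the set.
n = 200_000
hits = sum(in_target(random.random()) for _ in range(n))
estimate = hits / n
print(estimate)  # close to 0.4
```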

It has nothing to do with the axiom of choice. It’s about the idea of the powerset as a set. Just use a type theory in which powersets are treated as different types; they aren’t ‘sets’ with a pre-existing structure.

To understand this, there is no substitute for Cohen’s book “Set Theory and the Continuum Hypothesis”. The argument you give is due to Sierpiński and was popularized by Freiling in the 80s, but the real results here are the two proofs of Lebesgue measurability: the consistency proof due to Solovay and Cohen, and the determinacy proof.

This is a really interesting point, Ron. Thanks for the book recommendation, I’ll put it on my list to check out.

i don’t understand how this is related to CH… if you replace omega-1 by any other infinite cardinal, the proof still holds, right? let κ be any infinite cardinal less than or equal to beth_1, and let f be a function that assigns to each real a set of reals of cardinality < beth_1. we have a 100% chance of y not being in f(x) and x not being in f(y). then, by the axiom of choice, there is a well-ordering of the reals of order type beth_1. let f(x) be {y | y ≤ x} in this well-order. then clearly y is in f(x) or x is in f(y), and since our order type is exactly beth_1, every f(x) has cardinality < beth_1, which means that the 100% probability event will never happen