A Compact Proof of the Compactness Theorem

I’ve written up a proof of the compactness theorem before, but I recently looked it over and think that the proof can be expressed more, heh heh, compactly than before.

So, what is the compactness theorem? It is the statement that if a set of statements is finitely satisfiable (each of its finite subsets has a model), then it’s satisfiable. The converse of this (a satisfiable set of statements is also finitely satisfiable) is trivial: a model of a set of sentences is also a model of every subset of that set. The following proof will be for propositional logic, but can be easily extended to first order logic.

The short version

Suppose that a set of sentences A is finitely satisfiable. We’ll extend A to a larger set B by giving A an “opinion” on every sentence. We can build up this extension in a series of stages: B0 is just A. Now, take any sentence a. If B0 ⋃ {a} is finitely satisfiable, define B1 to be B0 ⋃ {a}. Otherwise, define B1 to be B0 ⋃ {¬a}. Either way, B1 will be finitely satisfiable, because B0 cannot be inconsistent with both a and ¬a. When we’ve gone through every sentence, we take the union of all these extensions to form B. B is finitely satisfiable, since every finite subset of B is also a finite subset of one of the extensions that make it up. Now, we define a truth assignment V that assigns True to each propositional variable p if and only if p is in B. V satisfies B (which you can show by induction on sentences), and since A is a subset of B, V also satisfies A. So A is satisfiable.

The long(er) version

Suppose that a set of sentences A is finitely satisfiable. If a set of sentences A’ is finitely satisfiable, then for any sentence a, at least one of A’ ⋃ {a} and A’ ⋃ {¬a} is finitely satisfiable. (If neither were, then we’d have two finite subsets of A’, one that entails ¬a and the other that entails a, and the union of these would be an unsatisfiable finite subset of A’.) So given any well-ordering of the sentences of the language, we can extend A one sentence at a time, maintaining finite satisfiability at each step. The union of all these extensions, call it B, is still finitely satisfiable, because any finite subset of B is also a finite subset of one of the extensions. B is also complete: for any sentence b either b ∈ B or ¬b ∈ B.

Now, define the truth assignment V as follows: for any atomic sentence b, V(b) = True iff b ∈ B. V satisfies B, which we prove by induction on sentences:

  • If b is an atomic sentence, then V(b) = True iff b ∈ B, by construction.
  • If b = ¬c for some c (for which V(c) = True iff c ∈ B), then V(b) = True iff V(c) = False iff c ∉ B iff b ∈ B.
  • If b = c ∧ d for some c and d that satisfy the induction hypothesis, then V(b) = V(c ∧ d) = True iff V(c) = V(d) = True iff c ∈ B and d ∈ B. If both c and d are in B, then ¬(c ∧ d) can’t be in B by finite satisfiability, so c ∧ d ∈ B by completeness of B. And if c ∧ d ∈ B, then neither ¬c nor ¬d can be in B by finite satisfiability. So c and d are both in B by completeness of B.

This proof by induction covers the connectives ¬ and ∧, with which you can express all other connectives, so this shows that V satisfies B. And since B is a superset of A, V satisfies A as well. This shows that any finitely satisfiable set of sentences is also satisfiable. Note that we used the axiom of choice implicitly with the instruction to well-order the language. As the language can be any size, this is equivalent to the well-ordering principle, which is equivalent in ZF to the axiom of choice. The same sort of assumption arises in the proof of the completeness theorems for first-order and propositional logics. If you reject choice, then you should also be skeptical that propositional logic and first-order logics are complete!
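The stage-by-stage construction above can be made concrete for a toy finite propositional language, where finite satisfiability and satisfiability coincide and can be checked by brute force. This is a minimal sketch; the tuple encoding and all helper names are my own choices, not anything standard:

```python
from itertools import product

# Sentences are nested tuples built from variable names (strings) with
# ('not', s) and ('and', s, t). Purely illustrative encoding.

def evaluate(sentence, v):
    if isinstance(sentence, str):          # a propositional variable
        return v[sentence]
    if sentence[0] == 'not':
        return not evaluate(sentence[1], v)
    if sentence[0] == 'and':
        return evaluate(sentence[1], v) and evaluate(sentence[2], v)

def variables(sentence):
    if isinstance(sentence, str):
        return {sentence}
    return set().union(*(variables(part) for part in sentence[1:]))

def satisfiable(sentences):
    # Brute-force check over all truth assignments to the variables.
    vs = sorted(set().union(set(), *(variables(s) for s in sentences)))
    return any(all(evaluate(s, dict(zip(vs, bits))) for s in sentences)
               for bits in product([False, True], repeat=len(vs)))

def extend(A, all_sentences):
    # Give A an "opinion" on every sentence, preserving satisfiability
    # at each stage, just as in the proof.
    B = list(A)
    for a in all_sentences:
        B.append(a if satisfiable(B + [a]) else ('not', a))
    return B
```

Running `extend([('and', 'p', 'q')], ['p', 'q', ('not', 'p')])` builds a complete, satisfiable extension, and the truth assignment "V(p) = True iff p ∈ B" can then be checked against every sentence of B, mirroring the induction.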

ZFC, and getting the right answer by accident

There’s something I’m confused about with regards to ZFC. We already know that no first order theory allows you to perfectly capture the semantics of concepts like the natural numbers or finitely many. The best we can do if restricted to first-order logic (which is where most of modern mathematics tries to stay) is to have theories with surrogates for those concepts.

For instance, there’s this object called ω that stands in for the natural numbers in ZFC, even though there are all these models of ZFC in which ω contains many more objects besides the natural numbers (in fact, uncountably many in some models). And “X is finite” is replaced by “there’s a bijection from X to an element of ω.”

In formalizing these surrogate concepts, we make sure to not include any false statements about the concepts, which gives us a type of soundness of our surrogate concept. I.e., ZFC isn’t going to prove things about ω that are false of the set of natural numbers, because in one model ω is the set of natural numbers.

But it doesn’t give us completeness. We aren’t going to be able to prove ALL the true first-order sentences about the natural numbers, or about the concept of finiteness. (This is of course just a product of the first-order nature of the theories in which we’re defining these surrogates.)

So in each of these cases there will be true statements involving the concept that our theory will be agnostic about. We can look for “really good” surrogates, surrogates which allow us to prove most of the important and interesting true statements about the concept, and only fail in very abstract and unusual settings. The degree to which we can make good surrogates is the degree to which a first-order thinker can make sense of and usefully apply non-first-order concepts. (A first-order thinker being one that has no built-in understanding of non-first-order concepts.)

So one general question is: how good is a given surrogate? And another is, how do we know based on the axioms how good of a surrogate we’ll be getting? This is the thing I’m confused about.

In ZFC, there’s this weird phenomenon of the theory “getting the right answers accidentally.” It’s a little tough to put into words, but here’s an example:

ZFC can prove that |P(X)| > |X| for all X. So for instance ZFC can prove that |P(ω)| > |ω|. Meaning that ZFC can prove that the power set of the naturals is uncountable. But ZFC has countable models! (As does every first-order theory.) In those models, the power set of the naturals is NOT uncountable.

First order logic is sound, so what’s going on ISN’T that ZFC is proving a sentence that’s false in some of its models. It’s that the sentence is false in that model, if interpreted to be about the actual concept, and true in that model if interpreted to be about the surrogate concept. The surrogate for “P(ω) is uncountable” in ZFC is “there’s no bijection from P(ω) to ω”. And in the countable models of ZFC, the bijection that would prove the equinumerosity of ω and P(ω) is missing! So in those models, it’s actually TRUE that “there’s no bijection from P(ω) to ω”, even though P(ω) and ω really do have the same number of elements.

This is the sort of subtle nonsense you get used to in mathematical logic. Two accidents cancel out: first ZFC makes the mistake of having models where P(ω) is countable, and second it makes the mistake of losing track of the bijection from P(ω) to ω. And as a result, ZFC is able to prove the correct thing.

This seems really weird to me, and it’s an unsettling way for a supposed foundation of mathematics to be getting the right answers. This type of thing happens a lot, and it feels like we keep getting “lucky” in that our unintended interpretations of a sentence interfere with each other and cancel out their problematic features. It makes me wonder why we should be confident that ZFC will continue giving us the right answers (as opposed to being agnostic on important basic questions). And in fact we do have some examples of important and basic questions that ZFC is agnostic on, most dramatically whether |X| < |Y| implies |P(X)| < |P(Y)|!

It’s not that I doubt that ZFC’s surrogates for non-first order concepts end up allowing us to prove an enormous amount of the true facts about these concepts, because they clearly do. But is there some principled reason we can give for WHY the axioms of ZFC lead us to such a robust framework for mathematics in which we can prove many true statements?

One thing that suggests this is the power of the axiom of replacement. It’s just an extremely strong and useful axiom that allows us to prove a whole lot. But that doesn’t seem to help explain the “right-by-accident” phenomenon. So what does?

Polish Notation and Garden-Path Sentences

Polish notation is a mathematical notation system that allows you to eliminate parentheses without ambiguity. It’s called “Polish” because the name of its Polish creator, Jan Łukasiewicz, was too difficult for people to pronounce.

A motivating example: Suppose somebody says “p and q implies r”. There are two possible interpretations of this: “(p and q) implies r” and “p and (q implies r)”. The usual way to disambiguate these two is to simply add in parentheses like I just did. Another way is to set an order-of-operations convention, like that “and” always applies before “implies”. This is what’s used in basic algebra, and what allows you to write 2 + 2 ⋅ 4 without any fear that you’ll be interpreted as meaning (2 + 2) ⋅ 4.

Łukasiewicz’s method was to make all binary connectives into prefixes. So “A and B” becomes “and A B”, “P implies Q” becomes “implies P Q”, and so on. In this system, “(p and q) implies r” translates to “implies and p q r”, and “p and (q implies r)” translates to “and p implies q r”. Since the two expressions are different, there’s no need for parentheses! And in general, no ambiguity ever arises from lack of parentheses when using Polish notation.

If this is your first time encountering Polish notation, your first reaction might be to groan and develop a slight headache. But there’s something delightfully puzzling about reading an expression written in Polish notation and trying to understand what it means. Try figuring out what this means: “implies and not p or q s r”. Algebra can be written in Polish notation just as easily, removing the need for both parentheses AND order-of-operations. “2 + 2 = 4” becomes “+ 2 2 = 4”, or even better, “= + 2 2 4”.
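If you’d rather decode such expressions mechanically, here’s a minimal Polish-notation parser sketch. The connective set and all helper names are my own choices for illustration:

```python
# In Polish notation each connective announces how many arguments follow,
# so an expression can be read off the front of a token list recursively.

ARITY = {'not': 1, 'and': 2, 'or': 2, 'implies': 2}

def parse(tokens):
    # Consume and return one expression from the front of the token list.
    head = tokens.pop(0)
    if head in ARITY:
        return (head,) + tuple(parse(tokens) for _ in range(ARITY[head]))
    return head  # anything else is a propositional variable

def to_infix(tree):
    # Render the parse tree back in fully-parenthesized infix form.
    if isinstance(tree, str):
        return tree
    op, *args = tree
    if op == 'not':
        return f"(not {to_infix(args[0])})"
    return f"({to_infix(args[0])} {op} {to_infix(args[1])})"
```

For instance, `to_infix(parse("implies and p q r".split()))` gives `"((p and q) implies r)"`, while `"and p implies q r"` comes out as `"(p and (q implies r))"`. You can also use it to check your answer to the puzzle above.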

Other binary connectives can be treated in Polish notation as well, creating gems like: “If and you’re happy you know it clap your hands!” “When life is what happens you’re busy making plans.” “And keep calm carry on.” “Therefore I think, I am.” (This last one is by of the author the Meditations). Hopefully you agree with me that these sentences have a nice ring to them, though the meaning is somewhat obscured.

But putting connectives in front of the two things being connected is not unheard of. Some examples in English: “ever since”, “because”, “notwithstanding”, “whenever”, “when”, “until”, “unless”. Each of these connects two sentences, and yet can appear in front of both. When we hear a sentence like “Whenever he cheated on a test the professor caught him”, we don’t have any trouble parsing it. (And presumably you had no trouble parsing that entire last sentence either!) One could imagine growing up in a society where “and” and “or” are treated the same way as “ever since” and “until”, and perhaps in this society Polish notation would seem much more natural!

Slightly related to sentential connectives are verbs, which connect subjects and objects. English places its verbs squarely between the subject and the object, as do Chinese, French, and Spanish. But in fact the most common ordering is subject-object-verb! 45% of languages, including Hindi, Japanese, Korean, Latin, and Ancient Greek, use this pattern. So for instance, instead of “She burned her hand”, one would say “She her hand burned”. This is potentially weirder to English-speakers than Polish notation; it’s reverse Polish notation!

9% of languages use Polish notation for verbs (the verb-subject-object pattern). These include Biblical Hebrew, Arabic, Irish, and Filipino. In such languages, it would be grammatical to say “Loves she him” but not “She loves him”. (3% of languages are VOS – loves him she – 1% are OVS – him loves she – and just a handful are OSV – him she loves).

Let’s return to English. Binary prepositions like “until” appear out front, but they also swap the order of the two things that they connect. For instance, “Until you do your homework, you cannot go outside” is the same as “You cannot go outside until you do your homework”, not “You do your homework until you cannot go outside”, which sounds a bit more sinister.

I came up with some examples of sentences with several layers of these binary prepositions, to see if the same type of confusion we get when examining Polish notation for “and” or “implies” sets in here, and oh boy does it.

Single connective
Since when the Americans dropped the bomb the war ended, some claimed it was justified.

Two connectives, unlayered
Since when the Americans dropped the bomb the war ended, when some claimed it was an atrocity others argued it was justified.

Still pretty readable, no? Now let’s layer the connectives.

One layer
Whenever he was late she would weep.
She would weep whenever he was late.

Two layers
Since whenever he was late she would weep, he hurried over.
He hurried over, since she would weep whenever he was late.

Three layers
Because since whenever he was late she would weep he hurried over, he left his wallet at home.
He left his wallet at home, because he hurried over since she would weep whenever he was late.

Four layers
Because because since whenever he was late she would weep he hurried over he left his wallet at home, when he was pulled over the officer didn’t give him a ticket.
The officer didn’t give him a ticket when he was pulled over, because he left his wallet at home because he hurried over since she would weep whenever he was late.

Five layers
When he heard because because since whenever he was late she would weep he hurried over he left his wallet at home, when he was pulled over the officer didn’t give the man a ticket, the mayor was outraged at the lawlessness.
The mayor was outraged at the lawlessness when he heard the officer didn’t give the man a ticket when he was pulled over because he left his wallet at home because he hurried over since she would weep whenever he was late.

Read that last one out loud to a friend and see if they believe you that it makes grammatical sense! With each new layer, things become more and more… Polish. That is, indecipherable. (Incidentally, Polish is SVO just like English.) Part of the problem is that when we have multiple layers like this, phrases that are semantically connected can become more and more distant in the sentence. It reminds me of my favorite garden-path sentence pattern:

The mouse the cat the dog chased ate was digested.
(The mouse that (the cat that the dog chased) ate) was digested.
The mouse (that the cat (that the dog chased) ate) was digested.

The phrases that are meant to be connected, like “the mouse” and “was digested” are sandwiched on either side of the sentence, and can be made arbitrarily distant by the addition of more “that the X verbed” clauses.

Does anybody know of any languages where “and” comes before the two conjuncts? What about “or”? English does this with “if”, so it might not be too much of a stretch.

Defining uncountability

Most of the shocking results in mathematical logic have to do with how limited we are in terms of producing good deductive systems for mathematical concepts. (Good here means sound, complete, finitary, and computable.) No logic with a good deductive system can categorically define the natural numbers. Adding a “for finitely many” quantifier to first-order logic makes it impossible to have a good deductive system. Ditto for a “for all subsets of” quantifier. And so on.

Once you’ve accustomed yourself to the extreme logical limitations imposed on us, it comes as a bit of a shock when you realize that we still have the ability to describe ANY complicated mathematical concept. What I learned recently was that you can extend first-order logic by adding a “for uncountably many” quantifier, and still have a good deductive system for the logic! It says something that I was so impressed by this fairly moderate ability to distinguish between the countable and the uncountable. But it certainly does feel puzzling that while you can distinguish between the countable and the uncountable, you can’t distinguish between the finite and the infinite, which seems like a much more basic and fundamental division!

What’s even cooler is that the deductive system for this extended logic is extremely simple. It amounts to taking the axioms and inference rules of first-order logic, and then tacking on four additional axiom schemas. We’ll write “for uncountably many x, Φ(x)” as Ux Φ(x).

  1. ∀y∀z¬Ux (x = y ∨ x = z)
  2. ∀x (Φ(x) → Ψ(x)) → (Ux Φ(x) → Ux Ψ(x))
  3. Ux Φ(x) ↔ Uy Φ(y), where y isn’t free in Φ(x)
  4. Uy∃x Φ(x,y) → (∃xUy Φ(x,y) ∨ Ux∃y Φ(x,y))

Each of these axiom schemas is conceptually quite easy to make sense of. The first says that for any two elements y and z, there’s not uncountably many things that are each equal to one or the other. In other words, uncountable is bigger than two.

The second says that if the set of things that satisfy Φ is a subset of the set of things that satisfy Ψ, and uncountably many things satisfy Φ, then uncountably many things satisfy Ψ. In other words, if a set has an uncountable subset, then it must be uncountable itself.

The third is obvious enough that it doesn’t need explanation. The fourth, on the other hand, does. I find it helpful to think of Φ(x,y) as saying “x points at y.” Then Uy∃x Φ(x,y) becomes “uncountably many things are pointed at”, ∃xUy Φ(x,y) becomes “some person points at uncountably many things”, and Ux∃y Φ(x,y) becomes “there are uncountably many people pointing.”

So the fourth axiom just says that if uncountably many things are pointed at, then either somebody is pointing at uncountably many things, or there are uncountably many people pointing at things. Otherwise you’d only have countably many people pointing at countably many things each, and that’s not enough to get uncountably many things pointed at! So this fourth axiom can really be understood as saying that the union of countably many countable sets is countable.
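A finite analogue of that counting fact is easy to check directly. This is purely an illustration of the arithmetic (the numbers are arbitrary), not part of the axiom system:

```python
import random

# Finite analogue of "the union of countably many countable sets is
# countable": the union of n sets, each of size at most m, has size at
# most n * m. So you can't get "more than n*m" things pointed at unless
# some pointer covers more than m things or there are more than n pointers.

n, m = 5, 10
sets = [set(random.sample(range(1000), m)) for _ in range(n)]
union = set().union(*sets)
assert len(union) <= n * m
```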

✯✯✯

There’s a caveat to the above claims. This is that first-order logic extended by the “for uncountably many” quantifier only has a good deductive system IF the language is restricted to only allow countably many constant symbols. Here’s the reasoning:

First of all, if a logic has a sound, complete and finitary deductive system, then it is compact. Discussed here.

Second, if a logic L is compact and a set of sentences Σ in L has a countably infinite model, then Σ also has an uncountable model. Why? Simply append to L an uncountable set A of constant symbols, and add to Σ an uncountable set of sentences declaring that these constants refer to pairwise distinct objects. Now we have an uncountable model of Σ’s extension by compactness (the countably infinite model is a model of every finite subset of Σ’s extension), and this uncountable model must also satisfy Σ. (This is the only place where we make use of uncountably many constant symbols.)

Third, if a logic L is compact, and a set of sentences Σ in L has models of arbitrarily large finite size, then Σ has infinite models. The proof of this is virtually identical to the last paragraph; add infinitely many constant symbols and declare them all to refer to distinct objects. Every finite subset of this extension of Σ has a model, so by compactness the extension of Σ also has a model. This model is infinite by construction, and as a model of an extension of Σ must also be a model of Σ.

That sets up all the background model theory we need. Now, suppose that FOL+U had a good deductive system that applied even with uncountably many constant symbols. Being sound, complete, and finitary, FOL+U would be compact. Consider now the FOL+U theory T consisting of the single axiom ¬Ux (x = x). T cannot have uncountable models, by the assumption that the quantifier Ux really captures the semantics of “for uncountably many x”.

But if T can’t have uncountable models, then it can’t have countably infinite models either, by the second result above! So T doesn’t have any infinite models. And then by the third result, T cannot have models of arbitrarily large finite size! This means that by asserting “there aren’t uncountably many objects”, we actually end up ruling out all countably infinite models, as well as all models larger than some finite size! This is inconsistent with the claim that the U quantifier correctly captures the semantics of “uncountably many”; after all, the negation of “for uncountably many” should be “for countably many”, not “for small enough finite”!

Summing up the argument:

  1. If a logic has a sound, complete, and finitary deductive system, then it is compact.
  2. If a logic L is compact, and a set of sentences Σ in L has a countably infinite model, then Σ has an uncountable model.
  3. If a logic L is compact, and a set of sentences Σ in L has models of arbitrary finite size, then Σ has infinite models.
  4. FOL+U = First order logic + “for uncountably many” is sound, complete, and finitary.
  5. The FOL+U theory T = {¬Ux (x = x)} doesn’t have uncountable models.
  6. By 1 and 4, FOL+U is compact.
  7. By 2 and 6, if a set of sentences Σ in FOL+U has a countably infinite model, then Σ has an uncountable model.
  8. Suppose T had countably infinite models. Then by 7, it would also have to have uncountable models, contradicting 5. So T doesn’t have countably infinite models.
  9. Suppose T had arbitrarily large finite models. Then by 3 and 6, T would have infinite models, contradicting 5 and 8. So T doesn’t have arbitrarily large finite models.
  10. Rephrasing 9, there’s some finite n such that T has no models of cardinality larger than n.

✯✯✯

What’s great about the logic FOL+U is that it allows you to talk about the property of uncountability, so long as your theories are all only countably large (which isn’t too bad a restriction). So for instance, consider the FOL+U theory consisting of the axioms of first-order Peano arithmetic plus one additional axiom: ¬Ux (x = x). This theory successfully rules out all uncountable models! This is something that you couldn’t do in ordinary first order logic, since it obeys the upward Löwenheim–Skolem property (a model of one infinite cardinality implies models of every larger infinite cardinality). The order type of all countable nonstandard models of PA is known (each is ω + ℤ⋅ℚ, a natural number line followed by countably many integer lines arranged like the rationals), and much of the difficulty of the model theory of PA involves the uncountable models. So this extension of PA is a big improvement over first-order PA!

Remember that ZFC has those pesky countable models, in which “the set of all subsets” turns out to be missing some subsets? We can get rid of all of those countable models with the FOL+U theory consisting of the axioms of ZFC + the additional axiom Ux (x = x). I’m pretty sure that this does not fix all the issues with correctly pinning down the notion of the power set — that would be too good to be true — but it is certainly an improvement!

Who would win in a fight, logic or computation?

Which paradigm is stronger? When I say logic here, I’ll be referring to standard formal theories like ZFC and Peano arithmetic. If one uses an expansive enough definition to include computability theory, then logic encompasses computation and wins by default.

If you read this post, you might immediately jump to the conclusion that logic is stronger. After all, we concluded that in first order Peano arithmetic one can unambiguously define not only the decidable sets, but the sets decidable with halting oracles, and those decidable with halting oracles for halting oracles, and so on forever. We saw that the Busy Beaver numbers, while undecidable, are definable by a Π1 sentence. And Peano arithmetic is a relatively weak theory; much more is definable in a theory like ZFC!

On the other hand, remember that there’s a difference between what’s absolutely definable and what’s definable-relative-to-ℕ. A set of numbers is absolutely definable if there’s a sentence that is true of just those numbers in every model of PA. It’s definable-relative-to-ℕ if there’s a sentence that’s true of just those numbers in the standard model of PA (the natural numbers, ℕ). Definable-relative-to-ℕ is an unfair standard for the strength of PA, given that PA, and in general no first order system, is able to categorically define the natural numbers. On the other hand, absolute definability is the standard that meets the criterion of “unambiguous definition”. With an absolute definition, there’s no room for confusion about which mathematical structure is being discussed.

All those highly undecidable sets we discussed earlier were only definable-relative-to-ℕ. In terms of what PA is able to define absolutely, it’s limited to only the finite sets. Compare this to what sets of numbers can be decided by a Turing machine. Every finite set is decidable, plus many infinite sets, including the natural numbers!

This is the first example of computers winning out over formal logic in their expressive power. While no first order theory (and in general no theory in a logic with a sound and complete proof system) can categorically define the natural numbers, it’s incredibly simple to write a program that defines that set using the language of PA. In regex, it’s just “S*0”. And here’s a Python script that does the job:

def check(s):
    # Accept exactly the PA numerals: zero or more S's followed by a 0.
    if s == '':
        return False
    elif s == '0':
        return True
    elif s[0] == 'S':
        return check(s[1:])
    else:
        return False

We don’t even need a Turing machine for this; a simple finite state machine suffices.
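Here’s a sketch of such a machine in Python: a deterministic finite automaton with three states (the state names are my own) accepting exactly the strings matching S*0.

```python
# DFA for the language S*0: stay in 'start' while reading S's, move to
# 'accept' on a 0, and fall into a 'dead' sink state on anything else
# (including any input after the 0).

TRANSITIONS = {
    ('start', 'S'): 'start',
    ('start', '0'): 'accept',
}

def dfa_accepts(s):
    state = 'start'
    for ch in s:
        state = TRANSITIONS.get((state, ch), 'dead')
    return state == 'accept'
```

Unlike the recursive `check`, this runs in constant memory: two transitions and a sink state are all it takes to carve out the set of natural-number numerals.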

In general, the sets decidable by computer can be much more complicated and interesting than those absolutely definable by a first-order theory. While first-order theories like ZFC have “surrogates” for infinite sets of numbers (ω and its definable subsets), these surrogate sets include nonstandard elements in some models. As a result, ZFC may fail to prove some statements about these sets which hold true of ℕ but fail for nonstandard models. For instance, it may be that the Collatz conjecture is true of the natural numbers but can’t be proven from ZFC. This would be because ZFC’s surrogate for the set of natural numbers (i.e. ω) includes nonstandard elements in some models, and the Collatz conjecture may fail for some such nonstandard elements.

On top of that, by taking just the first Turing jump into uncomputability, computation with a halting oracle, we are able to prove the consistency of ZFC and PA. Since ZFC is recursively axiomatizable, there’s a program that runs forever listing out theorems, such that every theorem is output at some finite point. We can use this to produce a program that looks to see if “0 = 1” is a theorem of ZFC. If ZFC is consistent, then the program will search forever and never find this theorem. But if ZFC is not consistent, then the program will eventually find the theorem “0 = 1” and will terminate at that point. Now we just apply our halting oracle to this program! If ZFC is consistent, then the halting oracle tells us that the program doesn’t halt, in which case we return “True”. If ZFC is not consistent, then the halting oracle tells us that the program does halt, in which case we return “False”. And just like that, we’ve decided the truth value of Con(ZFC)!
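As a sketch, here’s the shape of that argument in Python. Both `theorems` (an enumeration of the theory’s theorems) and `halts` (the halting oracle) are hypothetical stand-ins; no such oracle is computable:

```python
# Deciding Con(T) with a halting oracle, for any recursively
# axiomatizable theory T. `theorems` is an (infinite) generator of T's
# theorems; `halts` answers whether a given search would run forever.

def contradiction_search(theorems):
    # This search halts if and only if T proves "0 = 1".
    for t in theorems:
        if t == "0 = 1":
            return

def consistent(theorems, halts):
    # Con(T) is true exactly when the contradiction search never halts.
    return not halts(contradiction_search, theorems)
```

The point is that `consistent` makes a single oracle call: all the unbounded searching is pushed into the program handed to the oracle.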

The same type of argument applies to Con(PA), and Con(T) for any T that is recursively axiomatizable. If you’re not a big fan of ZFC but have some other favored system for mathematics, then so long as this system’s axioms can be recognized by a computer, its consistency can be determined by a halting oracle.

Some summary remarks.

On the one hand, there’s a very simple explanation for why logic appears over and over again to be weaker than computation: we only choose to study formal theories that are recursively axiomatizable and have computable inference rules. Of course we can’t do better than a computer using a logical theory that can be built into a computer! If on the other hand, we chose true arithmetic, the set of all first-order sentences of PA that are true of ℕ, to be our de jure theory of mathematics, then the theorems of mathematics could not be recursively enumerated by any Turing machine, or indeed any oracle machine below 0^(ω) (the ω-th Turing jump). So perhaps there’s nothing too mysterious here after all.

On the other hand, there is something very philosophically interesting about truths of mathematics that cannot be proven just using mathematics, but could be proven if it happened that our universe gave us access to an oracle for the halting problem. If a priori reasoning is thought to be captured by a formal system like ZFC, then it’s remarkable that there are facts about the a priori (like Con(ZFC)) that cannot possibly be established by a priori reasoning. No consistent system can prove its own consistency, and going outside to a stronger system whose consistency is more in doubt doesn’t help at all. The only possible way to learn the truth value of such statements is contingent; we can learn it if the universe contains some physical manifestation of a halting oracle!

Computing truth values of sentences of arithmetic, or: Math is hard

Previously I talked about the arithmetic hierarchy for sets, and how it relates to the decidability of sets. There’s also a parallel notion of the arithmetic hierarchy for sentences of Peano arithmetic, and it relates to the difficulty of deciding the truth value of those sentences.

Truth value here and everywhere else in this post refers to truth value in the standard model of arithmetic. Truth value in the sense of “being true in all models of PA” is a much simpler matter; PA is recursively axiomatizable and first order logic is sound and complete, so any sentence that’s true in all models of PA can be eventually proven by a program that enumerates all the theorems of PA. So if a sentence is true in all models of PA, then there’s an algorithm that will tell you that in a finite amount of time (though it will run forever on an input that’s false in some models).

Not so for truth in the standard model! As we’ll see, whether a sentence evaluates to true in the standard model of arithmetic turns out to be much more difficult to determine in general. Only for the simplest sentences can you decide their truth value using an ordinary Turing machine. And the set of all true sentences is in some sense infinitely uncomputable (you’ll see in a bit in what sense exactly this is).

What we’ll discuss is a way to convert sentences of Peano arithmetic to computer programs. Before diving into that, though, one note of caution is necessary: the arithmetic hierarchy for sentences is sometimes talked about purely syntactically (just by looking at the sentence as a string of symbols) and other times is talked about semantically (by looking at logically equivalent sentences). Here I will be primarily interested in the entirely-syntactic version of the arithmetic hierarchy. If you’ve only been introduced to the semantic version of the hierarchy, what you see here might differ a bit from what you recognize.

Let’s begin!

The simplest types of sentences have no quantifiers at all. For instance…

0 = 0
2 ⋅ 2 < 7
(2 + 2 = 4) → (2 ⋅ 2 = 4)

Each of these sentences can be translated into a program quite easily, since +, ⋅, =, and < are computable. We can translate the → in the third sentence by converting it into a conjunction:

## (2 + 2 = 4) → (2 ⋅ 2 = 4)
not(2 + 2 == 4 and not 2 * 2 == 4)

Slightly less simple-looking are sentences with bounded quantifiers:

∀x < 10 (x + 0 = x)
∃x < 100 (x + x = x)
∀x < 5 ∃y < 7 (x > 1 → x⋅y = 12)
∃x < 5 ∀y < x ∀z < y (y⋅z ≠ x)

In each of these examples, the bounded quantifier could in principle be expanded out, leaving us with a finite quantifier-free sentence. This should suggest to us that adding bounded quantifiers doesn’t actually increase the computational difficulty.

We can translate sentences with bounded quantifiers into programs by converting each bounded quantifier to a for loop. The translation works slightly differently depending on whether the quantifier is universal or existential:

def Aupto(n, phi):
    for x in range(n):
        if not phi(x):
            return False
    return True
def Elessthan(n, phi):
    for x in range(n):
        if phi(x):
            return True
    return False

Note that the second input needs to be a function, reflecting the fact that it stands for a formula with a free variable. Now we can quite easily translate each of the examples, using lambda notation to conveniently define the necessary functions:

## ∀x<10 (x + 0 = x)
Aupto(10, lambda x: x + 0 == x)

## ∃x<100 (x + x = x)
Elessthan(100, lambda x: x + x == x)

## ∀x<5 ∃y<7 ((x > 1) → (x*y = 12))
Aupto(5, lambda x: Elessthan(7, lambda y: not (x > 1 and x * y != 12)))

## ∃x<5 ∀y<x ∀z<y (y⋅z ≠ x)
Elessthan(5, lambda x: Aupto(x, lambda y: Aupto(y, lambda z: y * z != x)))

Each of these programs, when run, determines whether or not the sentence is true. Hopefully it’s clear how we can translate any sentence with bounded quantifiers into a program of this form. And when we run the program, it will determine the truth value of the sentence in a finite amount of time.
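For concreteness, here’s a self-contained run of all four translations (repeating the definitions of Aupto and Elessthan so the snippet stands alone). Each of these four sentences happens to be true, so each call returns True:

```python
def Aupto(n, phi):
    # bounded universal quantifier: ∀x<n φ(x)
    for x in range(n):
        if not phi(x):
            return False
    return True

def Elessthan(n, phi):
    # bounded existential quantifier: ∃x<n φ(x)
    for x in range(n):
        if phi(x):
            return True
    return False

print(Aupto(10, lambda x: x + 0 == x))       # ∀x<10 (x + 0 = x): True
print(Elessthan(100, lambda x: x + x == x))  # ∃x<100 (x + x = x): True (x = 0)
print(Aupto(5, lambda x: Elessthan(7, lambda y: not (x > 1 and x * y != 12))))  # True
print(Elessthan(5, lambda x: Aupto(x, lambda y: Aupto(y, lambda z: y * z != x))))  # True (x = 0, vacuously)
```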

So far, we’ve only talked about the simplest kinds of sentences, with no unbounded quantifiers. There are two names that both refer to this class: Π0 and Σ0. So now you know how to write a program that determines the truth value of any Σ0 (equivalently, Π0) sentence!

We now move up a level in the hierarchy, by adding unbounded quantifiers. These quantifiers must all appear out front and be the same type of quantifier (all universal or all existential).

Σ1 sentences: ∃x1 ∃x2 … ∃xk Φ(x1, x2, …, xk), where Φ is Π0.
Π1 sentences: ∀x1 ∀x2 … ∀xk Φ(x1, x2, …, xk), where Φ is Σ0.

Some examples of Σ1 sentences:

∃x ∃y (x⋅x = y)
∃x (x⋅x = 5)
∃x ∀y < x (x+y > x⋅y)

And some Π1 sentences:

∀x (x + 0 = x)
∀x ∀y (x + y < 10)
∀x ∃y < 10 (y⋅y + y = x)

We can translate unbounded quantifiers as while loops:

def A(phi):
    x = 0
    while True:
        if not phi(x):
            return False
        x += 1

def E(phi):
    x = 0
    while True:
        if phi(x):
            return True
        x += 1

There’s a radical change here from the bounded case, which is that these functions are no longer guaranteed to terminate. A(Φ) never returns True, and E(Φ) never returns False. This reflects the nature of unbounded quantifiers. An unbounded universal quantifier is claiming something to be true of all numbers, and thus there are infinitely many cases to be checked. Of course, the moment you find a case that fails, you can return False. But if the universally quantified statement is true of all numbers, then the function will have to keep searching through the numbers forever, hoping to find a counterexample. With an unbounded existential quantifier, all one needs to do is find a single example where the statement is true and then return True. But if there is no such example (i.e. if the statement is always false), then the program will have to search forever.

I encourage you to think about these functions for a few minutes until you’re satisfied that not only do they capture the unbounded universal and existential quantifiers, but that there’s no better way to define them.
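One subtlety worth flagging (my observation, not part of the original argument): when E is nested directly inside E, the inner search can run forever on some fixed value of the outer variable even when a witness pair exists, so the naive nesting isn’t always a correct semi-decision procedure for ∃x ∃y φ(x, y). The standard fix is to dovetail, searching over all pairs in a single loop:

```python
def E2(phi):
    # Semi-decide ∃x ∃y φ(x, y) by enumerating the pairs (x, y) with
    # x + y = n for n = 0, 1, 2, …, so every pair is eventually checked.
    n = 0
    while True:
        for x in range(n + 1):
            if phi(x, n - x):
                return True
        n += 1
```

For instance, E2(lambda x, y: x == 1 and y == 2) returns True, whereas the naive E(lambda x: E(lambda y: x == 1 and y == 2)) would loop forever inside the x = 0 case. The examples below are chosen so that the naive nesting happens to work.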

Now we can quite easily translate our example sentences as programs:

## ∃x ∃y (x⋅x = y)
E(lambda x: E(lambda y: x * x == y))

## ∃x (x⋅x = 5)
E(lambda x: x * x == 5)

## ∃x ∀y < x (x+y > x⋅y)
E(lambda x: Aupto(x, lambda y: x + y > x * y))

## ∀x (x + 0 = x)
A(lambda x: x + 0 == x)

## ∀x ∀y (x + y < 10)
A(lambda x: A(lambda y: x + y < 10))

## ∀x ∃y < 10 (y⋅y + y = x)
A(lambda x: Elessthan(10, lambda y: y * y + y == x))

The first is a true Σ1 sentence, so it terminates and returns True. The second is a false Σ1 sentence, so it runs forever. See if you can figure out if the third ever halts, and then run the program for yourself to see!

The fourth is a true Π1 sentence, which means that it will never halt (it will keep looking for a counterexample and failing to find one forever). The fifth is a false Π1 sentence, so it does halt, at the first moment it finds values of x and y whose sum is not less than 10 (namely x = 0, y = 10). And figure out the sixth for yourself!
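To make the fifth case concrete, here’s a self-contained run (repeating the definition of A). The nested search halts because the inner A finds the counterexample y = 10 while the outer loop is still at x = 0:

```python
def A(phi):
    # unbounded universal quantifier: halts only on finding a counterexample
    x = 0
    while True:
        if not phi(x):
            return False
        x += 1

# ∀x ∀y (x + y < 10) is false, so the program halts and returns False
print(A(lambda x: A(lambda y: x + y < 10)))  # False
```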

The next level of the hierarchy involves alternating quantifiers.

Σ2 sentences: ∃x1 ∃x2 … ∃xk Φ(x1, x2, …, xk), where Φ is Π1.
Π2 sentences: ∀x1 ∀x2 … ∀xk Φ(x1, x2, …, xk), where Φ is Σ1.

So now we’re allowed sentences with a block of one type of unbounded quantifier followed by a block of the other type of unbounded quantifier, and ending with a Σ0 sentence. You might guess that the Python functions we’ve defined already are strong enough to handle this case (and indeed, all higher levels of the hierarchy), and you’re right. At least, partially. Try running some examples of Σ2 or Π2 sentences and see what happens. For example:

## ∀x ∃y (x > y)
A(lambda x: E(lambda y: x > y))

It runs forever! If we were to look into the structure of this program, we’d see that A(Φ) only halts if it finds a counterexample to Φ, and E(Φ) only halts if it finds an example of Φ. In other words A(E(Φ)) only halts if A finds out that E(Φ) is false; but E(Φ) never halts if it’s false! The two programs’ goals are diametrically opposed, and as such, brought together like this they never halt on any input.

The same goes for a sentence like ∃x ∀y (x > y): for this program to halt, it would require that ∀y (x > y) is found to be true for some value of x. But ∀y (x > y) will never be found true, because universally quantified sentences can only be found false! This has nothing to do with the (x > y) being quantified over; it’s entirely about the structure of the quantifiers.

No Turing machine can decide the truth values of Σ2 and Π2 sentences. There’s a caveat here, related to the semantic version of the arithmetic hierarchy. It’s often possible to take a Π2 sentence like ∀x ∃y (y + y = x) and convert it to a logically equivalent Π1 sentence like ∀x ∃y ≤ x (y + y = x). This translation works because y + y = x can only be true if y is at most x. Now we have a false Π1 sentence rather than a false Π2 sentence, and as such we can find a counterexample and halt.
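Assuming the A and Elessthan definitions from earlier (repeated here so the snippet stands alone), the bounded version really does halt: the inner search now terminates for every x, so the outer loop reaches the counterexample x = 1. I’ve encoded the bound y ≤ x as Elessthan(x + 1, …), which suffices since y + y = x forces y to be at most x:

```python
def A(phi):
    # unbounded universal quantifier: halts only on finding a counterexample
    x = 0
    while True:
        if not phi(x):
            return False
        x += 1

def Elessthan(n, phi):
    # bounded existential quantifier: ∃x<n φ(x)
    for x in range(n):
        if phi(x):
            return True
    return False

# ∀x ∃y≤x (y + y = x): false, and now the counterexample search halts at x = 1
print(A(lambda x: Elessthan(x + 1, lambda y: y + y == x)))  # False
```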

We can talk about a sentence’s essential level on the arithmetic hierarchy, which is the lowest level occupied by any logically equivalent sentence. It’s important to note here that “logically equivalent sentence” is a cross-model notion: A and B are logically equivalent if and only if they have the same truth values in every model of PA, not just the standard model. The soundness and completeness of first order logic, and the recursive nature of the axioms of PA, tell us that the set of sentences that are logically equivalent to a given sentence of PA is recursively enumerable. So we can generate these sentences by searching for PA proofs of equivalence and keeping track of the lowest level of the arithmetic hierarchy attained so far.

Even when we do this, we will still find sentences that have no logical equivalents below Σ2 or Π2. These sentences are essentially uncomputable; not just uncomputable in virtue of their form, but truly uncomputable in all of their logical equivalents. However, while their truth values are uncomputable for ordinary Turing machines, they would become computable if we had a stronger Turing machine. Let’s take another look at the last example:

## ∀x ∃y (x > y)
A(lambda x: E(lambda y: x > y))

Recall that the problem was that A(E(Φ)) only halts if E(Φ) returns False, and E(Φ) can only return True. But if we had a TM equipped with an oracle for the truth values of Σ1 sentences like E(Φ), then maybe we could evaluate A(E(Φ))!

Let’s think about that for a minute more. What would an oracle for the truth value of Σ1 sentences be like? One thing that would work is if we could run E(Φ) “to infinity” and see if it ever finds an example, and if not, then return False. So perhaps an infinite-time Turing machine would do the trick. Another way would be if we could simply ask whether E(Φ) ever halts! If it does, then ∃y (x > y) must be true, and if not, then it must be false.

So a halting oracle suffices to decide the truth values of Σ1 sentences! Same for Π1 sentences: we just ask if A(Φ) ever halts and return False if so, and True otherwise.
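Here’s a sketch of that reduction in Python. A genuine halting oracle can’t actually be implemented, so `halts_toy` and its `fuel` cutoff are stand-ins of my own invention; the point is only the shape of the reduction: one oracle query settles each sentence.

```python
def halts_toy(search, fuel=10_000):
    # Toy stand-in for a genuine halting oracle: run the search for up to
    # `fuel` steps and guess "never halts" if it hasn't halted by then.
    return search(fuel) is not None

def decide_sigma1(phi, halts=halts_toy):
    # Decide the Σ1 sentence ∃y φ(y) with one oracle query:
    # the witness search halts iff the sentence is true.
    def search(fuel):
        for y in range(fuel):
            if phi(y):
                return True
        return None  # out of fuel; only the toy oracle ever reaches this
    return halts(search)

def decide_pi1(phi, halts=halts_toy):
    # Decide the Π1 sentence ∀y φ(y): the counterexample search
    # halts iff the sentence is false.
    def search(fuel):
        for y in range(fuel):
            if not phi(y):
                return False
        return None
    return not halts(search)
```

For example, decide_sigma1(lambda y: y * y == 25) returns True, and decide_pi1(lambda y: y + 0 == y) returns True (the toy oracle correctly guesses that the counterexample search never halts).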

If we run the above program on a Turing machine equipped with a halting oracle, what will we get? Now we can evaluate the inner existential quantifier for any given value of x. So in particular, for x = 0, we will find that ∃y (x > y) is false. We’ve found a counterexample, so our program will terminate and return False.

On the other hand, if our sentence was true, then we would be faced with the familiar feature of universal quantifiers: we’d run forever looking for a counterexample and never find one. So to determine that this sentence is true, we’d need an oracle for the halting problem for this new more powerful Turing machine!

Here’s a summary of what we have so far:

TM = Ordinary Turing Machine
TM2 = TM + oracle for TM
TM3 = TM + oracle for TM2

The table shows what type of machine suffices to decide the truth value of a sentence, depending on where on the arithmetic hierarchy the sentence falls and whether the sentence is true or false.

We’re now ready to generalize. In general, Σn sentences start with a block of existential quantifiers, and then alternate between blocks of existential and universal quantifiers n – 1 times before ending in a Σ0 sentence. Πn sentences start with a block of universal quantifiers, alternate quantifiers n – 1 times, and then end in a Σ0 sentence. And as you move up the arithmetic hierarchy, it requires more and more powerful halting oracles to decide whether sentences are true:

(TM = ordinary Turing machine, TMn+1 = TM + oracle for TMn)

If we define Σω to be the union of all the Σ classes in the hierarchy, and Πω the union of the Π classes, then deciding the truth value of Σω ⋃ Πω (the set of all arithmetic sentences) would require a TMω – a Turing machine with an oracle for TM, TM2, TM3, and so on. Thus the theory of true arithmetic (the set of all first-order sentences that are true of ℕ), is not only undecidable, it’s undecidable with a TM2, TM3, and TMn for every n ∈ ℕ. At every level of the arithmetic hierarchy, we get new sentences that are essentially on that level (not just sentences that are superficially on that level in light of their syntactic form, but sentences which, in their simplest possible logically equivalent form, lie on that level).

This gives some sense of just how hard math is. Just understanding the first-order truths of arithmetic requires an infinity of halting oracles, each more powerful than the last. And that says nothing about the second-order truths of arithmetic! That would require even stronger Turing machines than TMω – Turing machines that have halting oracles for TMω, and then TMs with oracles for that, and so on to unimaginable heights (just how high we must go is not currently known).

How uncomputable are the Busy Beaver numbers?

The Busy Beaver numbers are an undecidable set; if they were decidable, we could figure out BB(n) for each n, enabling us to decide the halting problem. They are also not recursively enumerable, but for a trickier reason. Recursive enumerability on its own would not let us figure out BB(n) (it just gives us the ability to list the BB numbers in some order, not necessarily in increasing order). But since the sequence is strictly increasing, recursive enumerability would let us put an upper bound on BB(n), which is just as good for the purposes of solving the halting problem. Simply enumerate the BB numbers in whatever order your algorithm allows, and once you’ve listed n of them, you know that the largest of those is at least as big as BB(n) (after all, it’s a BB number that’s larger than n – 1 other BB numbers).

So the Busy Beaver numbers are also not recursively enumerable. But curiously, they’re Turing equivalent to the halting problem, and the halting problem is recursively enumerable. So what gives?

The answer is that the Busy Beaver numbers are co-recursively enumerable. This means that there is an algorithm that takes in a number N and returns False if N is not a Busy Beaver number, and runs forever otherwise. Here’s how that algorithm works:

First, we use the fact that BB(k) is always greater than k. This means that if N is a Busy Beaver number, say N = BB(k), then k < N, so N is the running time of a Turing machine with at most N states. There are finitely many such Turing machines, so we just run them all in parallel. We wait for N steps, and then see if any machines halt at exactly N steps.

If no machines halt at N steps, then we return False. If some set of machines {M1, M2, …, Mk} halt at N steps, then we continue waiting. If N is not a Busy Beaver number, then for each of these machines there must be another machine of the same size that halts later: for each Mi there will be a machine Mi′ with the same number of states as Mi that halts after some number of steps Ni′ > N. Once this happens, we rule out Mi as a candidate for the Busy Beaver champion. Eventually, every single candidate machine is ruled out, and we can return False.

On the other hand, if N is a Busy Beaver number, then there will be some candidate machine M such that no matter how long we wait, we never find another machine with the same number of states that halts after it. In this case, we’ll keep waiting forever and never get to return False.
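The ruling-out logic above can be sketched in miniature. This is only an illustration: `machines` is a hypothetical finite list of (num_states, halt_time) pairs standing in for an enumeration of all machines with at most N states, and `max_wait` fakes “running forever”; a real implementation would simulate the machines in parallel rather than read off their halting times.

```python
from itertools import count

def rule_out(N, machines, max_wait=10_000):
    # Toy sketch of the co-semi-decision procedure for "N is a BB number".
    # machines: (num_states, halt_time) pairs, halt_time None for non-halters.
    candidates = {s for (s, t) in machines if t == N}
    if not candidates:
        return False  # no machine of at most N states halts at exactly N steps
    for step in count(N + 1):  # simulated passage of time
        if step > max_wait:
            return None  # stand-in for "still waiting" (a real run never halts)
        # rule out a candidate size once a same-size machine halts later
        candidates -= {s for (s, t) in machines if t == step}
        if not candidates:
            return False
```

With the toy universe [(1, 1), (2, 4), (2, 6), (3, 21)], rule_out(4, …) returns False (the 2-state machine halting at step 6 rules out the step-4 candidate), while rule_out(6, …) waits forever, since 6 really is the longest 2-state running time in this toy universe.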

It’s pretty cool to think that for any number that isn’t a Busy Beaver number, we can in principle eventually rule it out. If a civilization ran this algorithm to find the first N busy beaver numbers, they would over time rule out more and more candidates, and after a finite amount of time, they would have the complete list of the first N numbers. Of course, the nature of co-recursive enumerability is that they would never know for sure if they had reached that point; they would forever be waiting to see if one of the numbers on their list would be invalidated and replaced by a much larger number. But in the limit of infinite time, this process converges perfectly to truth.

✯✯✯

Define H to be the set of numbers that encode Turing machines that halt on the empty input, and B to be the set of Busy Beaver numbers. H and B are Turing equivalent. The proof of this is two-part:

1. H is decidable with an oracle for B

We are given as input a Turing machine M (encoded as some natural number) and asked whether it will halt. We use our oracle for B to find the value of BB(n), where n is the number of states that M has, and run M for BB(n) steps. If M doesn’t halt at this point, then we know that it will never halt and we return False. And if it has already halted, we return True.

2. B is decidable with an oracle for H

We are given as input a number N and asked whether it’s a Busy Beaver number. We collect all Turing machines with at most N states, and apply H to determine which of these halt. We throw out all the non-halting Turing machines and run the rest until each one has halted, noting the number of steps each runs for and how many states it has. At the end we check whether some machine ran for exactly N steps and no other machine with the same number of states halted later. If so, then we return True, and otherwise we return False.

So H and B have the same Turing degree. And yet B is co-recursively enumerable and H is recursively enumerable (given a Turing machine M, just run it and return True if it ever halts). This is actually not so strange; the difference between recursive enumerability and co-recursive enumerability is not a difference in “difficulty to decide”, it’s just a difference between whether what’s decided is the True instances or the False instances.

As a very simple example of the same phenomenon, consider the set of all halting Turing machines H and the set of all non-halting Turing machines Hc. H is recursively enumerable and Hc is co-recursively enumerable. And obviously, given an oracle for either, one can decide the other. More generally, for any set X, consider Xc = {n : n ∉ X}. X and Xc are Turing equivalent, and if X is recursively enumerable then Xc is co-recursively enumerable. What’s more, if X is Σn then Xc is Πn.

The Arithmetic Hierarchy and Computability

In this post you’ll learn about a deep connection between sentences of first order arithmetic and degrees of uncomputability. You’ll learn how to look at a logical sentence and determine the degree of uncomputability of the set of numbers that it defines. And you’ll learn much of the content of and intuitive justification for Post’s theorem, which is perhaps the most important result in computability theory.

The arithmetic hierarchy is a classification system for sets of natural numbers. It relates sentences of Peano arithmetic to degrees of uncomputability. There’s a level of the hierarchy for each natural number, and on each level there are three classes: Σn, Πn, and Δn. Where on the hierarchy a set of naturals lies indicates the complexity of the formulas that define it. Here’s how that works:

Σ0 = Π0 = Δ0

On the zeroth level are all the sets that are definable by a sentence in the language of first-order Peano arithmetic that has no unbounded quantifiers. Examples:

x = 0
(x = 0) ∨ (x = 3) ∨ (x = 200)
∃y < 200 (x = y)
(1 < x) ∧ ∀y < x ∀z < x (y⋅z = x → y = 1 ∨ z = 1)
∃y < x (x = y + y)

The first two examples have no quantifiers at all. The first, x = 0, is true of exactly one natural number (namely, 0), so the set that it defines is just {0}. The second is true of three natural numbers (0, 3, and 200), so the set it defines is {0, 3, 200}. Thus both {0} and {0, 3, 200} are on the zeroth level of the hierarchy. It should be clear that just by stringing together a long enough disjunction of sentences, we can define any finite set of natural numbers. So every finite set of natural numbers is on the zeroth level of the hierarchy.

The third example has a quantifier in it, but it’s a bounded quantifier. In that sentence, y only ranges over 200 different natural numbers. In some sense, this sentence could be expanded out and written as a long disjunction of the form (x = 0) ∨ (x = 1) ∨ … ∨ (x = 199). It should be easy to see that the set this defines is {0, 1, 2, … 199}.

The fourth example is a little more interesting, for two reasons. First of all, the quantifiers are now bound by variables, not numbers. This is okay, because no matter what value of x we’re looking at, we’re only checking finitely many values of y, and for each value of y, we only check finitely many values of z. Secondly, this formula can be translated as “x is a prime number.” This tells us that the set of primes {2, 3, 5, 7, 11, …} is on the zeroth level of the hierarchy, our first infinite set!

The fifth example also gives us an infinite set; work out which set this is for yourself before moving on!

Notice that each of the sets we’ve discussed so far can be decided by a Turing machine. For each set on the zeroth level, the defining sentence can be written without any unbounded quantifiers, involving just the basic operations and relations of Peano arithmetic (+, ⋅, and <), each of which is computable. So for instance, we can write an algorithm that decides prime membership for x by looking at the sentence defining the set of primes:

(1 < x) ∧ ∀y < x ∀z < x (y⋅z = x → y = 1 ∨ z = 1)

if not (1 < x):
    return False
for y in range(x):
    for z in range(x):
        if (y * z == x) and (y != 1) and (z != 1):
            return False
return True

Another example:

(x = 0) ∨ (x = 3) ∨ (x = 200)

if (x == 0) or (x == 3) or (x == 200):
    return True
else:
    return False

Every sentence with only bounded quantifiers can be converted in a similar way to an algorithm that decides membership in the set it defines. We just translate bounded quantifiers as for loops! Thus every set on the zeroth level is decidable. It turns out that every decidable set is also on the zeroth level. We can run the process backwards and, given an algorithm deciding membership in the set, convert it into a sentence of first-order Peano arithmetic with only bounded quantifiers.

On the zeroth level, the three classes Σ0, Π0, and Δ0 are all equal. So every decidable set can be said to be in each of Σ0, Π0, and Δ0. The reason for the three different names will become clear when we move on to the next level of the hierarchy.

Σ1, Π1, Δ1

Entering into the first level, we’re now allowed to use unbounded quantifiers in our definitions of the set. But not just any pattern of unbounded quantifiers, they must all be the same type of quantifier, and must all appear as a block at the beginning of the sentence. For Σ1 sets, the quantifiers up front must all be existential. For example:

∃y (y² + y + 1 = x)
∃y ∃z (y is prime ∧ z is prime ∧ x = y + z ∧ x is even)

The first defines the infinite set {1, 3, 7, 13, 21, 31, …}, and the second defines the set of all even numbers that can be written as the sum of two primes (whether this includes every even number is unknown!). The shorthand I’ve used in the second sentence is permitted, because we know from the previous section that the sentences “x is prime” and “x is even” can be written without unbounded quantifiers.

It so happens that each of these sets can also be defined by formulas on the zeroth level. One easy way to see this is that we can just bound each quantifier by x, and the resulting formulas define the same sets. But are there any Σ1 sets that aren’t also Σ0?
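For instance, once the quantifiers in the second sentence are bounded, it can be checked with ordinary for loops. Here is_prime is a direct bounded-quantifier test (not an efficient one), and the names are mine:

```python
def is_prime(n):
    # bounded check: n > 1 and no divisor d with 1 < d < n
    return n > 1 and all(n % d != 0 for d in range(2, n))

def goldbach_style(x):
    # ∃y<x ∃z<x (y prime ∧ z prime ∧ x = y + z ∧ x even), now fully bounded
    return x % 2 == 0 and any(
        is_prime(y) and is_prime(x - y) for y in range(x))
```

goldbach_style(10) returns True (10 = 3 + 7), while goldbach_style(11) and goldbach_style(2) return False.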

Yes, there are! Every decidable set is in Σ0, so these examples won’t be as simple. To write them out, I’ll have to use some shorthand that I haven’t really justified:

∃y (the Turing machine encoded by x halts in y steps)
∃y (y Gödel-codes a proof from PA of the sentence Gödel-coded by x)

Both of these rely on the notion of numbers encoding other objects. In the first we encode Turing machines as numbers, and in the second we encode sentences of Peano arithmetic as numbers using Gödel numbering. In general, any finite object whatsoever can be encoded as a natural number, and properties of that object can be discussed as properties of numbers.

Let’s take a close look at the first sentence. The set defined by this sentence is the set of numbers that encode Turing machines that halt. (For readability, in the future I’ll just say “the set of Turing machines such that …” rather than “the set of natural numbers that encode Turing machines with the property …”.) This set is undecidable, by the standard proof of the undecidability of the halting problem. This tells us that the set cannot be defined without any quantifiers!

The second sentence defines the set of natural numbers that encode theorems of PA. Why is this set not decidable? Suppose that it were. Then we could ask “is 0=1 a theorem of PA?” and, assuming the consistency of PA, after a finite amount of time get back the answer “No.” This would allow us to prove the consistency of PA, and crucially, the Turing machine that we use to decide this set could be coded as a number and used by PA to prove the same thing! Then PA would prove its own consistency, making it inconsistent and contradicting our assumption. On the other hand, if PA is inconsistent, then this set actually is decidable, because then everything is provable from the axioms of PA, so the set of theorems is just the (trivially decidable) set of all sentences! So this is only a good example of “Σ1 but not Σ0” if you have faith in the consistency of PA.

The Σ0 sets were the decidable sets, so what are the Σ1 sets? They are the recursively enumerable sets (also called semi-decidable). These are the sets for which an algorithm exists that takes in an input N and returns True if N is in the set, and runs forever otherwise. And just like before, we can use the sentences of PA to construct such an algorithm! Let’s look at the first example:

∃y (y² + y + 1 = x)

y = 0
while True:
    if y**2 + y + 1 == x:
        return True
    y += 1

For each unbounded quantifier, we introduce a variable and a while loop that each step increases its value by 1. This program runs until it finds a “yes” answer, and if no such answer exists, it runs forever.

Another example:

∃y (the Turing machine encoded by x halts in y steps)

y = 0
while True:
    if TM(x) halts in y steps:
        return True
    y += 1

I made up some notation here on the fly; TM(x) is a function that when fed a natural number returns the Turing machine it encodes. So what this program does is run TM(x) for increasing lengths of time, returning True if TM(x) ever halts. If TM(x) never halts, then the program runs forever. So this program semi-decides the halting problem.

Moving on to Π1 sets! Σ1 sets were those defined by sentences that started with a block of existential quantifiers, and Π1 sets are defined by sentences that start with a block of unbounded universal quantifiers. For example:

∀y ∀z (x = y⋅z → (y = 1 ∨ z = 1))
∀y > x ∃z < y (x⋅z = y)

The first example is hopefully clear enough. It defines the set of prime numbers in a similar way to before. The second is meant to be slightly deceiving. That first quantifier might look bounded, until you notice that the variable it supposedly binds ranges over infinitely many values. On the other hand, the second quantifier is genuinely bounded, as for any given y it only ranges over finitely many values. See if you can identify the set it defines.

Both of these sets are in fact also in Σ0. Like with Σ1 sets, it’s difficult to find simple examples of Π1 sets that aren’t also Σ0. Here’s a cheap example:

∀y (the Turing machine encoded by x doesn’t halt in y steps)

This defines the set of non-halting Turing machines. If we were to translate this into a program, it would look something like:

y = 0
while True:
    if TM(x) halts in y steps:
        return False
    y += 1

The difference between this and the earlier example is that this program can only successfully determine that natural numbers aren’t in the set, not that they are in the set. So we can only decide the “no” cases rather than the “yes” cases. Sets like this are called co-recursively enumerable.

So far we have that Σ0 = Π0 = Δ0 = {decidable sets of naturals}, Σ1 = {recursively enumerable sets of naturals}, and Π1 = {co-recursively enumerable sets of naturals}. To finish off level one of the hierarchy, we need to discuss Δ1.

A set is Δ1 if it is both Σ1 and Π1, i.e. if it can be defined by either type of sentence. In other words, it is both recursively enumerable (we can semi-decide the True instances) and co-recursively enumerable (we can semi-decide the False instances). Running the two semi-decision procedures side by side, one of them must eventually halt on any input, so a set is Δ1 if and only if it’s decidable! So every Δ1 set is also a Δ0 set.
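This equivalence can be made concrete. Given a Σ1 definition ∃y φ(x, y) and a Π1 definition ∀y ψ(x, y) of the same set, a single loop decides membership, since for each x exactly one of the two searches must succeed. A sketch, with the even numbers as a toy example (the function names are mine):

```python
def decide_delta1(phi, psi, x):
    # Search simultaneously for a witness to ∃y φ(x, y) and a
    # counterexample to ∀y ψ(x, y); one of the two must turn up.
    y = 0
    while True:
        if phi(x, y):
            return True   # witness found: x is in the set
        if not psi(x, y):
            return False  # counterexample found: x is not in the set
        y += 1

# The even numbers: Σ1 form ∃y (y + y = x), Π1 form ∀y (y + y + 1 ≠ x)
is_even = lambda x: decide_delta1(
    lambda x, y: y + y == x,
    lambda x, y: y + y + 1 != x,
    x)
```

is_even(10) returns True and is_even(7) returns False. Note that the witness search alone would run forever on odd inputs; it’s the Π1 definition that lets the loop terminate on non-members.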

A diagram might help at this point to see what we’ve learned:

(“Recursive” is another term for “decidable”)

Σ2, Π2, Δ2

We now enter the second level of the hierarchy. Having exhausted the recursively enumerable, co-RE, and decidable sets, we know that the new sets we encounter can no longer be semi-decided by any algorithms. We’ve thus entered a new realm of uncomputability.

A set is Σ2 if it can be defined by a sentence with a block of ∃s, then a block of ∀s, and finally a Σ0 formula (free of unbounded quantifiers). Let’s start with some simple examples:

∃y ∀z (y⋅z < x)
∃y ∃z ∀w ∀u ((y⋅w < x) ∨ ((z + u) > x))

As with any examples you can write as simply as that, the sets these define are also Σ0. So what about sentences that define genuinely new sets that we haven’t seen before? Here’s one:

∃y ∀z (TM(x) doesn’t halt on input y after z steps)

This defines the set of Turing machines that compute partial functions (i.e. machines that run forever on at least one input).

Something funny happens when you start trying to translate this sentence into a program. We can start by translating the existential quantifier up front:

y = 0
while True:
    if ∀z (TM(x) doesn't halt on input y after z steps):
        return True
    y += 1

Now let’s see what it would look like to expand the universal quantifier:

z = 0
while True:
    if TM(x) halts on input y after z steps:
        return False
    z += 1

There’s a problem here. The first program only halts if the second one returns True. But the second one never returns “True”; it’s Π1, co-recursively enumerable! We’ve mixed a recursively enumerable program with a co-recursively enumerable one, and as a result the program we wrote never halts on any input!

This is what happens when we go above the first level of the hierarchy. We’ve entered the realm of true uncomputability.

On the other hand, suppose that we had access to an oracle for the halting problem. In other words, suppose that we are allowed to make use of a function Halts(M, i) that decides for any Turing machine M and any input i, whether M halts on i. Now we could write the following program:

y = 0
while True:
    if not Halts(TM(x), y):
        return True
    y += 1

And just like that, we have a working program! Well, not quite. This program halts so long as its input x is actually in the set of numbers that encode TMs that compute partial functions. If x isn’t in that set, then the program runs forever. This is the equivalent of recursive enumerability, but for programs with access to an oracle for the halting problem!

This is what characterizes Σ2 sets: they are recursively enumerable using a Turing machine with an oracle for the halting problem.

Perhaps predictably, Π2 sets are those that are co-recursively enumerable using a Turing machine with an oracle for the halting problem. Equivalently, these are sets that are definable with a sentence that has a block of ∀s, then a block of ∃s, and finally a Σ0 sentence. An example of such a set is the set of Turing machines that compute total functions. Another is the set of Busy Beaver numbers.

And just like on level one of the hierarchy, the Δ2 sets are those that are both Σ2 and Π2. Such sets are both recursively enumerable and co-recursively enumerable with the help of an oracle for the halting problem. In other words, they are decidable using an oracle for the halting problem. (Notice that again, Δ2 sets are “easier to decide” than Σ2 and Π2 sets.)

The Busy Beaver numbers are an example of a Δ2 set. After all, if one had access to an oracle for the halting problem, they could compute the value of the nth Busy Beaver number. Simply list out all the n-state Turing machines, use your oracle to see which halt, and run all of those that do. Once they’ve all stopped, record the number of steps taken by the longest-running. With the ability to compute the nth Busy Beaver number, we do the following:

n = 1
while True:
    if x == BB(n):
        return True
    n += 1

This returns True for all inputs that are Busy Beaver numbers, but runs forever on those that aren’t. We can fix this with bounded quantifiers: since the Busy Beaver numbers are a strictly increasing sequence, once we’ve reached an n such that x < BB(n), we can return False. This gives us an algorithm that decides the set of Busy Beaver numbers!
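The whole procedure can be sketched as follows. Writing BB for the (hypothetical, oracle-assisted) function computing the nth Busy Beaver number, the bounded-check version looks like this; for testing I’ve stubbed BB with commonly cited small values of the maximum shift function (1, 6, 21, 107), which should be treated as illustrative:

```python
def decides_bb(x, BB):
    # Decide whether x is a Busy Beaver number, given the ability to
    # compute BB(n) (which in reality requires a halting oracle).
    n = 1
    while True:
        b = BB(n)
        if x == b:
            return True
        if x < b:
            return False  # BB is strictly increasing: x can't appear later
        n += 1

# Stub for the oracle-assisted BB function (illustrative values only).
BB_VALUES = {1: 1, 2: 6, 3: 21, 4: 107}
toy_BB = lambda n: BB_VALUES[n]
```

For example, decides_bb(21, toy_BB) returns True and decides_bb(10, toy_BB) returns False.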

In fact, the Busy Beaver numbers are even better than Δ2; they’re Π1, co-recursively enumerable! But that’s a tale for another time.

Σn, Πn, Δn

Perhaps you’re now beginning to see the pattern.

Σ3 sets are those that can be defined by a sentence that has a block of ∃s, then a block of ∀s, then a block of ∃s, and finally a Σ0 sentence. We could more concisely say that Σ3 sets are those that can be defined by a sentence that has a block of ∃s and then a Π2 sentence.

Π3 sets are those definable by a sentence that starts with a block of ∀s, then a block of ∃s, then a block of ∀s, and finally a Σ0 sentence. Equivalently, Π3 sets are those definable by a sentence that has a block of ∀s and then a Σ2 sentence.

And Δ3 sets are those that are Σ3 and Π3. If you’ve picked up the pattern, you might guess that the Δ3 sets are exactly those that are decidable with an oracle for the halting problem for Turing machines that have an oracle for the halting problem. The Σ3 sets are those that are recursively enumerable with such an oracle, and the Π3 sets are co-RE with the oracle.

And so on for every natural number.

Σn sets are definable by a sentence that looks like ∃x1 ∃x2 … ∃xk φ(x, x1, x2, …, xk), where φ is Πn-1.
Πn sets are definable by a sentence that looks like ∀x1 ∀x2 … ∀xk φ(x, x1, x2, …, xk), where φ is Σn-1.
And Δn sets are both Σn and Πn.

At each level of the hierarchy, we introduce new oracle Turing machines with more powerful oracles. We’ll call the Turing machines encountered on the nth level TMns.

Σn sets are recursively enumerable with an oracle for the halting problem for TMns.
Πn sets are co-recursively enumerable with an oracle for the halting problem for TMns.
Δn sets are decidable with an oracle for the halting problem for TMns.

One might reasonably wonder if at some stage we exhaust all the sets of natural numbers. Or do we keep finding higher and higher levels of uncomputability?

Yes, we do! For each n, Σn is a proper subset of Σn+1 and Πn is a proper subset of Πn+1. One easy argument for why the hierarchy never exhausts all sets is that there are uncountably many sets of naturals, and only countably many sets of naturals in each level of the hierarchy. (This is because each level of the hierarchy corresponds to the sets that are computable/recursively enumerable/co-RE using a TM with a particular oracle, and there are only countably many such TMs, each of which computes or enumerates a single set.)

This tells us that the hierarchy doesn’t collapse at any finite stage. Each step up, we find new sets that are even harder to decide than all those we’ve encountered so far. But (and now prepare yourself for some wildness) we can make the same cardinality argument about the finite stages of the hierarchy to tell us that even these don’t exhaust all the sets. There are only countably many finite stages of the hierarchy, and each stage contains only countably many sets of naturals!

What this means is that even if we combined all the finite stages of the hierarchy to form one massive set of sets of naturals decidable with any number of oracles for halting problems for earlier levels, we would still have not made a dent in the set of all sets of naturals. To break free of the arithmetic hierarchy and talk about these even more uncomputable levels, we need to move on to talk about the set of Turing degrees, the structure of which is incredibly complex and beautiful.

A Self-Interpreting Book

A concept: a book that starts by assuming the understanding of the reader and using concepts freely, and as you go on it introduces a simple formal procedure for defining words. As you proceed, more and more words are defined in terms of the basic formal procedure, so that halfway through, half of the words being used are formally defined, and by the end the entire thing is formally defined. Once you’ve read through the whole book, you can start over and read it from the beginning with no problem.

I just finished a set theory textbook that felt kind of like that. It started with the extremely sparse language of ZFC: first-order logic with a single non-logical symbol, ∈. So the alphabet of the formal language consisted of the following symbols: ∈ ( ) ∧ ∨ ¬ → ↔ ∀ ∃ x ‘. It could have even started with a sparser formal language if it were optimizing for alphabet economy: ∈ ( ) ∧ ¬ ∀ x ‘ would suffice. As time passed and you got through more of the book, more and more things were defined in terms of the alphabet of ZFC: subsets, ordered pairs, functions from one set to another, transitivity, partial orders, finiteness, natural numbers, order types, induction, recursion, countability, real numbers, and limits. By the last chapter it was breathtaking to read a sentence filled with complex concepts and realize that every single one of these concepts was ultimately grounded in this super simple formal language we started with, with a finitistic sound and complete system of rules for how to use each one.
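As a small illustration of why the sparser alphabet suffices: every use of ∨, →, ↔, or ∃ can be mechanically rewritten in terms of ∧, ¬, and ∀. Here’s a sketch of such a rewriter, with formulas represented as nested tuples; the representation and all names are my own illustration, not anything from the book.

```python
# Rewrites a formula so that it uses only ∧ ("and"), ¬ ("not"), and
# ∀ ("forall"), plus the atomic membership relation ("mem", x, y).
def reduce_formula(f):
    op = f[0]
    args = [reduce_formula(a) if isinstance(a, tuple) else a for a in f[1:]]
    if op == "or":       # p ∨ q  ≡  ¬(¬p ∧ ¬q)
        p, q = args
        return ("not", ("and", ("not", p), ("not", q)))
    if op == "implies":  # p → q  ≡  ¬(p ∧ ¬q)
        p, q = args
        return ("not", ("and", p, ("not", q)))
    if op == "iff":      # p ↔ q  ≡  (p → q) ∧ (q → p)
        p, q = args
        return ("and", reduce_formula(("implies", p, q)),
                       reduce_formula(("implies", q, p)))
    if op == "exists":   # ∃x φ  ≡  ¬∀x ¬φ
        x, body = args
        return ("not", ("forall", x, ("not", body)))
    return (op, *args)   # "and", "not", "forall", "mem" pass through
```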

But could it be possible to really fully define ALL the terms used by the end of the book? And even if it were, could the book be written in such a way as to allow an alien that begins understanding nothing of your language to read it and, by the end, understand everything in the book? Even worse, what if the alien not only understands nothing of your language, but starts understanding nothing of the concepts involved? This might be a nonsensical notion: an alien that can read a book and do any level of sophisticated reasoning but doesn’t understand concepts like “and” and “or”.

One way that language is learned is by “pointing”: somebody asks me what a tree is, so I point to some examples of trees and some examples of non-trees, clarifying which is and which is not. It would be helpful if in this book we could point to simple concepts by means of interactive programs: an e-book where an alien reading it encounters some exceedingly simple programs that they can experiment with, putting in inputs and seeing what results. For instance, we might have a program that takes as input either 00, 01, 10, or 11, and outputs the ∧ operation applied to the two input digits. Nothing else would be allowed as input, so after playing with the program for a little while you learn everything that it can do.
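A toy version of that ∧-pointing program might look like this (a sketch; the function name and error behavior are my own choices):

```python
def and_program(s):
    # Accept only the four allowed inputs; reject everything else,
    # so a user can exhaustively explore the program's behavior.
    if s not in ("00", "01", "10", "11"):
        raise ValueError("only 00, 01, 10, and 11 are allowed as inputs")
    # Output the ∧ (AND) of the two input digits.
    return str(int(s[0]) & int(s[1]))
```

After a handful of experiments, the program’s entire behavior is known: it outputs 1 on input 11 and 0 on the other three inputs.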

One feature of such a book would be that it would probably use nothing above first-order logical concepts. The reason is that the semantics of second-order logic cannot be captured by any sound and complete proof system, meaning that there’s no finitistic set of rules one could explain to an alien so that they know how to use the concepts involved correctly. Worse, the set of second-order tautologies is not even recursively enumerable (worse than the set of first-order tautologies, which is merely undecidable), so no amount of pointing-to-programs would suffice. First-order ZFC can define a lot, but can it define enough to write a book on what it can define?

Epsilon-induction and the cumulative hierarchy

The axiom of foundation says that every non-empty set must contain an element that it is disjoint with. One implication of this is that no set can contain itself. (Why? Suppose ∃x (x ∈ x). Then you can pair x with itself to form {x}. {x} is non-empty, so it must contain an element that it’s disjoint with. But its only element is x, and that element is not disjoint with {x}: both contain x as an element.)

Another implication of foundation is that there can be no infinite descending sequence of sets. A sequence of sets is a function f whose domain is ω. For each k ∈ ω we write f(k) as fk. Suppose that f is a function with the property that for each n ∈ ω, fn+1 ∈ fn. Then by the axiom of replacement, there would exist the set {f0, f1, f2, …}. Each element of this set contains the following element, so none of the elements are disjoint with the entire set, violating foundation. So the sequence we started with could not exist.

This allows us to prove a pretty powerful and surprising result about set theory. The result: for any formula Φ, if Φ holds of at least one set, then there must exist a set X such that Φ(X) holds, but Φ doesn’t hold for any element of X.

Formally:

(∃A Φ(A)) → ∃X (Φ(X) ∧ ∀Y ∈ X (¬Φ(Y))).

Suppose this was false. Then Φ would be satisfied by at least one set, and every set that satisfied Φ would contain an element that satisfied Φ. We can use this to build an infinite descending sequence of sets. Take any set that satisfies Φ and call it X0. Since X0 satisfies Φ, it must contain at least one element that satisfies Φ. Define X1 to be any such element. X1 satisfies Φ as well, so it must contain at least one element that satisfies Φ, which we’ll call X2. And so on forever (making these arbitrary choices at each step uses the axiom of choice). Since no set can contain itself, Xn+1 is always distinct from Xn. We can use recursion and then replacement to construct the set {X0, X1, X2, …}, which is an infinite descending sequence of sets. Contradiction!

This peculiar principle turns out to be useful to prove some big results, and the proofs are always a little funny and enjoyable to me. I’m not aware of any name for it, so I’ll call it the “far-from-the-tree” principle: for any property that holds of at least one set, there’s a set that has that property, none of whose elements have the property. I’ll now show two big results that are proven by this principle.

Everything’s in the Cumulative Hierarchy

The cumulative hierarchy is constructed recursively as follows:

V0 = ∅
Vα+1 = 𝒫(Vα), for each ordinal α
Vλ = ⋃{Vβ : β < λ}, for each limit ordinal λ

The first few levels of the hierarchy look like this:

V0 = ∅
V1 = {∅}
V2 = {∅, {∅}}
V3 = {∅, {∅}, {{∅}}, {∅, {∅}}}
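These finite levels are small enough to compute directly. Here’s a sketch in Python, representing sets as frozensets (illustrative code, not from the text):

```python
from itertools import combinations

def powerset(s):
    """Return the power set of a frozenset, as a frozenset of frozensets."""
    items = list(s)
    return frozenset(frozenset(c)
                     for r in range(len(items) + 1)
                     for c in combinations(items, r))

def V(n):
    """Compute the nth finite level of the cumulative hierarchy."""
    level = frozenset()          # V0 = ∅
    for _ in range(n):
        level = powerset(level)  # V(k+1) = P(V(k))
    return level
```

The sizes grow as 0, 1, 2, 4, 16, 65536, …, since each level is the power set of the one before. One can also check transitivity directly: every element of V(3) is a subset of V(3).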

One can prove that Vα is transitive for each ordinal α (in other words, that every element of Vα is also a subset of Vα). Also, due to the nature of the power-set operation, for any two ordinals α < β, Vα ⊆ Vβ.

Now, the reason the cumulative hierarchy is interesting is that we can prove that every set is on some level of the hierarchy. We do so with the far-from-the-tree principle:

Suppose that there was a set X that wasn’t on the hierarchy. This means that ∀α, X ∉ Vα. We use the far-from-the-tree principle with the property of “not being on the hierarchy” to conclude that there must be a set Y such that Y is not on the hierarchy, but every element of Y is on the hierarchy. So for each element y ∈ Y there’s some ordinal αy such that y ∈ Vαy. Let α be the supremum of all these ordinals (the collection of them is a set by replacement); since Vαy ⊆ Vα for each y, every element of Y is in Vα. Now just go up by one level! Since every element of Y is in Vα, Y is a subset of Vα, so Y is in the power set of that level. So Y actually is on the hierarchy! Contradiction.

This shows that any set you choose shows up on some level of the cumulative hierarchy. Incidentally, the rank of a set is defined as the first level that the set appears on, and one can prove that for each ordinal α, rank(α) = α.
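For hereditarily finite sets, the rank can be computed by a direct recursion: rank(∅) = 0, and otherwise the rank is one more than the largest rank of an element. A sketch, again with frozensets (my own illustration):

```python
def rank(s):
    # rank(∅) = 0; otherwise the least level strictly above the ranks
    # of all the elements. The recursion terminates because the sets
    # here are hereditarily finite.
    return 0 if not s else 1 + max(rank(e) for e in s)
```

For example, the von Neumann ordinal 2 = {∅, {∅}} has rank 2, just as the result rank(α) = α predicts.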

∈-Induction

∈-induction is a really strong principle of induction, because it is induction over all sets, not just the natural numbers or the ordinals. The principle of ∈-induction is the following:

Suppose that for every set X, if Φ(x) holds for every x ∈ X, then Φ(X) holds.
Then Φ holds of every set.

Suppose that this principle was false. Then it’s true that for every set X, if Φ(x) holds for every x ∈ X, then Φ(X) holds, but false that Φ holds of every set. So there’s some set Y such that ¬Φ(Y). We use the far-from-the-tree principle with the property “¬Φ”. This tells us that there’s some set Z such that ¬Φ(Z), but Φ(z) for every z in Z. But we already said that if Φ(z) for every z in Z, then Φ(Z)! Contradiction.

I hope you like these applications as much as I do. They feel to me like very strong and surprising results, especially given how simply they can be proved.