There’s something I’m confused about with regard to ZFC. We already know that no first-order theory allows you to perfectly capture the semantics of concepts like *the natural numbers* or *finitely many*. The best we can do if restricted to first-order logic (which is where most of modern mathematics tries to stay) is to have theories with surrogates for those concepts.

For instance, there’s this object called ω that stands in for the natural numbers in ZFC, even though there are all these models of ZFC in which ω contains many more objects besides the natural numbers (in fact, uncountably many in some models). And “X is finite” is replaced by “there’s a bijection from X to an element of ω.”

In formalizing these surrogate concepts, we make sure to not include any false statements about the concepts, which gives us a type of soundness of our surrogate concept. I.e., ZFC isn’t going to prove things about ω that are false of the set of natural numbers, because in one model ω *is* the set of natural numbers.

But it doesn’t give us completeness. We aren’t going to be able to prove ALL the true first-order sentences about the natural numbers, or about the concept of finiteness. (This is of course just a product of the first-order nature of the theories in which we’re defining these surrogates.)

So in each of these cases there will be true statements involving the concept that our theory will be agnostic about. We can look for “really good” surrogates, surrogates which allow us to prove most of the important and interesting true statements about the concept, and only fail in very abstract and unusual settings. The degree to which we can make good surrogates is the degree to which a first-order thinker can make sense of and usefully apply non-first-order concepts. (A first-order thinker being one that has no built-in understanding of non-first-order concepts.)

So one general question is: how good is a given surrogate? And another is, how do we know based on the axioms how good of a surrogate we’ll be getting? This is the thing I’m confused about.

In ZFC, there’s this weird phenomenon of the theory “getting the right answers accidentally.” It’s a little tough to put into words, but here’s an example:

ZFC can prove that |P(X)| > |X| for all X. So for instance ZFC can prove that |P(ω)| > |ω|. Meaning that ZFC can prove that the power set of the naturals is uncountable. But ZFC has countable models! (As does every first-order theory.) In those models, the power set of the naturals is NOT uncountable.

First order logic is sound, so what’s going on ISN’T that ZFC is proving a sentence that’s false in some of its models. It’s that the sentence is false in that model, if interpreted to be about the actual concept, and true in that model if interpreted to be about the surrogate concept. The surrogate for “P(ω) is uncountable” in ZFC is “there’s no bijection from P(ω) to ω”. And in the countable models of ZFC, the bijection that would prove the equinumerosity of ω and P(ω) is missing! So in those models, it’s actually TRUE that “there’s no bijection from P(ω) to ω”, even though P(ω) and ω really do have the same number of elements.

This is the sort of subtle nonsense you get used to in mathematical logic. Two accidents cancel out: first ZFC makes the mistake of having models where P(ω) is countable, and second it makes the mistake of losing track of the bijection between P(ω) and ω. And as a result, ZFC is able to prove the correct thing.

This seems really weird to me, and it’s an unsettling way for a supposed foundations of mathematics to be getting the right answers. This type of thing happens a lot, and it feels like we keep getting “lucky” in that our unintended interpretations of a sentence interfere with each other and cancel out their problematic features. It makes me wonder why we should be confident that ZFC will continue giving us the right answers (as opposed to being agnostic on important basic questions). And in fact we do have some examples of important and basic questions that ZFC is agnostic on, most dramatically that if |X| < |Y|, then |P(X)| < |P(Y)|!

It’s not that I doubt that ZFC’s surrogates for non-first-order concepts end up allowing us to prove an enormous number of true facts about these concepts, because they clearly do. But is there some principled reason we can give for WHY the axioms of ZFC lead us to such a robust framework for mathematics in which we can prove many true statements?

One thing that suggests this is the power of the axiom of replacement. It’s just an extremely strong and useful axiom that allows us to prove a whole lot. But that doesn’t seem to help explain the “right-by-accident” phenomenon. So what does?

I think I take some different views, but that may well be the fault of my ignorance…

There’s first-order PA (FOL PA) and the things provable in that theory. Same with second-order PA (SOL PA).

I don’t think it’s necessary to talk about things that are “true” or “false” about the natural numbers as defined above beyond that.

Then there are other theories with other axioms (e.g. ZF) which interpret all the notions of that theory (0, S, the axioms, …). These theories actually even have several ways of modeling FOL PA. And for some statements of ZF (corresponding to statements of FOL PA that are not provable in FOL PA), different models let you prove this or that (e.g. one model may let you prove that all numbers can be reached via a finite succession of S’s, while another model may have numbers that aren’t reachable this way).

I think it’s fine to say the two things discussed in the above two paragraphs are all there is to the theory of natural numbers. If by “natural numbers” you mean FOL PA, then there are things about the natural numbers which aren’t true or false.

You lament that the FOL theory doesn’t “capture the semantics” (not all models are isomorphic), but I’d like you to reflect and argue why that’s a bad thing. Given the fact that, when you use natural numbers, you really only ever use PA (and not any of its models), you have the situation that PA just proves what it proves. You don’t miss out on anything!

Let me grant you this: it’s more complicated than one might expect, in that the interplay between PA and set theory is not 1:1 in the categoricity sense. I’d say let’s consider it a feature: the mathematics is richer.

I repeat and rephrase for your consideration: you don’t work with a model when you do number theory. If you were to use a different theory and prove things “about the natural numbers, i.e. the smallest model”, then you have (by definition) not made use of just PA, and you’ve proven something not about the numbers, but about a model of PA (the cosy small one).

I might be too formalist for your taste, but I have no desire for there to be hands-on numbers in the Platonic realm, about which each statement is either true or false. Who benefits from that?

>The surrogate for “P(ω) is uncountable” in ZF is “there’s no bijection from P(ω) to ω”

I don’t follow here: what’s an alternative definition of uncountable, other than the one involving bijections?

>Two accidents cancel out: first ZF makes the mistake of having models where P(ω) is countable, and second it makes the mistake of losing track of the bijection from P(ω) and ω.

I think this use of language makes the situation more complicated than it is. There is no model in which P(ω) is countable, where I used the phrase “in which” on purpose. If you say “models where P(ω) is countable”, then this is of course only right when we allow counting via functions in the ambient theory, and I’d just be more explicit than saying “where”. Set theory proves |P(X)| > |X| for all sets, and it interprets FOL and SOL PA and a notion of function in a way consistent with that: there’s no counting function in that model.

I’m sure you know that well; the only difference is that you would like to read a problem into it, speaking of mistakes that the theory makes 😀

Leaving aside the “countable model of P(ω)” thing, what remains of your query is a question along the lines of: how come a rich theory like ZF ends up being consistent at all? Maybe because in normal math one only really uses something like SOL PA, and its axioms look less scary (have you looked into the Reverse Mathematics program?).

Replacement is actually not used much unless you go big 🙂 In case you’re interested in where ZF differs from topos theory, there are googleable slides by Shulman called “Unbounded quantifiers and strong axioms in topos theory”.

Let me be maybe a little bit more hands-on with the model statement.

The countability of the set P(X) by a set Y is implied by the existence of a bijective function Y → (X → {0,1}), i.e. the existence claim for a certain function of type (X × Y) → {0,1}, i.e. the existence of a certain relation definition on X × Y.

If X × Y is in bijection with X, then this is the existence of a certain subset definition of X.

If X = Y (and if either X = N or if we assume Choice), then Cantor’s theorem can be stated as saying that a certain subset predicate on X does not lead to a subset in that theory.

In other words, the existence claim for that function is provably inconsistent. It’s like how the predicate “the biggest non-empty subset without a maximal element” does not give a subset of {3,4,5} (because all non-empty subsets inevitably have a biggest element).
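The diagonal predicate behind Cantor’s theorem can be made concrete on a finite toy case (the particular X and f below are arbitrary illustrative choices, not anything from the discussion above): for any map f sending each x ∈ X to a subset of X, the diagonal set D = {x ∈ X : x ∉ f(x)} disagrees with f(x) on the membership of x itself, so D is never in the image of f.

```python
# Cantor's diagonal trick on a finite toy example. Given any map f
# assigning to each x in X a subset f(x) of X, the diagonal set
# D = {x in X : x not in f(x)} differs from f(x) at x itself,
# so no such f can list all subsets of X. (X and f are arbitrary.)
def diagonal_set(f, X):
    return {x for x in X if x not in f(x)}

X = {0, 1, 2}
f = {0: {0, 1}, 1: set(), 2: {2}}
D = diagonal_set(lambda x: f[x], X)

# D and f(x) disagree on the membership of x, for every x in X:
assert all((x in D) != (x in f[x]) for x in X)
print(D)  # here: {1}
```

The same one-liner works for any finite X and any choice of f; the infinite case is the same argument with X = ω.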

Take SOL PA with the naturals as domain N and write down the statement for X = Y = N.

To model SOL PA within your set theory is to model its primitive symbols and validate the few axioms that theory has using your own rules of inference.

Your set theory cannot break the uncountability theorem (non-existence of a subset in the model) without being inconsistent.

If you now go and let X = N but use as Y = N’ the natural numbers of your ambient set theory (which counts P(N) via N’), the theorem doesn’t even apply.

It’s probably not quite fair either. The SOL PA of the model has a number collection comprehension scheme granting existence of subsets of N via properties of numbers n, while you’ve got a set collection comprehension scheme granting existence of elements in V via properties of sets x.

Formalism is fascinating and I love thinking about it. My default assumption tends to be that we do have an understanding of the natural numbers as the model itself, not just as formal deductive system, and that this understanding doesn’t need to be grounded in any externally existing Platonic reality. For instance, I think that the theory of true arithmetic makes sense to talk about, and that we can do a good job of precisely characterizing why it’s so hard to nail down (it has Turing degree 0^(ω)). True second order arithmetic is even more uncomputable, though we don’t know its Turing degree yet. I have trouble seeing how one can make sense of basic computational notions like the Busy Beaver numbers without having the natural numbers as a background model. After all, what does it mean for a computation to terminate? It means that if you run it for some number of steps, it will enter the halt state. But what “number of steps”? Can it be a nonstandard number of steps? Presumably not, but to say that you must be committed to an understanding of the standard model of PA. Without this commitment, the question of whether a given Turing machine halts or not no longer has a definite answer, and this feels very uncomfortable to say. Surely there’s a reality about what Turing machines do, and without this assumption I struggle to see how one can even have an unambiguous notion of “formal systems” to begin with.

I note that one can be a realist about the natural numbers, while maintaining a formalist position on controversial set theoretic questions like the continuum hypothesis. (Models of ZFC with standard ω can differ on the relationship between ℵ1 and ℶ1.) Though I’m sometimes sympathetic to set theoretic realism, I’m much more sympathetic to formalism here than formalism with respect to the natural numbers.

As for defining “uncountable” without referencing bijections, one way to do this is using the logic FOL+U. The approach there is to treat uncountability as primitive and build it into the logic, rather than reducing it to other logical notions.

I like the FOL+U possibility. Is it conservative over all set theories, or can you break things with it?

Thanks for the reply, although it kind of moves away from my point that I think one can’t say ZF is “getting it right by accident”, since a model not getting it right would mean a hard inconsistency.

I don’t follow the “True arithmetic but not Platonic” line of thought, but I always found that one difficult.

What I can do is imagine a 0 on paper and a long, long sequence of S’s in front of it to give “reality” to it, and thus if I speak about a Turing machine halting after some number of steps, then this number won’t be a non-constructive one (since there are no ordinal jumps that anybody could do on paper, not even in principle).

For this, here, I only involve the ordinal aspect of numbers, no real computation apart from adding an S repeatedly. The arithmetic of the numbers will be more complicated.

I don’t think it’s problematic that our logic wouldn’t prove that some Turing machine must either halt or not halt. If we can’t prove either, and we assume consistency of our theory, the first guess would of course be that this means the machine doesn’t halt, but that’s already metalogical.

However, bringing Turing machines into the picture is rather an argument against impredicative foundations.

I think, quite formally, ZFC can be consistent and prove that some Turing machine will halt without that being the case. So mere consistency of a theory like that might not have to mean it’s correct in this way.

Three points:

1. You say that ZFC can be consistent and prove that a non-halting TM will halt. This entails that ω contains nonstandard numbers in every model of ZFC; otherwise, ZFC would have a model in which ω contains only the standard naturals and no more, and in that model the statement “TM X halts” would be false, meaning that it couldn’t be proven by ZFC.

2. True arithmetic is the technical name for the set of sentences of first-order PA that are true of ℕ, the standard model of PA: {φ : ℕ ⊨ φ}. The set of Gödel numbers for these sentences is the object that I referred to as having Turing degree 0^(ω). The notion of true arithmetic cannot be made sense of without talking about the standard model of PA, and I brought it up to suggest that in fact mathematicians can and do reason about this standard model. We can even describe and explain some of its properties in detail (including why it’s not computable). We even know exactly what type of halting oracle we’d need to decide this set!

3. When I’m talking about “ZFC not getting something right”, I am not talking about ZFC being inconsistent. Consistency is a purely syntactic and correspondingly quite weak criterion; a consistent theory can prove its own inconsistency, can be omega-inconsistent, and can be arithmetically unsound. ZFC has models with countable universes (this is an extremely fundamental and basic result in model theory), and the ZFC-sentence “P(ω) is uncountable” is true in these models. Thus the ZFC-sentence “P(ω) is uncountable” is not adequately capturing the semantics of countability, and this is what I’m referring to as a mistake.

Okay, thanks for the response.

I don’t quite follow the thing about the semantics of countability or the right/wrong thing, but I probably will if I read along your blog.

Regarding the Turing machine, I’ve read something on that topic in a paper by a guy named Nik Weaver. I’m not sure if every statement about numbers that ZF proves must indeed also be reflected in a model inside it. Maybe I can find more about it.

A couple of times in earlier comments you mentioned things being “provable in second order PA”. It’s worth noting that there is no unambiguous notion of provability in second order PA, because unlike first-order logic, second-order logic has no sound and complete proof system. There are multiple different proof systems of varying strengths, all of which are sound but none of which are complete.

From what I’ve seen of him, Nik Weaver doubts the premise that ZFC is arithmetically sound (i.e. that ZFC has a model in which ω contains no non-standard elements). Philosophically, the arithmetical soundness of ZFC is kind of like the consistency of ZFC: if it holds then we’ll never know that for sure, but it seems very likely and only a small minority of mathematicians seriously doubt it. I’d be curious to see a link to the paper by Weaver you mentioned.

To be clear though: Anything that ZFC proves is true in all of its models; this is just the soundness theorem for first-order logic ((ZFC ⊢ φ) ⇒ (ZFC ⊨ φ)). If ZFC is arithmetically sound, then ZFC cannot prove any false statements about natural numbers, including statements about Turing machines halting. But it can still remain agnostic about true statements of arithmetic, and indeed it must. For instance, there’s an 8000-state Turing machine that enumerates the theorems of ZFC and halts if it ever finds a contradiction. Assuming that ZFC doesn’t in fact prove a contradiction, this TM never halts and ZFC cannot prove this. But it also cannot refute it.
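That proof-searching machine can be sketched in miniature. The toy derivation rules below are made-up stand-ins for a real enumeration of ZFC proofs; only the structure matters: the machine halts exactly when a contradiction turns up, and otherwise runs forever (cut off here by an explicit search budget).

```python
from collections import deque

def find_contradiction(axioms, rules, bottom="⊥", limit=10_000):
    """Enumerate theorems breadth-first; return True as soon as the
    contradiction symbol is derived, or None when the search budget is
    exhausted (in the unbounded analogue this branch may never halt)."""
    seen = set(axioms)
    queue = deque(axioms)
    steps = 0
    while queue and steps < limit:
        theorem = queue.popleft()
        steps += 1
        if theorem == bottom:
            return True  # found a derivation of a contradiction: halt
        for rule in rules:
            for derived in rule(theorem):
                if derived not in seen:
                    seen.add(derived)
                    queue.append(derived)
    return None  # budget exhausted without finding a contradiction

# Toy rule sets (stand-ins for a real ZFC proof enumeration):
inconsistent = [lambda t: {t + "'"} if len(t) < 5 else set(),
                lambda t: {"⊥"} if t == "p'''" else set()]
consistent = [lambda t: {t + "'"} if len(t) < 5 else set()]

print(find_contradiction({"p"}, inconsistent))  # True: the machine halts
print(find_contradiction({"p"}, consistent))    # None: nothing to find
```

The point of the analogy: if ZFC is consistent, the real machine sits in the “never halts” branch, and ZFC itself can neither prove nor refute that fact.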

Trying one more time on the semantics of uncountability issue: one of the primary goals of any logic is to develop a deductive system that allows us to prove things about the models of that logic. So for instance, we like having a deductive system in which (A ∧ B ⊢ A) holds, because in every model in which (A ∧ B) holds true, A holds true as well. And we like having a deductive system in which (∀x P(x) ⊢ P(a)) holds, because in a model where the predicate P is true of every object, P is also true of the particular object denoted by a. “Truth” here isn’t an abstract wishy-washy notion; it’s built into the definition of a model. A model of a FOL theory assigns truth values to every sentence of the language, and it does so in a way that’s consistent with the semantics of “∧”, “¬”, “∀” and so on. If your deductive system didn’t allow you to prove A from (A ∧ B), then it would be failing to capture the semantics of “∧”.

That’s the same type of thing that’s going on with countability: try as we might to develop a first-order characterization of the cardinalities of structures in our models, we always end up with mischaracterizations; cases where our models have features that differ from what we prove about them.
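As an aside, the “true in every model of the premise” criterion from two paragraphs up can be checked by brute force in the propositional fragment, where a model is just a truth assignment. A small sketch (the helper name `entails` is my own invention):

```python
from itertools import product

def entails(premise, conclusion, n_vars=2):
    """Propositional semantic entailment: check that the conclusion holds
    in every truth assignment (i.e. every model) satisfying the premise."""
    return all(conclusion(*v)
               for v in product([False, True], repeat=n_vars)
               if premise(*v))

print(entails(lambda a, b: a and b, lambda a, b: a))  # True:  A ∧ B ⊨ A
print(entails(lambda a, b: a or b,  lambda a, b: a))  # False: A ∨ B ⊭ A
```

For first-order logic there is no such exhaustive check, which is exactly why we lean on deductive systems, and why mismatches like the countability one can arise between what we prove and what holds in a given model.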

That one line should be

“There is no model in which P(ω) is countable, where I used the phrase „in which“ on purpose.”

Yeah, you don’t need to lay out logic 101 for me in every post, but thank you very much for the effort. It appears a comment section is a good place to get one’s meaning across. E.g. I don’t know for sure how you mean it when you say

>this is an extremely fundamental and basic result in model theory

>“Truth” here isn’t an abstract wishy-washy notion

but likely this isn’t an issue that would pop up in a face to face conversation.

The Nik Weaver paper was linked in some MathOverflow thread. The thread was interesting in itself, although I can’t find it now anymore. The paper is called “Is set theory indispensable?” and is on arxiv here: https://arxiv.org/abs/0905.1680

PS I don’t know for sure where my comments will end up when I post them. E.g. here on your last comment, I didn’t see a direct reply or comment field after the comment:

https://i.imgur.com/9ippCXR.png

I’m on a MacBook Air right now, in Chrome.

PPS: Sorry for this long line of posts; I don’t know if you want comments or not. Just discussing phrases like “getting right by accident” wasn’t my goal.

Ah, and now I see that one of the comments (one of mine) does have a “Reply” button while the one above does not:

https://i.imgur.com/Gk1tFQN.png

Also, I see I wrote “is a good place to get meaning across”, while I meant the opposite. Can’t edit the comment, though.

Mhm, maybe the design is that after the blog post a reply is possible, and a reply to a reply is also possible, but it ends at level 2, after which one must go back and reply to one’s own original reply.

You can delete these debug posts btw.

Apologies if anything in my tone came across as condescending or rude; I definitely didn’t mean it that way. I think I’ve fixed the issue with nesting comments, and now it should be possible to nest up to 10 levels. Thanks for pointing that out!

The Nik Weaver paper looks interesting, I’ll give it a read-through.

I agree that it’s harder to read tone over text. It’s also harder to figure out what somebody’s level of background knowledge is, which is why I tried to keep my comments fairly basic in order to avoid making inferential leaps that you wouldn’t follow. Again, no offense intended.