# How to draw lambda diagrams

If you don’t want spoilers for my puzzle a few days ago, don’t read ahead!

I think lambda diagrams are extremely cool, and I haven’t seen any detailed description of how they work online. I’ll start by showing some very simple examples of lambda diagrams, and then build up to more complicated ones.

First of all, what are lambda diagrams? They are pictorial representations of lambda expressions, and hence count as a pictorial system for a large portion of mathematics. I will assume that you understand how lambda calculus works for this post, and if you aren’t familiar then you can do no better than this video for a basic introduction.

## The Basics

Let’s start simple: the lambda expression for “True”: λx.λy.x

Each time we see a new bound variable, we draw a horizontal line to represent that variable. So parsing from left to right, we start with a horizontal line for x.

Next is “λy.“, so we insert another horizontal line for y (below the first horizontal line).

Next we have an x in the function body, which we represent by a vertical line leaving the x line:

And there we have it, the lambda diagram for True!

Now let’s look at False: λx.λy.y

We start the same way as before, with two horizontal lines for our variables x and y:

But now we have a y in our function body, not an x. So we draw our vertical line leaving the y line instead of the x line:

And that’s the diagram for False! The actual diagram needn’t have the labels “x” and “y” in it, because our convention that we draw lines for new variables below existing horizontal lines uniquely specifies which lines stand for which variables.

Let’s now do a slightly more complicated function: λx.λy.y x

Now we have a y in the function body, so we add a vertical line leaving the y bar:

Next is an x, which is being fed as input to the y. We draw this as follows:

Two things have happened here: First I’ve moved the original vertical line for y over to the left to make space. Next, I’ve represented “feeding x to y as input” as a line leaving the x bar and terminating at the vertical line for y.

Take a guess what λx.λy.x y would look like!

Here it is:

Next, let’s look at λx.λy.x x y (which is incidentally the “or” function).

Next, an x:

And another x:

And finally a y:

Notice that this y connects below the first connection, to the original branch. What would it mean if it were connected to the second branch?

As you can see, this would indicate that y is first fed to x, and then the result of THAT is fed to x.
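Python’s own lambdas give a quick way to sanity-check this reading of the diagram. Here’s a minimal sketch (the names TRUE, FALSE, OR, and to_bool are mine, not standard notation) encoding the Church booleans and confirming that λx.λy.x x y behaves as “or”:

```python
# Church booleans: TRUE = λx.λy.x, FALSE = λx.λy.y
TRUE = lambda x: lambda y: x
FALSE = lambda x: lambda y: y

# The expression from the text: λx.λy.x x y
OR = lambda x: lambda y: x(x)(y)

def to_bool(b):
    # Convert a Church boolean to a native bool by applying it to True/False.
    return b(True)(False)

# OR returns TRUE whenever at least one input is TRUE.
results = [to_bool(OR(a)(b)) for a in (TRUE, FALSE) for b in (TRUE, FALSE)]
assert results == [True, True, True, False]
```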

With these principles, you should now be able to draw out diagrams for much more complicated lambda expressions. Try this one: λx.λy.λz.z (y x) (y z x) x

Here it is!

Make sure that this makes sense before moving on!

## Lambda Expressions within Lambda Expressions

Next, we’ll look at how to deal with lambda expressions that contain new lambda expressions within their function body. Here’s a simple example: λx.λy.x (λz.z) y

Everything up until the third λ is the same as always:

Now, to deal with our new variable z, we draw a new horizontal line. But importantly, this horizontal line comes after the vertical line for the x that has already been used!

After the “λz.” we have a “z“, so we draw a line from the z bar to our original vertical line leaving x.

And finally, after all of this we feed y to the original vertical line:

Try this one: λx.λy.x (λz.z z) (λw.λv.v (w w)) y

And here’s another one, where the inside function uses outside variables: λx.λy.x (λz.y z)

Alright, the final category you need to know to understand how to draw any lambda diagram whatsoever is…

## Function Application

Suppose we have the lambda expression (λx.λy.x) (λz.z z). We first draw the lambda diagrams for each of the two component expressions side by side:

And now we simply attach the second to the first, indicating that the entire second lambda expression is fed as input to the first!

Here’s another example for you to try: (λx.x) (λy.λz.z z y) (λw.λv.v (v w))

And one more example, this one using all the tricks we’ve seen so far: (λx.x (λy.λz.z y y) (λw.λv.w (v x))) (λu.u u)

## Beta Reduction

The final thing to know about lambda diagrams is how to actually do computations with them. The basic idea is that when we have one lambda diagram fed as input to another, we can substitute the “input” diagram for each vertical line leaving the topmost horizontal bar, and then erase this bar. Let’s look at some examples:

You can also do function application within larger lambda diagrams. Take a look at the following example, which is a calculation that shows that the successor of zero is one:

The first beta reduction here is just like the previous ones we’ve seen. But the second one does a substitution within the main lambda expression, as does the third. This works in much the same way as the earlier reductions we saw, the primary difference being that references to variables outside the scope of the function being applied must be maintained. You can see this in the final step above, where we remove the line representing the variable y, and attach the line connecting to it to the line representing the variable x.
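The successor-of-zero calculation can be mirrored with Church numerals in Python; this is just a sketch, and the names ZERO, ONE, SUCC, and to_int are mine rather than part of the diagram notation:

```python
# Church numerals: n is the function that applies f to x exactly n times.
ZERO = lambda f: lambda x: x
ONE = lambda f: lambda x: f(x)

# Successor: apply f one more time than n does.
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))

def to_int(n):
    # Count the applications by taking f = (+1) starting from 0.
    return n(lambda k: k + 1)(0)

# The successor of zero is one.
assert to_int(SUCC(ZERO)) == 1 == to_int(ONE)
```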

Now for the fun part! I’ve found some great animations of calculations using lambda diagrams on YouTube. Each of them uses just the rules I’ve described above, nothing more! And I must say that the music is delightful. Take a look!

Beautiful!

# The Pictorial Mathematics of Klak-Adbmal

In the distant land of Klak-Adbmal, cut off from the rest of civilization, mathematicians have developed their own distinctive form of math. It is entirely pictorial; every mathematical concept is translated as a diagram consisting of criss-crossing horizontal and vertical lines, and calculations are manipulations of these diagrams.

Translators have so far been able to figure out the symbols for a few basic mathematical concepts, but there are many remaining gaps in our ability to translate Klak-Adbmalist mathematics. This image shows the symbols we’ve figured out so far, as well as a step-by-step calculation of the successor of 1 equaling 2.

The Klak-Adbmalians are very secretive and slow to give up their secrets. Can you figure out the answers to the two questions above on the right?

It’s worth noting that just as there are multiple ways that ordinary mathematicians can write a given number (2 ⋅ 5 = 8 + 2 = 10), there are many different pictures for any given concept. If you need more assistance, you can take a look at this recently discovered drawing of a Klak-Adbmalian calculation proving that True Or False is True.

And one final hint: recently we’ve received verification that the following two calculations are always valid, no matter what Klak-Adbmalist pictures replace the red and green boxes:

# Logic on Planet Zorko

A group of Zorkan mathematicians are sitting around having a conversation in a language that you are unfamiliar with. You are listening in with a translator. This translator is an expert in formal logic, and has decided to play the following game with you. He says:

“After listening to the full conversation, I will translate all the sentences that were said for you. But I won’t translate them into English; I want something more universal. Instead, I will choose a formal language that captures the mathematical content of all the sentences said, while leaving out the vagaries and subtleties of the Zorkan language. I will describe to you the semantics of the formal language I choose, if you don’t already know it.”

“Furthermore,” (says the translator) “I happen to be intimately familiar with Zorkan society and culture. The Zorkans are having a discussion about one particular mathematical structure, and I know which one that is. The mathematicians are all fantastically precise reasoners, such that none of them ever says a sentence that is false of the structure that they are discussing.”

(So for instance if they are talking about the natural numbers, then no mathematician will say “0 = 1”, and if they are talking about abelian groups, then no mathematician will say “∃x∃y (xy ≠ yx)”. But they could say “∃x∃y (xy ≠ yx)” if they are talking about non-abelian groups.)

You know nothing about Zorkan psychology, besides that the Zorkan way of life is so utterly foreign to you that you cannot reliably assume that the mathematical structures that come most naturally to you will also come naturally to them. It might be, for instance, that nonstandard models of arithmetic are much more intuitive to them than the natural numbers. You cannot assume that the structure they are discussing is the one that you think is “most natural”; you can only conclude this if one of them says a sentence that is true of that model and no others.

The conversation finishes, and you are tasked with answering the following two questions:

(1) What structure are they talking about?
(2) Can you come up with a verification procedure for the mathematicians’ sentences (including possible future sentences they might say on the topic)?

So, that’s the setup. Now, the question I want you to consider is the following: Suppose that the structure that the mathematicians have in mind is actually the natural numbers. Is there some conversation, any conversation at all (even allowing infinitely long conversations, and uncomputable conversations – conversations which cannot be produced as the output of any Turing machine), that the mathematicians could have, and some translation of this conversation, such that you can successfully answer both (1) and (2)? If so, what is that conversation? And if not, then why not?

✯✯✯

Let’s work out some simple examples.

## Example 1

Suppose the conversation is translated into a propositional language with three atomic propositions {P, Q, R}.

Mathematician A: “P ∨ Q”
Mathematician B: “(Q ∨ R) → (¬P)”
Mathematician C: “R”

From this conversation, you can deduce that the model they are talking about is the one that assigns “False” to P, “True” to Q, and “True” to R.

M: {P is false, Q is true, R is true}

This is the answer to question 1!

As for the second question, we want to know if there’s some general procedure that produces all the future statements the mathematicians could make. For instance, the set generated by our procedure should include (Q ∧ R) but not (Q ∧ P).

It turns out that such a procedure does exist, and is not too difficult to write out and implement.
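For this finite propositional case the procedure can be brute force: enumerate all eight truth assignments, keep those satisfying every statement, and (since exactly one survives) check any future sentence against that unique model. A minimal sketch:

```python
from itertools import product

# The mathematicians' three statements, as predicates over (P, Q, R).
statements = [
    lambda P, Q, R: P or Q,                    # P ∨ Q
    lambda P, Q, R: (not (Q or R)) or not P,   # (Q ∨ R) → ¬P
    lambda P, Q, R: R,                         # R
]

# Keep only the truth assignments that satisfy every statement.
models = [m for m in product([False, True], repeat=3)
          if all(s(*m) for s in statements)]

# Exactly one assignment survives: P false, Q true, R true.
assert models == [(False, True, True)]

# Verifying a future statement = evaluating it in that unique model.
P, Q, R = models[0]
assert (Q and R) and not (Q and P)
```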

## Example 2

Take the above conversation and modify it slightly:

Mathematician A: “P ∨ Q”
Mathematician B: “(Q ∨ R) → (¬P)”
Mathematician C: “¬R”

If you work it out, you’ll see that question 1 can no longer be answered unambiguously. The problem is that there are multiple models of the sentences that the mathematicians are saying:

M1: {P is false, Q is true, R is false}
M2: {P is true, Q is false, R is false}

So even though they have one particular structure in mind, you don’t have enough information from their conversation to figure out exactly what that structure is.
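The ambiguity can be exhibited by brute force: enumerate all eight truth assignments and keep the ones satisfying the three statements. A sketch (the variable names are mine):

```python
from itertools import product

# The modified statements, as predicates over (P, Q, R).
statements = [
    lambda P, Q, R: P or Q,                    # P ∨ Q
    lambda P, Q, R: (not (Q or R)) or not P,   # (Q ∨ R) → ¬P
    lambda P, Q, R: not R,                     # ¬R
]

models = [m for m in product([False, True], repeat=3)
          if all(s(*m) for s in statements)]

# Two assignments survive, matching M1 and M2 above.
assert models == [(False, True, False), (True, False, False)]
```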

Now let’s think about the answer to question 2. We don’t know whether the mathematicians are thinking about M1 or M2, and M1 and M2 differ in what truth value they assign the proposition P. So we can’t construct an algorithm that will generate the set of all their possible future statements, as this would require us to know, in particular, whether P is true or false in the model that they have in mind.

We might suspect that this holds true generally: if you can’t answer question 1, then you won’t be able to answer question 2 either. But we might also wonder: if we can answer question 1, then can we also always answer question 2?

The answer is no, as the next example will show.

## Example 3

For this conversation, the translation is in second-order logic. This will allow us to talk about more interesting mathematical structures than before; namely, structures that have a domain of objects on which functions and predicates can act. In particular, we’re in a second-order language with one constant symbol “c” and one function symbol “f”. Here’s the conversation:

Mathematician A: ¬∃x (f(x) = c)
Mathematician B: ¬∃x∃y ((f(x) = f(y)) ∧ ¬(x = y))
Mathematician C: ∀R (R(c) ∧ ∀x(R(x) → R(f(x))) → ∀x R(x))

Notice that the only truly second-order sentence is the third one, in which we quantify over a predicate variable R rather than an individual variable x, y, z, …. It is this sentence’s second-order status that makes it impossible for the translator to have translated the conversation into a first-order language, much less a propositional one.

This time, questions 1 and 2 are much harder to answer than before. But if you work it out, you’ll see that there is exactly one mathematical structure that satisfies all three of the mathematicians’ statements. And that structure is the natural numbers!

So, we know exactly what structure the mathematicians have in mind. But can we also answer question 2 in the positive? Can we produce some verification procedure that will allow us to generate all the future possible sentences the mathematicians could say? Unfortunately, the answer is no. There is no sound and complete proof system for second-order logic, so in particular, we have no general algorithm for producing all the truths in this second order language. So sad.

## Example 4

Now let’s move to first-order logic for our final example. The language of translation will be a first-order language with a constant symbol for every natural number {0, 1, 2, 3, …}, function symbols for ordinary arithmetic {+, ×}, and a relation symbol for order {≥}.

Imagine that the conversation consists of literally all the first-order sentences in the language that are true of the natural numbers. Anything which you can say in the language, and which is true as a statement about ℕ, will be said at some point. This will obviously be a very long conversation, and in fact infinitely long, but that’s fine. It will include sentences like “0 ≠ 1”, “0 ≠ 2”, “0 ≠ 3”, and so on.  (These Zorkans are extremely thorough.)

Given this conversation, can we answer (1) and (2)? Take a guess; the answer may surprise you!

It turns out that even though we can answer (2) positively (we can actually produce an algorithm that will generate, one by one, all the possible future statements of the mathematicians, which really means all the sentences in the language that are true of the natural numbers), we cannot answer (1) positively! There are multiple distinct mathematical structures that are compatible with the entirety of true statements about the natural numbers in the language. Earlier we hypothesized that any time we have a negative answer to (1), we will also have a negative answer to (2). But this is not true! We can verify all the true statements about the natural numbers in the language… without even knowing that we’re actually talking about the natural numbers! This is an important and unintuitive consequence of the expressive limitations (and in particular, of the compactness) of first-order logic.

## The Takeaway

We had an example where we could answer both (1) and (2) for a simple mathematical structure (a model of propositional logic). And we saw examples for natural numbers where we could answer (1) but not (2), as well as examples where we could answer (2) but not (1). But we haven’t yet seen an example for natural numbers where we had both (1) and (2). This is no coincidence!

It is actually a consequence of the theorem I proved and discussed in my last post that no such conversation can exist. When structures at least as complicated as the natural numbers are being discussed in some language (call it L), you cannot simultaneously (1) know for sure what structure is being talked about and (2) have an algorithmic verification system for L-sentences about the structure.

# A Gödelian Logic Puzzle

There’s an island on which there live exactly two types of people: truthers and liars. Truthers always say true statements, and liars always say false statements. One day a brilliant logician comes to visit the island. The logician knows all of the above-stated facts about the island. It also happens that the logician is a perfectly sound reasoner – he never proves anything that is false.

The logician encounters an individual named ‘Jal’ that lives on the island. The logician knows that Jal lives on the island, and so is either a truther or a liar. Now, Jal makes a statement from which it logically follows that Jal is a truther. But the logician could never possibly prove that Jal is a truther! (Remember, we never asserted that the logician proves all true things, just that the logician proves only true things). What type of statement could accomplish this?

This puzzle is from a paper by Raymond Smullyan on mathematical logic. Try to answer it for yourself before reading on!

(…)

Alright, so here’s one possible answer. Jal could say to the logician: “You will never prove that I am a truther.” I claim that this sentence logically entails that Jal is a truther, and yet the logician cannot possibly prove it.

First of all, why does it entail that Jal is a truther? Let’s prove it by contradiction. Suppose that Jal is not a truther. Then, since Jal is either a truther or a liar, Jal must be a liar. That means that every statement Jal makes must be false. So in particular, Jal’s statement that “you will never prove that I am a truther” must be false. This entails that the logician must eventually prove that Jal is a truther. But we assumed that Jal isn’t a truther! So the logician must eventually prove a falsehood. But remember, we assumed that our logician’s proofs were always sound, so that he will never prove a falsehood. So we have a contradiction.

Therefore, Jal is a truther.

Now, why can the logician not prove that Jal is a truther? This can be seen very straightforwardly: we just proved that Jal is a truther, which means that all of Jal’s statements must be true. So in particular, Jal’s statement that “you will never prove that I am a truther” must be true. So in other words, it’s true that the logician will never prove that Jal is a truther!

So there you have it, a statement that appears to satisfy both of the criteria!

But now the next question I have for you is a bit trickier. It appears from the line of reasoning above that we have just proven that Jal is a truther. So why couldn’t the logician just run through that exact same line of reasoning? It appears to be perfectly valid, and to use nothing more advanced than basic predicate logic.

But if the logician does go through that line of reasoning, then he will conclude that Jal is a truther, which will make Jal’s statement false, which is a contradiction! So we’ve gone from something which was maybe just unintuitive to an actual paradox. Can you see how to resolve this paradox? (Again, see if you can figure it out yourself before reading on!)

(…)

Okay, so here’s the resolution. If we say that the logician can go through the same line of reasoning as us, then we reach a contradiction (that a truther tells a false statement). So we must deny that the logician can go through the same line of reasoning as us. But why not? As I said above, the reasoning is nothing more complicated than basic predicate logic. So it’s not that we’re using some magical supernatural rules of inference that no mortal logician could get his hands on. It must be that one of the assumptions we used in the argument is an assumption that the logician cannot use.

So look back through the argument, and carefully consider each of the assumptions we used:

First of all, why does it entail that Jal is a truther? Let’s prove it by contradiction. Suppose that Jal is not a truther. Then, since Jal is either a truther or a liar, Jal must be a liar. That means that every statement Jal makes must be false. So in particular, Jal’s statement that “you will never prove that I am a truther” must be false. This entails that the logician must eventually prove that Jal is a truther. But we assumed that Jal isn’t a truther! So the logician must eventually prove a falsehood. But remember, we assumed that our logician’s proofs were always sound, so that he will never prove a falsehood. So we have a contradiction.

In order, we made use of the assumptions that (1) Jal is either a truther or a liar, (2) every statement made by a liar is false, and (3) the logician is a sound reasoner.

I told you at the beginning that facts (1) and (2) are known to the logician, but I did not say the same of (3)! The logician can only run through this argument if he knows that he is a sound reasoner (that he only proves true things). And this is the problem assumption, which must be rejected.

It’s not that no logician can actually ever be sound (a logician who only ever reasons in first order logic and nothing more fancy would be sound). It’s that the logician, though he really is sound, cannot know himself to be sound. In other words, no sound system can prove its own soundness!

This is very similar to Gödel’s second incompleteness theorem. The only proof system which can assert its own consistency is an inconsistent proof system, and the only type of logician that can prove his own soundness will end up being unsound. Here’s the argument that the logician might make if they believe in their own soundness:

Supposing Jal is a liar, then his statement is false, so I could eventually prove that he is a truther. But then I’d have proven something false, which I know I can never do, so Jal must not be a liar. So he must be a truther.

Since the logician has now produced a proof that Jal is a truther, Jal’s statement is false. This means that Jal cannot be a truther, so the logician has proven a false statement!

# Crazy conditionals

It’s well known that the material implication → of propositional logic does not do a perfect job of capturing what we mean when we make “if… then…” statements in English. The usual examples of failure rest on the fact that any material conditional with a false antecedent is vacuously true (so “if 2 is odd then 2 is even” turns out to be true). But over time, philosophers have come up with a whole lot of different ways in which → can catch us by surprise.

Here’s a list of some such cases. In each case, I will present an argument using if…then… statements that is clearly invalid, but which is actually valid in propositional logic if the if…then… statements are translated as the material conditional!

## 1. Harper

If I put sugar into my coffee, it will taste fine.
Therefore, if I put sugar and motor oil into my coffee, it will taste fine.

S → F
(S ∧ M) → F

## 2. Distributivity

If I pull both switch A and switch B, the engine will start.
Therefore, either the engine will start if I pull switch A or the engine will start if I pull switch B.

(A ∧ B) → S
(A → S) ∨ (B → S)

## 3. Transitivity

If Biden dies before the election, Trump will win.
If Trump wins the election, Biden will retire to his home.
Therefore, if Biden dies before the election, Biden will retire to his home.

B → T
T → R
B → R

## 4. Principle of Explosion

Either zombies will rise from the ground if I bury a chicken head in my backyard, or zombies will rise from the ground if I don’t bury a chicken head in my backyard.

(B → D) ∨ (¬B → D) is a tautology

## 5. Contraposition

C → ¬P
P → ¬C

## 6. Simplification

If John is in London then he’s in England, and if he’s in Paris then he’s in France.
Therefore, either (1) if John’s in London he’s in France or (2) if John’s in Paris then he’s in England.

(L → E) ∧ (P → F)
(L → F) ∨ (P → E)

## 7. False Antecedent

It’s not the case that if God exists then human life is a product of random chance.
Therefore, God exists.

¬(G → C)
G

## 8. True Consequent

If I will have eternal life if I believe in God, then God must exist.
I do not believe in God.
Therefore, God exists.

(B → E) → G
¬B
G

You can check for yourself that each of these is logically valid! Can you figure out what’s going wrong in each case?
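One way to check them is a small truth-table validity tester: an argument is valid iff the conclusion holds in every row where all premises hold. Here is a sketch verifying two of the arguments above (Harper and False Antecedent); the helper names `implies` and `valid` are mine:

```python
from itertools import product

def implies(a, b):
    # Material conditional: a → b is (¬a ∨ b).
    return (not a) or b

def valid(premises, conclusion, n_vars):
    # Valid iff the conclusion is true in every truth-table row
    # where all the premises are true.
    return all(conclusion(*vs)
               for vs in product([False, True], repeat=n_vars)
               if all(p(*vs) for p in premises))

# 1. Harper: S → F, therefore (S ∧ M) → F
assert valid([lambda S, M, F: implies(S, F)],
             lambda S, M, F: implies(S and M, F), 3)

# 7. False Antecedent: ¬(G → C), therefore G
assert valid([lambda G, C: not implies(G, C)],
             lambda G, C: G, 2)
```

The same `valid` helper can be pointed at the rest of the list by translating each argument into predicates the same way.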

# A Dice Puzzle

Today I have a wonderfully counterintuitive puzzle to share!

You and a friend each throw a die. Each of you can see how your own die landed, but not how your friend’s die landed. Each of you is to guess how the other’s die landed. If you both guess correctly, then you each get a reward. But if only one of you guesses correctly, neither of you gets anything.

The two die rolls are independent and you are not allowed to communicate with your friend after the dice have been thrown, though you can coordinate beforehand. Given this, you would expect that you each have a 1 in 6 chance of guessing the other’s roll correctly, coming out to a total chance of 1 in 36 of getting the reward.

The question is: Is it possible to do any better?

(…)

(…)

(Spoiler space)

(…)

(…)

The answer is that remarkably, yes, you can do better! In fact, you can get your chance of getting the reward as high as 1 in 6. This should seem totally crazy. You and your friend each have zero information about how the other die roll turned out. So certainly each of you has a 1 in 6 chance of guessing correctly. The only way for the chance of both of you guessing correctly to be anything other than 1 in 36 would be if the two guesses being correct were somehow dependent on each other. But the two die rolls are independent of one another, and no communication of any kind is allowed once the dice have been rolled! So where does the dependence come from? Sure, you can coordinate beforehand, but it’s hard to imagine how this could help.

It turns out that the coordination beforehand does in fact make a huge difference. Here’s the strategy both can adopt in order to get a 1 in 6 chance of getting the reward: each guesses that the other’s die landed the same way that their own die landed. So if my die lands 3, I guess that my friend’s die landed 3 as well. This strategy succeeds whenever the two dice actually do land the same way. And what’s the chance of that? 6 out of 36, or 1 out of 6!

1 1       2 1       3 1       4 1       5 1       6 1
1 2       2 2       3 2       4 2       5 2       6 2
1 3       2 3       3 3       4 3       5 3       6 3
1 4       2 4       3 4       4 4       5 4       6 4
1 5       2 5       3 5       4 5       5 5       6 5
1 6       2 6       3 6       4 6       5 6       6 6
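The grid above (the strategy wins exactly on the diagonal) can be checked by brute force; a quick sketch:

```python
from fractions import Fraction
from itertools import product

# Strategy: each player guesses that the other's die matches their own roll.
# Both guesses are correct exactly when the two rolls are equal.
outcomes = list(product(range(1, 7), repeat=2))
wins = sum(1 for a, b in outcomes if a == b)

# 6 matching outcomes out of 36, i.e. a 1 in 6 chance of the reward.
assert Fraction(wins, len(outcomes)) == Fraction(1, 6)
```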

# Sum and Product Puzzle

X and Y are two integers.

X < Y
X > 1
Y > 1
X + Y < 100

S and P are two perfect logicians. S knows X + Y, and P knows X × Y.

Everything I’ve just said is common knowledge. S and P have the following conversation:

S: “P, you don’t know X and Y”
P: “Now I do know X and Y!”
S: “And now so do I!”

What are X and Y?

Once you figure out that, here’s a question: If instead of saying that X + Y < 100, we say X + Y < N, then what’s the range of values of N for which this puzzle has a unique solution?
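A brute-force solver makes both questions tractable: encode each announcement as a filter on the candidate pairs. Here is a sketch for N = 100 (the helper names `by_sum`, `by_prod`, `step1`, etc. are mine):

```python
from collections import defaultdict

N = 100
pairs = [(x, y) for x in range(2, N) for y in range(x + 1, N) if x + y < N]

by_sum, by_prod = defaultdict(list), defaultdict(list)
for x, y in pairs:
    by_sum[x + y].append((x, y))
    by_prod[x * y].append((x, y))

# S: "P, you don't know X and Y" -- every decomposition of the sum
# has a product shared by more than one candidate pair.
step1 = {p for p in pairs
         if all(len(by_prod[a * b]) > 1 for a, b in by_sum[sum(p)])}

# P: "Now I do know X and Y!" -- the product now picks out exactly one pair.
step2 = {p for p in step1
         if sum(q in step1 for q in by_prod[p[0] * p[1]]) == 1}

# S: "And now so do I!" -- the sum now picks out exactly one pair.
step3 = [p for p in step2
         if sum(q in step2 for q in by_sum[sum(p)]) == 1]

assert step3 == [(4, 13)]
```

Re-running the same filters with other values of N is one way to attack the follow-up question about which N give a unique solution.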

# The full solution to the dog puzzle

A couple of days ago I posted A logic puzzle about dogs. Read that post first and try solving it before reading on!

Below is the full explanation of the puzzle. Heavy spoilers, obviously.

The dog society is structured identically to the Robinson axioms for natural number arithmetic. Dogs are numbers, Spot is 0, the alpha of n is n + 1, the referee for n and m is n + m, the counselor for n and m is n × m, and the strength relation is the < relation. This means that “the marriage counselor for Spot’s alpha and the referee of a fight between Spot’s alpha and Spot’s alpha” is translated to 1 × (1 + 1) = 2, which is Spot’s alpha’s alpha. In Robinson arithmetic, you can also prove that ∀n (n < n + 1).
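The translation can be spelled out directly; this is only a sketch, with function names mirroring the puzzle’s vocabulary, and with one possible reading of the strength relation:

```python
spot = 0
alpha = lambda n: n + 1            # the alpha of n is n + 1
referee = lambda n, m: n + m       # the referee for a fight between n and m
counselor = lambda n, m: n * m     # the marriage counselor for n and m
stronger = lambda n, m: m < n      # reading: n is stronger than m iff m < n

# "The marriage counselor for Spot's alpha and the referee of a fight
#  between Spot's alpha and Spot's alpha" = 1 × (1 + 1) = 2,
# which is Spot's alpha's alpha.
assert counselor(alpha(spot), referee(alpha(spot), alpha(spot))) == alpha(alpha(spot)) == 2

# ∀n (n < n + 1): every dog's alpha is stronger than that dog.
assert all(stronger(alpha(n), n) for n in range(100))
```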

As for the question of if it’s possible for a dog to be stronger than Spot, Spot’s alpha, Spot’s alpha’s alpha, and so on: The primary difference between Robinson arithmetic and Peano arithmetic (the usual axiomatization of natural numbers) is that the Robinson axioms have no induction axiom (which would be something like “If Spot has a property, and if the alpha of any dog with the property also has the property, then all dogs have the property”). The induction axiom serves to rule out many models of the axioms that are not actually the natural numbers.

If the induction axiom is stated using second-order logic, then the axiom system uniquely pins down (ℕ, +, ×, >) and there are no dogs besides those in Spot’s hierarchy. But the induction axiom cannot be stated as a single axiom in first-order logic, since it involves quantifying over all properties. For first-order Peano arithmetic, we instead have an infinite axiom schema, one axiom for each property definable within the first-order language. This turns out to be strictly weaker than the single second-order axiom, as there are properties of numbers that cannot be described in a first-order language (like the property of being larger than only finitely many numbers).

What this amounts to is that first-order Peano arithmetic, with its infinite axiom schema, is too weak to pin down the natural numbers as a unique model. There are what are called nonstandard models of first-order PA, which contain the ordinary numbers but also an infinity of weird extra numbers. (In fact, there exist models of first-order PA of every infinite cardinality!) Some of these numbers have the property that they are bigger than all the regular natural numbers. And since Robinson arithmetic is strictly weaker than first-order PA (lacking an induction axiom as it does), Robinson arithmetic is also not strong enough to rule out numbers greater than all elements of ℕ. Which means that we cannot prove that there are no dogs stronger than every dog in Spot’s hierarchy!

I made this puzzle to illustrate three things: First, how the same axioms, and even the same models of the same axioms, can have wildly different interpretations, and that a shift in how you think about these axioms can make seemingly impossible tasks trivial. Second, how axiom systems for structures like ℕ can (and inevitably do) fail to capture important and intuitively obvious features of the structure. And third, how logic is so interesting! Just trying to create a simple rule system to describe one of the most natural and ordinary mathematical structures that we ALL use ALL THE TIME for reasoning about the world, turns out to be so nontrivial, and in fact impossible to do perfectly!

# Three paradoxes of self-reference

I’ve spent a lot of time in the past describing the paradoxes that arise when we try to involve infinities in our reasoning. Another way to get paradoxes aplenty is by invoking self-reference. Below are three of the best paradoxes of self-reference for you to sort out.

In each case, I want you to avoid the temptation to just say “Ah, it’s just self-reference that’s causing the problem” and feel that the paradox is thus resolved. After all, there are plenty of benign cases of self-reference. Self-modifying code, flip-flops in computing hardware, breaking the fourth wall in movies, and feeding a function as input to itself are all examples. Self-referential definitions in mathematics are also often unobjectionable: as an example, we can define the min function by saying y = min(X) iff y is in X and for all elements x of X, y ≤ x (the definition quantifies over a group of objects that includes the thing being defined). So if we accept that self-reference is not necessarily paradoxical (just as infinity is sometimes entirely benign), then we must do more to resolve the below paradoxes than just say “self-reference.”

Consider the set of integers definable in an English sentence of under eighty letters. This set is finite, because there are only a finite number of possible strings of English characters of under eighty letters. So since this set is finite and there are an infinity of integers, there must be a smallest integer that’s not in the set.

But hold on: “The smallest positive integer not definable in under eighty letters” appears to now define this integer, and it does so with only 67 letters! So now it appears that there is no smallest positive integer not definable in under eighty letters. And that means that our set cannot be finite! But of course, the cardinality of the set of strings cannot be less than the cardinality of the set of numbers those strings describe. So what’s going on here?

“If this sentence is true, then time is infinite.”

Curry’s paradox tells us that just from the existence of this sentence (assuming nothing about its truth value), we can prove that time is infinite.

## Proof 1

Let’s call this sentence P. We can then rewrite P as “If P is true, then time is infinite.” Now, let’s suppose that the sentence P is true. That means the following:

Under the supposition that P is true, it’s true that “If P is true, then time is infinite.”

And under the supposition that P is true, P is true.

So under the supposition that P is true, time is infinite (by modus ponens within the supposition).

But the conditional we’ve just established, “if P is true, then time is infinite,” is P itself! So we’ve proven that P is true.

And therefore, since P is true and “If P is true, then time is infinite” is true, time must be infinite!
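Proof 1 is not sleight of hand at the propositional level: if we grant that P really is equivalent to “if P is true, then time is infinite,” the conclusion follows by pure logic. Here is the core derivation sketched in Lean 4 (the theorem name `curry` is mine), with the paradoxical self-reference packaged as the hypothesis `h`:

```lean
-- From the assumed equivalence P ↔ (P → Q), Q follows for ANY Q.
-- The culprit must therefore be the assumption that a sentence P
-- satisfying this equivalence exists, not the reasoning after it.
theorem curry {P Q : Prop} (h : P ↔ (P → Q)) : Q :=
  -- First prove P: by h, proving P amounts to proving P → Q,
  -- and given P we can unfold it into P → Q and apply it to itself.
  have hp : P := h.mpr (fun hp' => h.mp hp' hp')
  -- Now unfold P back into P → Q and apply it to P one more time.
  h.mp hp hp
```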

If you’re suspicious of this proof, here’s another:

## Proof 2

If P is false, then it’s false that “If P is true then time is infinite.” But the only way that sentence can be false is if the antecedent is true and the consequent false, i.e. P is true and time is finite. So from P’s falsity, we’ve concluded P’s truth. Contradiction, so P must be true.

Now, if P is true, then it’s true that “If P is true, then time is infinite”. But then by modus ponens, time must be infinite.

Nothing in our argument relied on time being infinite or finite, so we could just as easily substitute “every number is prime” for “time is infinite”, or anything else we wanted. And so it appears that we’ve found a way to prove the truth of any sentence! Importantly, our conclusion doesn’t rest on the assumption of the truth of the sentence we started with. All it requires is the *existence of the sentence*. Is this a proof of the inconsistency of propositional logic? And if not, then where exactly have we gone wrong?
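One way to localize the problem: treat P and the target sentence Q as ordinary classical propositions and ask which truth-value assignments are even consistent with P meaning “if P is true, then Q.” A brute-force check (a sketch, modeling the conditional as material implication):

```python
from itertools import product

def implies(p, q):
    # Material implication: "if p then q" is false only when p is true and q is false.
    return (not p) or q

# Which classical truth-value assignments satisfy P <-> (P -> Q)?
models = [(p, q) for p, q in product([True, False], repeat=2)
          if p == implies(p, q)]
print(models)  # [(True, True)]: any P equivalent to "P -> Q" forces Q
```

So the paradox doesn’t live in the propositional reasoning: the only classical model of the equivalence does make Q true. What classical logic cannot supply is a sentence P that is genuinely equivalent to “P → Q” for an arbitrary Q; the self-reference is what smuggles in that equivalence.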

Consider the following two sentences:

1) At least one of these two sentences is false.
2) Not all numbers are prime.

Suppose that (1) is false. Well then at least one of the two sentences is false, which makes (1) true! This is a contradiction, so (1) must be true.

Since (1) is true, at least one of the two sentences must be false. But since we already know that (1) is true, (2) must be false. Which means that all numbers are prime!

Just like last time, the structure of the argument is identical no matter what we put in place of premise 2, so we’ve found a way to disprove any statement! And again, we didn’t need to start out by assuming anything about the truth values of sentences (1) and (2), besides that they have truth values.
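We can run the same kind of brute-force consistency check here (a sketch; sentence (2)’s content is simply the true arithmetic fact that not all numbers are prime):

```python
s2_content = True  # "Not all numbers are prime" is just true (4 isn't prime).

# Sentence (1) asserts: at least one of (1), (2) is false.
# A consistent assignment must make each sentence's truth value
# match what that sentence asserts.
consistent = [
    (s1, s2)
    for s1 in (True, False)
    for s2 in (True, False)
    if s1 == ((not s1) or (not s2))  # (1)'s value matches what (1) says
    and s2 == s2_content             # (2) reports an arithmetic fact
]
print(consistent)  # []: no classical assignment works at all
```

There is no consistent way to hand out classical truth values to the pair, which is exactly what the informal argument above shows.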

Perhaps the right thing to say, then, is that we cannot always be sure that self-referential statements actually have truth values. But then we have to answer the question of how we are to distinguish between self-referential statements that are truth-apt and those that are not! And that seems very non-trivial. Consider the following slight modification:

1) Both of these two sentences are true.
2) Not all numbers are prime.

Now we can just say that both (1) and (2) are true, and there’s no problem! And this seems quite reasonable; (1) is certainly a meaningful sentence, and it seems clear what the conditions for its truth would be. So what’s the difference in the case of our original example?
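Running the same consistency check on the modified pair confirms there’s no contradiction, though with a twist worth noticing (again just a sketch over classical truth values):

```python
s2 = True  # "Not all numbers are prime" is still just true.

# Sentence (1) now asserts: both sentences are true.
consistent = [s1 for s1 in (True, False) if s1 == (s1 and s2)]
print(consistent)  # [True, False]: BOTH assignments are self-consistent
```

Interestingly, taking (1) to be false is also self-consistent: if (1) is false, then it’s false that both sentences are true, which is exactly what (1)’s falsity requires. So truth-functional consistency alone doesn’t even pin down a unique truth value for benign self-reference; it only guarantees that at least one consistent assignment exists.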

# A logic puzzle about dogs

Spot is a dog. Every dog has one alpha (also a dog), and no two dogs have the same alpha. But Spot, alone amongst dogs, isn’t anybody’s alpha. 🙁

For any two dogs, there is an assigned referee in case they get into a fight and an assigned marriage counselor in case they get married. The dogs have set up the following rules for deciding who will referee and counsel for whom:

If Spot fights with any dog, the other dog gets to be the referee. The referee of a fight between dog 1’s alpha and dog 2 has to be the alpha of the referee of a fight between dogs 1 and 2.

Spot has to be his own marriage counselor, no matter who he marries. The marriage counselor for dog 1’s alpha and dog 2 has to referee any fight between dog 2 and the marriage counselor for dog 1 and dog 2.

Finally, dog 1 is stronger than dog 2 if and only if dog 1 is the referee for dog 2 and some other dog. Strength is transitive, and no dog is stronger than itself.

Question 1: Who’s the marriage counselor for Spot’s alpha and the referee of a fight between Spot’s alpha and Spot’s alpha?

Question 2: How many dogs are there?

Question 3: Is the referee for dog 1’s alpha and dog 2 always the same as the referee for dog 2’s alpha and dog 1? What if we asked the same question about marriage counselors?

Question 4: Is any dog stronger than their own alpha?

Bonus Question: Is it possible for there to be a dog that’s stronger than Spot, Spot’s alpha, Spot’s alpha’s alpha, and so on?

A challenge: Try to figure out a trick that allows you to figure out the above questions in your head. I promise, it’s possible!
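Spoiler warning: the sketch below gives the trick away, since it encodes dogs as numbers (Spot as 0, taking the alpha as adding one). If you’d like to check your answers mechanically after solving the puzzle, the rules translate directly into recursion:

```python
SPOT = 0

def alpha(d):
    # Every dog has exactly one alpha; Spot (0) is nobody's alpha.
    return d + 1

def referee(d1, d2):
    if d1 == SPOT:
        return d2                      # Spot fights d2: d2 referees
    return alpha(referee(d1 - 1, d2))  # ref(alpha(a), b) = alpha(ref(a, b))

def counselor(d1, d2):
    if d1 == SPOT:
        return SPOT                    # Spot is his own counselor
    return referee(d2, counselor(d1 - 1, d2))  # mc(alpha(a), b) = ref(b, mc(a, b))

# Question 1: the counselor for Spot's alpha and the referee of a fight
# between Spot's alpha and Spot's alpha.
a = alpha(SPOT)
print(counselor(a, referee(a, a)))  # 2, i.e. Spot's alpha's alpha
```

With this encoding, refereeing behaves like addition, counseling like multiplication, and “stronger than” like the ordinary greater-than on ℕ, which turns the questions above into familiar arithmetic facts.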