Lambda calculus is an extremely minimal language that is powerful enough to express every computation that a Turing machine can do. Here is a fantastic video explaining the basic “rules of the game”:
Figuring out how to translate programs into lambda calculus can be a challenging puzzle. I recently discovered a nice way to use lambda calculus to generate for loops and bounded quantifiers, allowing a form of bounded recursion without requiring the Y combinator. I’m sure similar things have been discovered by others, but things feel cooler than they are when you’ve found them yourself, so I’ll present this function here in this post. 🙂
First, let me lay out some of the most important basic functions in lambda calculus. (A lot of this will be rehashing bits of the video lecture.)
To give you a sense of how proving things works in lambda calculus, here’s a quick practice proof that ¬¬T is the same thing as T. We’ll prove this by reducing ¬¬T to T, which will show us that the two functions are extensionally equivalent (they return the same values on all inputs). Each step, we will either be substituting in the definition of some symbol, or evaluating a function by substituting its inputs in.
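The reductions themselves live in lambda calculus, but as a sanity check we can mirror the same definitions in Python lambdas (the `as_bool` decoder is just a convenience for inspecting results, not part of the calculus):

```python
# Church booleans: T selects its first argument, F its second.
T = lambda a: lambda b: a
F = lambda a: lambda b: b

# NOT p swaps the branches: feeding p the pair (F, T) flips its choice.
NOT = lambda p: p(F)(T)

# Decoder for inspection only (not a lambda calculus term).
def as_bool(p):
    return p(True)(False)

# ¬¬T behaves exactly like T: both select their first argument.
assert as_bool(NOT(NOT(T))) == as_bool(T) == True
assert as_bool(NOT(T)) == False
```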
Another, more convoluted approach is to show that the function (¬¬T ↔ T) reduces to the function T, which is to say that the two are equivalent. Notice that we’re not saying that the expression (¬¬T ↔ T) “has the truth value True”. There is no such thing as an expression with a truth value in lambda calculus; there are just functions, and some functions are equivalent to the True function.
To shorten this proof, I’ll start out by proving the equivalence of ¬T and F, as well as the equivalence of ¬F and T. This will allow me to translate between these functions in one step.
Got the hang of it? Ok, on to the natural numbers!
In lambda calculus, numbers are adverbs. 1 is not “one”, it’s “once”. 1 is a function that takes in a function f and tells you to apply it one time. 2 is “twice”; it’s a function that tells you to apply f twice. And so on.
Defining things this way allows us to have beautifully simple definitions of addition, multiplication, and exponentiation. For instance, the successor of n is just what you get by one more application of f on top of n applications of f. And n plus m is just n S m, because this means that S (the successor function) should be applied n times to m. See if you can work out for yourself why the definitions of multiplication and exponentiation make sense (and how they differ! It’s not just the order of the n and m!).
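Here are these numerals mirrored in Python lambdas. Successor and addition follow the post’s definitions; for multiplication and exponentiation I’ve used the standard Church encodings, which I believe are the ones being alluded to (`as_int` is a decoder for inspection only):

```python
# Church numerals: the numeral n takes f and applies it n times.
ZERO = lambda f: lambda x: x
S    = lambda n: lambda f: lambda x: f(n(f)(x))   # one more f on top of n of them
PLUS = lambda n: lambda m: n(S)(m)                # apply S to m, n times
MULT = lambda n: lambda m: lambda f: n(m(f))      # n rounds of "apply f m times"
EXP  = lambda n: lambda m: m(n)                   # m-fold application of "n-fold"

# Decoder for inspection only (not a lambda calculus term).
def as_int(n):
    return n(lambda k: k + 1)(0)

ONE, TWO, THREE = S(ZERO), S(S(ZERO)), S(S(S(ZERO)))
assert as_int(PLUS(ONE)(ONE)) == 2       # 1 + 1 = 2
assert as_int(MULT(TWO)(THREE)) == 6
assert as_int(EXP(TWO)(THREE)) == 8      # 2^3 — note it's n^m, not m^n
```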
Let’s prove that 1 + 1 = 2 using lambda calculus!
One more: Here’s a proof that 0 + k = k.
Interestingly, formally proving that k + 0 = k is significantly more convoluted, and there’s no obvious way to do it in general for all k (as opposed to producing a separate proof for each possible value of k). Worse, the proof length ends up growing in proportion to the size of k.
Okay, let’s dive deeper by looking at our first non-trivial lambda calculus data structure, the pair.
Pair Data Structure
The pair function can be fed two values (a and b) which can then be referenced at a later time by feeding it one more function. Feeding the function either of T or F will just select either the first or the second element in the pair. It might not be immediately obvious, but having this ability to, as it were, store functions “in memory” for later reference gives us enormous power. In fact, at this point we have everything we need to introduce the magical function which opens the door to quantification and recursion:
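In Python lambdas, the pair construction looks like this (note that the selectors are just the Church booleans from before):

```python
# Church booleans double as pair selectors.
T = lambda a: lambda b: a          # selects the first element
F = lambda a: lambda b: b          # selects the second element

# PAIR a b stores a and b; feeding the result a selector retrieves one of them.
PAIR = lambda a: lambda b: lambda s: s(a)(b)
FST  = lambda p: p(T)
SND  = lambda p: p(F)

p = PAIR("left")("right")
assert FST(p) == "left"
assert SND(p) == "right"
```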
All that Φ does is take a pair of functions (a, b) to a new pair (f(a, b), g(a, b)). Here it’s written in more familiar notation:
Φ: (a, b) ↦ (f(a, b), g(a, b))
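One way Φ might be sketched in Python lambdas (I’m not reproducing the exact combinator from the post here, just its behavior, with f and g as parameters):

```python
T = lambda a: lambda b: a
F = lambda a: lambda b: b
PAIR = lambda a: lambda b: lambda s: s(a)(b)
FST  = lambda p: p(T)
SND  = lambda p: p(F)

# Φ f g : (a, b) ↦ (f(a, b), g(a, b))
PHI = (lambda f: lambda g: lambda p:
       PAIR(f(FST(p))(SND(p)))(g(FST(p))(SND(p))))

# Example: f adds the components, g multiplies them.
add = lambda a: lambda b: a + b
mul = lambda a: lambda b: a * b
q = PHI(add)(mul)(PAIR(2)(3))
assert FST(q) == 5 and SND(q) == 6
```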
Different choices of f and g give us very different types of behavior when we repeatedly apply Φ. First of all, let’s use Φ to subtract and compare numbers.
Subtraction and Comparison
φ1 simply chooses f and g to get the following behavior:
φ1: (a, b) ↦ (b, b+1)
Here’s what happens with repeated application of φ1 to the pair (0, 0):
(0, 0) → (0, 1) → (1, 2) → (2, 3) → …
Looking at this, you can see how n applications of φ1 to (0, 0) gives the pair (n – 1, n) for n ≥ 1, which justifies our definition of the predecessor function.
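Here is that construction mirrored in Python lambdas. Note that PRED 0 comes out as 0, the usual convention, since zero applications of φ1 leave (0, 0) untouched:

```python
T = lambda a: lambda b: a
F = lambda a: lambda b: b
PAIR = lambda a: lambda b: lambda s: s(a)(b)
FST  = lambda p: p(T)
SND  = lambda p: p(F)

ZERO = lambda f: lambda x: x
S    = lambda n: lambda f: lambda x: f(n(f)(x))

# φ1 : (a, b) ↦ (b, b + 1)
PHI1 = lambda p: PAIR(SND(p))(S(SND(p)))

# n applications of φ1 to (0, 0) yield (n - 1, n); keep the first element.
PRED = lambda n: FST(n(PHI1)(PAIR(ZERO)(ZERO)))

# Decoder for inspection only.
def as_int(n):
    return n(lambda k: k + 1)(0)

THREE = S(S(S(ZERO)))
assert as_int(PRED(THREE)) == 2
assert as_int(PRED(ZERO)) == 0   # predecessor of 0 is 0 by convention
```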
Our power is growing by the minute; now let’s create for loops!
For Loops and Quantifiers
B S T is the composition of successor and True, so it takes two inputs, fetches the value of the first input, and then adds one to it. Since B S T is the first input to Φ, it will be the function that determines the value of the first member of the output pair. In other words, φ2 gives the following mapping:
φ2: (a, b) ↦ (a + 1, f(a, b))
To get a for loop, we now just iterate this function the appropriate number of times!
The for function I’ve written takes in four parameters (n, m, f, a), and is equivalent to the following program:
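The original listing isn’t shown above, but based on the φ2 construction, my reading of the equivalent program in ordinary Python is: x runs over [n, m−1] while an accumulator a is threaded through f at each step.

```python
def for_loop(n, m, f, a):
    # Iterate x from n up to m - 1, updating the accumulator at each step.
    for x in range(n, m):
        a = f(x, a)
    return a

# Example: sum the numbers 0..4 into an accumulator starting at 0.
assert for_loop(0, 5, lambda x, a: a + x, 0) == 10
```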
The ∃ function is equivalent to the following:
∃𝑥 ∈ [n, m-1] θ(𝑥)
In code, this is:
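The code block doesn’t survive above; a plausible Python rendering of the bounded existential is:

```python
def exists(n, m, theta):
    # ∃x ∈ [n, m-1] θ(x): does any x in the range satisfy θ?
    for x in range(n, m):
        if theta(x):
            return True
    return False

assert exists(0, 10, lambda x: x * x == 9)        # 3 lies in [0, 9]
assert not exists(0, 3, lambda x: x * x == 9)     # but not in [0, 2]
```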
And the ∀ function is equivalent to:
∀𝑥 ∈ [n, m-1] θ(𝑥)
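In the same style, the bounded universal quantifier might be rendered as:

```python
def forall(n, m, theta):
    # ∀x ∈ [n, m-1] θ(x): do all x in the range satisfy θ?
    for x in range(n, m):
        if not theta(x):
            return False
    return True

assert forall(1, 5, lambda x: x > 0)
assert not forall(0, 5, lambda x: x > 0)   # fails at x = 0
```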
With these powerful tools in hand, we can now define more interesting functions, like one that detects if an input number is prime!
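The post’s exact prime detector isn’t reproduced above, but here’s one way it can be built from the bounded quantifiers: divisibility itself becomes an ∃ over products (so no division is needed), and primality a ∀ over candidate divisors.

```python
def exists(n, m, theta):
    return any(theta(x) for x in range(n, m))

def forall(n, m, theta):
    return all(theta(x) for x in range(n, m))

def divides(d, k):
    # d divides k  ⟺  ∃q ∈ [0, k] with d·q = k
    return exists(0, k + 1, lambda q: d * q == k)

def is_prime(k):
    # k is prime  ⟺  k ≥ 2 and ∀d ∈ [2, k-1], d does not divide k
    return k >= 2 and forall(2, k, lambda d: not divides(d, k))

assert [p for p in range(10) if is_prime(p)] == [2, 3, 5, 7]
```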
Let’s use lambda calculus to see if 2 is prime!
That got a little messy, but it’s nothing compared to the computation of whether 3 is prime!
This might sound strange, but I actually find it amazing how short and simple this is. To be fair, whenever a step would merely have substituted in the definition of a function, I skipped it and immediately applied the definition instead, which cut the number of steps roughly in half. The full proof would actually be about twice as long!
But nonetheless, try writing a full proof of the primality of 3 using only ZFC set theory in anywhere near as few lines! It’s interesting to me that a language as minimal and bare-bones as that of lambda calculus somehow manages to produce such concise proofs of interesting and complicated statements.