Gödel’s Second Incompleteness Theorem: Explained in Words of Only One Syllable

Somebody recently referred me to a 1994 paper by George Boolos in which he writes out a description of Gödel’s Second Incompleteness Theorem, using only words of one syllable. I love it so much that I’m going to copy the whole thing here in this post. Enjoy!

First of all, when I say “proved”, what I will mean is “proved with the aid of the whole of math”. Now then: two plus two is four, as you well know. And, of course, it can be proved that two plus two is four (proved, that is, with the aid of the whole of math, as I said, though in the case of two plus two, of course we do not need the whole of math to prove that it is four). And, as may not be quite so clear, it can be proved that it can be proved that two plus two is four, as well. And it can be proved that it can be proved that it can be proved that two plus two is four. And so on. In fact, if a claim can be proved, then it can be proved that the claim can be proved. And that too can be proved.

Now, two plus two is not five. And it can be proved that two plus two is not five. And it can be proved that it can be proved that two plus two is not five, and so on.

Thus: it can be proved that two plus two is not five. Can it be proved as well that two plus two is five? It would be a real blow to math, to say the least, if it could. If it could be proved that two plus two is five, then it could be proved that five is not five, and then there would be no claim that could not be proved, and math would be a lot of bunk.

So, we now want to ask, can it be proved that it can’t be proved that two plus two is five? Here’s the shock: no, it can’t. Or, to hedge a bit: if it can be proved that it can’t be proved that two plus two is five, then it can be proved as well that two plus two is five, and math is a lot of bunk. In fact, if math is not a lot of bunk, then no claim of the form “claim X can’t be proved” can be proved.

So, if math is not a lot of bunk, then, though it can’t be proved that two plus two is five, it can’t be proved that it can’t be proved that two plus two is five.

By the way, in case you’d like to know: yes, it can be proved that if it can be proved that it can’t be proved that two plus two is five, then it can be proved that two plus two is five.

George Boolos, Mind, Vol. 103, January 1994, pp. 1–3

Anti-inductive priors

I used to think of Bayesianism as composed of two distinct parts: (1) setting priors and (2) updating by conditionalizing. In my mind, this second part was the crown jewel of Bayesian epistemology, while the first part was a little more philosophically problematic. Conditionalization tells you that for any prior distribution you might have, there is a unique rational set of new credences that you should adopt upon receiving evidence, and tells you how to get it. As to what the right priors are, well, that’s a different story. But we can at least set aside worries about priors with assurances about how even a bad prior will eventually be made up for in the long run after receiving enough evidence.

But now I’m realizing that this framing is pretty far off. It turns out that there aren’t really two independent processes going on, just one (and the philosophically problematic one at that): prior-setting. Your prior fully determines what happens when you update by conditionalization on any future evidence you receive. And the set of priors consistent with the probability axioms is large enough that it allows for this updating process to be extremely irrational.

I’ll illustrate what I’m talking about with an example.

Let’s imagine a really simple universe of discourse, consisting of just two objects and one predicate. We’ll make our predicate “is green” and denote the objects a_1 and a_2. Now, if we are being good Bayesians, then we should treat our credences as a probability distribution over the set of all state descriptions of the universe. These probabilities should all be derivable from some hypothetical prior probability distribution over the state descriptions, such that our credences at any later time are just the result of conditioning that prior on the total evidence we have by that time.

Let’s imagine that we start out knowing nothing (i.e. our starting credences are identical to the hypothetical prior) and then learn that one of the objects (a_1) is green. In the absence of any other information, induction suggests that we should become more confident that the other object is green as well. Is this guaranteed just by updating?

No! Some priors will allow induction to happen, but others will make you unresponsive to evidence. Still others will make you anti-inductive, becoming more and more confident that the next object is not green the more green things you observe. And all of this is perfectly consistent with the laws of probability theory!

Take a look at the following three possible prior distributions over our simple language:

[Figure: the three prior distributions P_1, P_2, and P_3 over the four state descriptions]

According to P_1, your new credence in Ga_2 after observing Ga_1 is P_1(Ga_2 | Ga_1) = 0.80, while your prior credence in Ga_2 was 0.50. Thus P_1 is an inductive prior; you get more confident in future objects being green when you observe past objects being green.

For P_2, we have that P_2(Ga_2 | Ga_1) = 0.50, and P_2(Ga_2) = 0.50 as well. Thus P_2 is a non-inductive prior: observing instances of green things doesn’t make future instances of green things more likely.

And finally, P_3(Ga_2 | Ga_1) = 0.20, while P_3(Ga_2) = 0.50. Thus P_3 is an anti-inductive prior: observing that one object is green drops your confidence that the next object is green from 0.50 to 0.20.

The anti-inductive prior can be made even more stark by just increasing the gap between the prior probability of Ga_1 & Ga_2 and that of Ga_1 & -Ga_2. It is perfectly consistent with the axioms of probability theory for observing a green object to make you almost entirely certain that the next object you observe will not be green.
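
If it helps to make this concrete, here’s a minimal sketch in Python. The joint probabilities over the four state descriptions are numbers I’ve picked to reproduce the conditional probabilities quoted above; any joint distribution with those conditionals would make the same point.

```python
# Priors over the four state descriptions of the two-object universe.
# Keys: (a1 is green, a2 is green). These joint values are one choice
# consistent with the conditional probabilities quoted in the text.
P1 = {(True, True): 0.40, (True, False): 0.10,   # inductive
      (False, True): 0.10, (False, False): 0.40}
P2 = {(True, True): 0.25, (True, False): 0.25,   # non-inductive
      (False, True): 0.25, (False, False): 0.25}
P3 = {(True, True): 0.10, (True, False): 0.40,   # anti-inductive
      (False, True): 0.40, (False, False): 0.10}

def prob(P, event):
    """Probability of the set of state descriptions satisfying `event`."""
    return sum(p for world, p in P.items() if event(world))

def cond(P, event, given):
    """Conditional probability P(event | given)."""
    return prob(P, lambda w: event(w) and given(w)) / prob(P, given)

ga1 = lambda w: w[0]   # "a_1 is green"
ga2 = lambda w: w[1]   # "a_2 is green"

for name, P in [("P1", P1), ("P2", P2), ("P3", P3)]:
    print(name, "P(Ga2) =", prob(P, ga2),
          " P(Ga2 | Ga1) =", round(cond(P, ga2, ga1), 2))
# P1: 0.5 -> 0.8 (induction), P2: 0.5 -> 0.5, P3: 0.5 -> 0.2 (anti-induction)
```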

Our universe of discourse here was very simple (one predicate and two objects). But the point generalizes. Regardless of how many objects and predicates there are in your language, you can have non-inductive or anti-inductive priors. And it isn’t even the case that there are fewer anti-inductive priors than inductive priors!

The deeper point here is that the prior is doing all the epistemic work. Your prior isn’t just an initial credence distribution over possible hypotheses, it also dictates how you will respond to any possible evidence you might receive. That’s why it’s a mistake to think of prior-setting and updating-by-conditionalization as two distinct processes. The results of updating by conditionalization are determined entirely by the form of your prior!

This really emphasizes the importance of having good criteria for setting priors. If we’re trying to formalize scientific inquiry, it’s really important to make sure our formalism rules out the possibility of anti-induction. But this just amounts to requiring rational agents to have constraints on their priors that go above and beyond the probability axioms!

What are these constraints? Do they select one unique best prior? The challenge is that actually finding a uniquely rationally justifiable prior is really hard. Carnap tried a bunch of different techniques for generating such a prior and was unsatisfied with all of them, and there isn’t any real consensus on what exactly this unique prior would be. Even worse, all such suggestions seem to end up being hostage to problems of language dependence – that is, that the “uniquely best prior” changes when you make an arbitrary translation from your language into a different language.

It looks to me like our best option is to abandon the idea of a single best prior (and with it, the notion that rational agents with the same total evidence can’t disagree). This doesn’t have to lead to total epistemic anarchy, where all beliefs are just as rational as all others. Instead, we can place constraints on the set of rationally permissible priors that prohibit things like anti-induction. While identifying a set of constraints seems like a tough task, it seems much more feasible than the task of justifying objective Bayesianism.

Making sense of improbability

Imagine that you take a coin that you believe to be fair and flip it 20 times. Each time it lands heads. You say to your friend: “Wow, what a crazy coincidence! There was a 1 in 2^20 chance of this outcome. That’s less than one in a million! Super surprising.”

Your friend replies: “I don’t understand. What’s so crazy about the result you got? Any other possible outcome (say, HHTHTTTHTHHHTHTTHHHH) was exactly as probable as getting all heads. So what’s so surprising?”

Responding to this is a little tricky. After all, it is the case that for a fair coin, the probability of 20 heads = the probability of HHTHTTTHTHHHTHTTHHHH = roughly one in a million.

[Figure: a simpler example with five tosses]

So in some sense your friend is right that there’s something unusual about saying that one of these outcomes is more surprising than another.

You might answer by saying “Well, let’s parse up the possible outcomes by the number of heads and tails. The outcome I got had 20 heads and 0 tails. Your example outcome had 12 heads and 8 tails. There are many, many more ways of getting 12 heads and 8 tails than of getting 20 heads and 0 tails, right? And there’s only one way of getting all 20 heads. So that’s why it’s so surprising.”

[Figure: probability vs. number of heads]

Your friend replies: “But hold on, now you’re just throwing out information. Sure my example outcome had 12 heads and 8 tails. But while there’s many ways of getting that number of heads and tails, there’s only exactly one way of getting the result I named! You’re only saying that your outcome is less likely because you’ve glossed over the details of my outcome that make it equally unlikely: the order of heads and tails!”

I think this is a pretty powerful response. What we want is a way to say that HHHHHHHHHHHHHHHHHHHH is surprising while HHTHTTTHTHHHTHTTHHHH is not, not that 20 heads is surprising while 12 heads and 8 tails is unsurprising. But it’s not immediately clear how we can say this.

Consider the information theoretic formalization of surprise, in which the surprisingness of an event E is proportional to the negative log of the probability of that event: Sur(E) = -log(P(E)) (with the log taken base 2, so that surprise is measured in bits). There are some nice reasons for this being a good definition of surprise, and it tells us that two equiprobable events should be equally surprising. If E is the event of observing all heads and E’ is the event of observing the sequence HHTHTTTHTHHHTHTTHHHH, then P(E) = P(E’) = 1/2^20. Correspondingly, Sur(E) = Sur(E’). So according to one reasonable formalization of what we mean by surprisingness, the two sequences of coin tosses are equally surprising. And yet, we want to say that there is something more epistemically significant about the first than the second.

(By the way, observing 20 heads is roughly 6.5 times more surprising than observing 12 heads and 8 tails, according to the above definition. We can plot the surprise curve to see how maximum surprise occurs at the two ends of the distribution, at which point it is 20 bits.)

[Figure: surprise vs. number of heads]
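
If you want to check these numbers yourself, here’s a quick sketch of the surprisal calculations (log base 2, so surprise comes out in bits):

```python
from math import comb, log2

def surprise(p):
    """Information-theoretic surprise of an event with probability p, in bits."""
    return -log2(p)

n = 20
p_one_sequence = 0.5 ** n                # any particular sequence of 20 flips
p_twelve_heads = comb(n, 12) * 0.5 ** n  # the coarser event "12 heads, 8 tails"

print(surprise(p_one_sequence))   # 20.0 bits (all heads, or any other single sequence)
print(surprise(p_twelve_heads))   # about 3.1 bits
print(surprise(p_one_sequence) / surprise(p_twelve_heads))  # roughly 6.5
```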

So there is our puzzle: in what sense does it make sense to say that observing 20 heads in a row is more surprising than observing the sequence HHTHTTTHTHHHTHTTHHHH? We certainly have strong intuitions that this is true, but do these intuitions make sense? How can we ground the intuitive implausibility of getting 20 heads? In this post I’ll try to point towards a solution to this puzzle.

Okay, so I want to start out by distinguishing three different perspectives on the observed sequence of coin tosses. These correspond to (1) looking at just the outcome, (2) looking at the way in which the observation affects the rest of your beliefs, and (3) looking at how the observation affects your expectation of future observations. In probability terms, these correspond to P(E), P(T | E), and P(E_next | E), where T ranges over theories and E_next is a possible future observation.

Looking at things through the first perspective, all outcomes are equiprobable, so there is nothing more epistemically significant about one than the other.

But considering the second way of thinking about things, there can be big differences in the significance of two equally probable observations. For instance, suppose that our set of theories under consideration are just the set of all possible biases of the coin, and our credences are initially peaked at .5 (an unbiased coin). Observing HHTHTTTHTHHHTHTTHHHH does little to change our prior. It shifts a little bit in the direction of a bias towards heads, but not significantly. On the other hand, observing all heads should have a massive effect on your beliefs, skewing them exponentially in the direction of extreme heads biases.

Importantly, since we’re looking at beliefs about coin bias, our distributions are now insensitive to any details about the coin flip beyond the number of heads and tails! As far as our beliefs about the coin bias go, finding only the first 8 to be tails looks identical to finding the last 8 to be tails. We’re not throwing out the information about the particular pattern of heads and tails, it’s just become irrelevant for the purposes of consideration of the possible biases of the coin.

[Figure: visualizing the change in beliefs about coin bias]

If we want to give a single value to quantify the difference in epistemic states resulting from the two observations, we can try looking at features of these distributions. For instance, we could look at the change in entropy of our distribution if we see E and compare it to the change in entropy upon seeing E’. This gives us a measure of how different observations might affect our uncertainty levels. (In our example, observing HHTHTTTHTHHHTHTTHHHH decreases uncertainty by about 0.8 bits, while observing all heads decreases uncertainty by 1.4 bits.) We could also compare the means of the posterior distributions after each observation, and see which is shifted most from the mean of the prior distribution. (In this case, our two means are 0.57 and 0.91).
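
Here’s a rough sketch of this kind of calculation. The prior over biases below is a stand-in of my own (a discretized distribution peaked at 0.5), so don’t expect the entropy drops and posterior means it prints to match the numbers above exactly; they depend on the precise prior you choose.

```python
import numpy as np

biases = np.linspace(0.01, 0.99, 99)            # discretized coin biases
prior = np.exp(-((biases - 0.5) ** 2) / 0.02)   # assumed prior, peaked at 0.5
prior /= prior.sum()

def posterior(heads, tails):
    """Posterior over bias after observing the given counts of heads and tails."""
    likelihood = biases ** heads * (1 - biases) ** tails
    post = prior * likelihood
    return post / post.sum()

def entropy(p):
    """Shannon entropy in bits."""
    return -np.sum(p * np.log2(p))

post_mixed = posterior(12, 8)   # HHTHTTTHTHHHTHTTHHHH: only the counts matter here
post_heads = posterior(20, 0)   # twenty heads in a row

print("entropy drop (mixed):    ", entropy(prior) - entropy(post_mixed))
print("entropy drop (all heads):", entropy(prior) - entropy(post_heads))
print("posterior means:         ", post_mixed @ biases, post_heads @ biases)
```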

Now, this was all looking at things through what I called perspective #2 above: how observations affect beliefs. Sometimes a more concrete way to understand the effect of intuitively implausible events is to look at how they affect specific predictions about future events. This is the approach of perspective #3. Sticking with our coin, we ask not about the bias of the coin, but about how we expect it to land on the next flip. To assess this, we look at the posterior predictive distributions for each posterior:

[Figure: posterior predictive distributions]

It shouldn’t be too surprising that observing all heads makes you more confident that the next flip will land heads than observing HHTHTTTHTHHHTHTTHHHH does. But looking at this graph gives a precise answer to how much more confident you should be. And it’s somewhat easier to think about than the entire distribution over coin biases.
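
Continuing the sketch above (and reusing biases, post_mixed, and post_heads from it), the posterior predictive probability of heads on the next flip is just the posterior-weighted average of the possible biases:

```python
def predict_heads(post):
    """P(next flip lands heads | data): sum over biases of bias * P(bias | data)."""
    return float(np.sum(biases * post))

print("P(heads next | HHTHTTTHTHHHTHTTHHHH):", predict_heads(post_mixed))
print("P(heads next | twenty heads):        ", predict_heads(post_heads))
```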

I’ll leave you with an example puzzle that relates to anthropic reasoning.

Say that one day you win the lottery. Yay! Super surprising! What an improbable event! But now compare this to the event that some stranger Bob Smith wins the lottery. This doesn’t seem so surprising. But supposing that Bob Smith buys lottery tickets at the same rate as you, the probability that you win is identical to the probability that Bob Smith wins. So… why is it any more surprising when you win?

This seems like a weird question. Then again, so did the coin-flipping question we started with. We want to respond with something like “I’m not saying that it’s improbable that some random person wins the lottery. I’m interested in the probability of me winning the lottery. And if we parse up the outcomes as that either I win the lottery or that somebody else wins the lottery, then clearly it’s much more improbable that I win than that somebody else wins.”

But this is exactly parallel to the earlier “I’m not interested in the precise sequence of coin flips, I’m just interested in the number of heads versus tails.” And the response to it is identical in form: If Bob Smith, a particular individual whose existence you are aware of, wins the lottery and you know it, then it’s cheating to throw away those details and just say “Somebody other than me won the lottery.” When you update your beliefs, you should take into account all of your evidence.

Does the framework I presented here help at all with this case?

A simple probability puzzle

In front of you is an urn containing some unknown quantity of balls. These balls are labeled 1, 2, 3, etc. They’ve been jumbled about so as to be in no particular order within the urn. You initially consider it equally likely that the urn contains 1 ball as that it contains 2 balls, 3 balls, and so on, up to 100 balls, which is the maximum capacity of the urn.

Now you reach in to draw out a ball and read the number on it: 34. What is the most likely theory for how many balls the urn contains?

 

 

(…)

 

(Think of an answer before reading on.)

 

(…)

 

 

The answer turns out to be 34!

Hopefully this is a little unintuitive. Specifically, what seems wrong is that you draw out a ball and then conclude that this is the ball with the largest value on it. Shouldn’t extreme results be unlikely? But remember, the balls were randomly jumbled about inside the urn. So whether or not the number on the ball you drew is at the beginning, middle, or end of the set of numbers is pretty much irrelevant.

What is relevant is the likelihood: Pr(I drew a ball numbered 34 | There are N balls). And the value of this is simply 1/N (so long as N ≥ 34).

In general, comparing the theory that there are N balls to the theory that there are M balls, we look at the likelihood ratio: Pr(I drew a ball numbered 34 | There are N balls) / Pr(I drew a ball numbered 34 | There are M balls). This is simply (1/N) / (1/M) = M/N.

Thus we see that our prior odds get updated by a factor that favors smaller values of N, as long as N ≥ 34. The likelihood is zero up to N = 33, maxes at 34, and then decreases steadily after it as N goes to infinity. Since our prior was evenly spread out between N = 1 and 100 and zero everywhere else, our posterior will be peaked at 34 and decline until 100, after which it will drop to zero.

One way to make this result seem more intuitive is to realize that while strictly speaking the most probable number of balls in the urn is 34, it’s not that much more probable than 35 or 36. The actual probability of 34 is still quite small; it just happens to be a little bit more probable than its larger neighbors. Indeed, the posterior probability of 34 exceeds that of 35 by a factor of only 35/34, no matter what the urn’s maximum capacity is, and the absolute gap between the two shrinks as that maximum capacity grows.
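
Here’s a minimal sketch of the whole update, using the uniform prior over 1 to 100 described above:

```python
MAX_BALLS = 100   # maximum capacity of the urn
observed = 34     # number on the drawn ball

# Uniform prior over N = 1..100; likelihood of drawing ball #34 is 1/N when N >= 34.
unnormalized = {
    n: (1 / MAX_BALLS) * (1 / n if n >= observed else 0.0)
    for n in range(1, MAX_BALLS + 1)
}
total = sum(unnormalized.values())
posterior = {n: p / total for n, p in unnormalized.items()}

print(max(posterior, key=posterior.get))   # 34
print(posterior[34], posterior[35])        # 34 is only barely more probable than 35
print(posterior[34] / posterior[35])       # the ratio is exactly 35/34
```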

The end goal of epistemology

What are we trying to do in epistemology?

Here’s a candidate for an answer: The goal of epistemology is to formalize rational reasoning.

This is pretty good. But I don’t think it’s quite enough. I want to distinguish between three possible end goals of epistemology.

  1. The goal of epistemology is to formalize how an ideal agent with infinite computational power should reason.
  2. The goal of epistemology is to formalize how an agent with limited computational power should reason.
  3. The goal of epistemology is to formalize how a rational human being should reason.

We can understand the second task as asking something like “How should I design a general artificial intelligence to most efficiently and accurately model the world?” Since any general AI is going to be implemented in a particular bit of hardware, the answer to this question will depend on details like the memory and processing power of the hardware.

For the first task, we don’t need to worry about these details. Imagine that you’re a software engineer with access to an oracle that instantly computes any function you hand it. You want to build a program that takes in input from its environment and, with the help of this oracle, computes a model of its environment. Hardware constraints are irrelevant; you are just interested in getting the maximum epistemic juice out of your sensory inputs that is logically possible.

The third task is probably the hardest. It is the most constrained of the three tasks; to accomplish it we need to first of all have a descriptively accurate model of the types of epistemic states that human beings have (e.g. belief and disbelief, comparative confidence, credences). Then we want to place norms on these states that are able to accommodate our cognitive quirks (for example, that don’t call things like memory loss or inability to instantly see all the logical consequences of a set of axioms irrational).

But both of these goals are on a spectrum. We aren’t interested in fully describing our epistemic states, because then there’s no space for placing non-trivial norms on them. And we aren’t interested in fully accommodating our cognitive quirks, because some of these quirks are irrational! It seems really hard to come up with precise and non-arbitrary answers to how descriptive we want to be and how many quirks we want to accommodate.

Now, in my experience, this third task is the one that most philosophers are working on. The second seems to be favored by statisticians and machine learning researchers. The first is favored by LessWrong rationalist-types.

For instance, rationalists tend to like Solomonoff induction as a gold standard for rational reasoning. But Solomonoff induction is literally uncomputable, immediately disqualifying it as a solution to tasks (2) and (3). The only sense in which Solomonoff induction is a candidate for the perfect theory of rationality is the sense of task (1). While it’s certainly not the case that Solomonoff induction is the perfect theory of rationality for a human or a general AI, it might be the right algorithm for an ideal agent with infinite computational power.

I think that disambiguating these three different potential goals of epistemology allows us to sidestep confusion resulting from evaluating a solution to one goal according to the standards of another. Let’s see this by purposefully glossing over the differences between the end goals.

We start with pure Bayesianism, which I’ll take to be the claim that rationality is about having credences that align with the probability calculus and updating them by conditionalization. (Let’s ignore the problem of priors for the moment.)

In favor of this theory: it works really well, in principle! Bayesianism has a lot of really nice properties like convergence to truth and maximizing relative entropy in updating on evidence (which is sort of like squeezing all the information out of your evidence).

In opposition: the problem of logical omniscience. A Bayesian expects that all of the logical consequences of a set of axioms should be immediately obvious to a rational agent, and therefore that all credences of the form P(logical consequence of axioms | axioms) should be 100%. But now I ask you: is 19,973 a prime number? Presumably you understand natural numbers, including how to multiply and divide them and what prime numbers are. But it seems wrong to declare that your inability to conclude from this basic knowledge that 19,973 is prime is irrational.

This is an appeal to task (2). We want to say that there’s a difference between rationality and computational power. An agent with infinite computational power can be irrational if it is running poor software. And an agent with finite computational power can be perfectly rational, in that it makes effective use of these limited computational resources.

What this suggests is that we want a theory of rationality that is indexed by the computational capacities of the agent in question. What’s rational for one agent might not be rational for another. Bayesianism by itself isn’t nuanced enough to do this; two agents with the same evidence (and the same priors) should always end up at the same final credences. What we want is a framework in which two agents with the same evidence, priors, and computational capacity have the same beliefs.

It might be helpful to turn to computational complexity theory for insights. For instance, maybe we want a principle that says that a polynomial-powered agent is not rationally required to solve NP-hard problems. But the exact details of how such a theory would turn out are not obvious to me. Nor is it obvious that there even is a single non-arbitrary choice.

Regardless, let’s imagine for the moment that we have in hand the perfect theory of rationality for task (2). This theory should reduce to (1) as a special case when the agent in question has infinite computational powers. And if we treat human beings very abstractly as having some well-defined quantity of memory and processing power, then the theory also places norms on human reasoning. But in doing this, we open a new possible set of objections. Might this theory condemn as irrational some cognitive features of humans that we want to label as arational (neither rational nor irrational)?

For instance, let’s suppose that this theory involves something like updating by conditionalization. Notice that in this process, your credence in the evidence being conditioned on goes to 100%. Perhaps we want to say that the only things we should be fully 100% confident in are our conscious experiences at the present moment. Your beliefs about past conscious experiences could certainly be mistaken (indeed, many regularly are). Even your beliefs about your conscious experiences from a moment ago are suspect!

What this implies is that the set of evidence you are conditioning on at any given moment is just the set of all your current conscious experiences. But this is way too small a set to do anything useful with. What’s worse, it’s constantly changing. The sound of a car engine I’m updating on right now will no longer be around to be updated on a moment from now. But conditionalization doesn’t allow this: if at time T we set our credence in the proposition “I heard a car engine at time T” to 100%, then at time T+1 our credence should still be 100%.

One possibility here is to deny that 100% credences always stay 100%, and allow for updating backwards in time. Another is to treat not just your current experiences but also all your past experiences as 100% certain. Both of these are pretty unsatisfactory to me. A more plausible approach is to think about the things you’re updating on as not just your present experiences, but the set of presently accessible memories. Of course, this raises the question of what we mean by accessibility, but let’s set that aside for a moment and rest on an intuitive notion that at a given moment there is some set of memories that you could call up at will.

If we allow for updating on this set of presently accessible memories as well as present experiences, then we solve the problem of the evidence set being too small. But we don’t solve the problem of past certainties becoming uncertain. Humans don’t have perfect memory, and we forget things over time. If we don’t want to call this memory loss irrational, then we have to abandon the idea that what counts as evidence at one moment will always count as evidence in the future.

The point I’m making here is that the perfect theory of rationality for task (2) might not be the perfect theory of rationality for task (3). Humans have cognitive quirks that might not be well-captured by treating our brain as a combination of a hard drive and processor. (Another example of this is the fact that our confidence levels are not continuous like real numbers. Trying to accurately model the set of qualitatively distinct confidence levels seems super hard.)

Notice that as we move from (1) to (2) to (3), things get increasingly difficult and messy. This makes sense if we think about the progression as adding more constraints to the problem (and increasingly vague constraints at that).

While I am hopeful that we can find an optimal algorithm for inference with infinite computing power, I am less hopeful that there is a unique best solution to (2), and still less for (3). This is not merely a matter of difficulty; the problems themselves become increasingly underspecified as we include constraints like “these rational norms should apply to humans.”

Deciphering conditional probabilities

How would you evaluate the following two probabilities?

  1. P(B | A)
  2. P(A → B)

In words, the first is “the probability that B is true, given that A is true” and the second is “the probability that if A is true, then B is true.” I don’t know about you, but these sound pretty darn similar to me.

But in fact, it turns out that they’re different. You can prove that P(A → B) is always greater than or equal to P(B | A), with equality only in the case that P(A) = 1 or P(A → B) = 1. The proof of this is not too difficult, but I’ll leave it to you to figure out.
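
If you’d like a sanity check before working out the proof, here’s a quick Monte Carlo sketch over random joint distributions for A and B:

```python
import random

# Sample random joint distributions over the four truth-value combinations of A and B,
# and check that P(A -> B) >= P(B | A) holds in every case.
for _ in range(100_000):
    w = [random.random() for _ in range(4)]
    total = sum(w)
    p_ab, p_a_notb, p_nota_b, p_nota_notb = (x / total for x in w)

    p_a = p_ab + p_a_notb
    if p_a == 0:                     # P(B | A) undefined; skip (essentially never happens)
        continue
    p_b_given_a = p_ab / p_a
    p_a_implies_b = 1 - p_a_notb     # the material conditional is false only when A & not-B

    assert p_a_implies_b >= p_b_given_a - 1e-12

print("No counterexamples found.")
```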

Conditional probabilities are not the same as probabilities of conditionals. But maybe this isn’t actually too strange. After all, material conditionals don’t do such a great job of capturing what we actually mean when we say “If… then…” For instance, consult your intuitions about the truth of the sentence “If 2 is odd then 2 is even.” This turns out to be true (because any material conditional with a true consequent is true). Similarly, think about the statement “If I am on Mars right now, then string theory is correct.” Again, this turns out to be true if we treat the “If… then…” as a material conditional (since any material conditional with a false antecedent is true).

The problem here is that we actually use “If… then…” clauses in several different ways, the logical structure of which are not well captured by the material implication. A → B is logically equivalent to “A is false or B is true,” which is not always exactly what we mean by “If A then B”. Sometimes “If A then B” means “B, because A.” Other times it means something more like “A gives epistemic support for B.” Still other times, it’s meant counterfactually, as something like “If A were to be the case, then B would be the case.”

So perhaps what we want is some other formula involving A and B that better captures our intuitions about conditional statements, and maybe conditional probabilities are the same as probabilities in these types of formulas.

But as we’re about to prove, this is wrong too. Not only does the material implication not capture the logical structure of conditional probabilities, but neither does any other logical truth function! You can prove a triviality result: that if such a formula exists, then all statements must be independent of one another (in which case conditional probabilities lose their meaning).

The proof:

  1. Suppose that there exists a function Γ(A, B) such that P(B | A) = P(Γ(A, B)) for every probability function P.
  2. Then, applying this to the probability function P( · | B), we get P(B | A & B) = P(Γ | B) (assuming P(A & B) > 0).
  3. So 1 = P(Γ | B).
  4. Similarly, applying it to P( · | -B), we get P(B | A & -B) = P(Γ | -B) (assuming P(A & -B) > 0).
  5. So 0 = P(Γ | -B).
  6. By the law of total probability, P(Γ) = P(Γ | B) P(B) + P(Γ | -B) P(-B).
  7. P(Γ) = 1 * P(B) + 0 * P(-B).
  8. P(Γ) = P(B).
  9. So P(B | A) = P(B).

This is a surprisingly strong result. No matter what your formula Γ is, we can say that either it doesn’t capture the logical structure of the conditional probability P(B | A), or it trivializes it.

We can think of this as saying that the language of first order logic is insufficiently powerful to express the conditionals in conditional probabilities. If you take any first order language and assign probabilities to all of its sentences, none of those credences will be conditional probabilities. To get conditional probabilities, you have to perform algebraic operations like division on the first order probabilities. This is an important (and unintuitive) thing to keep in mind when trying to map epistemic intuitions to probability theory.

The Problem of Logical Omniscience

Bayesian epistemology says that rational agents have credences that align with the probability calculus. A common objection is that this is actually really, really demanding. But we don’t have to say that rationality is about having perfectly calibrated credences that match the probability calculus to an arbitrary number of decimal points. Instead we want to say something like “Look, this is just our idealized model of perfectly rational reasoning. We understand that any agent with finite computational capacities is incapable of actually putting real numbers over the set of all possible worlds and updating them with perfect precision. All we say is that the closer to this ideal you are, the better.”

Which raises an interesting question: what do we mean by ‘closeness’? We want some metric to say how rational/irrational a given person is being (and how they can get closer to perfect rationality), but it’s not obvious what this metric should be. Also, it’s important to notice that the details of this metric are not specified by Bayesianism! If we want a precise theory of rationality that can be applied in the real world, we probably have to layer on at least this one additional premise.

Trying to think about candidates for a good metric is made more difficult by the realization that descriptively, our actual credences almost certainly don’t form a probability distribution. Humans are notoriously subadditive when considering the probabilities of disjuncts versus their disjunctions. And I highly doubt that most of my actual credences are normalized.

That said, even if we imagine that we have some satisfactory metric for comparing probability distributions to non-probability-distributions-that-really-ought-to-be-probability-distributions, our problems still aren’t over. The demandingness objection doesn’t just say that it’s hard to be rational. It says that in some cases the Bayesian standard for rationality doesn’t actually make sense. Enter the problem of logical omniscience.

The Bayesian standard for ideal rationality is the Kolmogorov axioms (or something like it). One of these axioms says that for any tautology T, P(T) = 1. In other words, we should be 100% confident in the truth of any tautology. This raises some thorny issues.

For instance, if the Collatz conjecture is true, then it is a tautology (given the definitions of addition, multiplication, natural numbers, and so on). So a perfectly rational being should instantly adopt a 100% credence in its truth. This already seems a bit wrong to me. Whether or not we have deduced the Collatz conjecture from the axioms looks more like an issue of raw computational power than one of rationality. I want to make a distinction between what it takes to be rational, and what it takes to be smart. Raw computing power is not necessarily rationality. Rationality is good software running on that hardware.

But even if we put that worry aside, things get even worse for the Bayesian. Not only is the Bayesian unable to say that a non-1 credence in a tautology can ever be reasonable, they also have no way to account for the phenomenon of obtaining evidence for mathematical truths.

If somebody comes up to you and shows you that the first 10^20 numbers all satisfy the Collatz conjecture, then, well, the Collatz conjecture is still either a tautology or a contradiction. Updating on the truth of the first 10^20 cases shouldn’t sway your credences at all, because nothing should sway your credences in mathematical truths. Credences of 1 stay 1, always. Same for credences of 0.

That is really, really undesirable behavior for an epistemic framework. At this moment there are thousands of graduate students sitting around feeling uncertain about mathematical propositions and updating on evidence for or against them, and it looks like they’re being perfectly rational to do so. (Both to be uncertain, and to move that uncertainty around with evidence.)

The problem here is not a superficial one. It goes straight to the root of the Bayesian formalism: the axioms that define probability theory. You can’t just throw out the axiom… what you end up with if you do so is an entirely different mathematical framework. You’re not talking about probabilities anymore! And without it you don’t even have the ability to say things like P(X) + P(-X) = 1. But keeping it entails that you can’t have non-1 credences in tautologies, and correspondingly that you can’t get evidence for them. It’s just true that P(theorem | axioms) = 1.

Just to push this point one last time: Suppose I ask you whether 79 is a prime number. Probably the first thing that you automatically do is run a few quick tests (is it even? Does it end in a five or a zero? No? Okay, then it’s not divisible by 2 or 5.) Now you add 7 to 9 to see whether the sum (16) is divisible by three. Is it? No. Upon seeing this, you become more confident that 79 is prime. You realize that 79 is only 2 more than 77, which is a multiple of 7 and 11. So 79 can’t be divisible by either 7 or 11. Your credence rises still more. A reliable friend tells you that it’s not divisible by 13. Now you’re even more confident! And so on.

It sure looks like each step of this thought process was perfectly rational. But what is P(79 is prime | 79 is not divisible by 3)? The exact same thing as P(79 is prime): 100%. The challenge for Bayesians is to avoid this undesirable verdict, and to explain how we can reason inductively about logical truths.
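
(As an aside, here’s the brute-force check that a logically omniscient agent would perform instantly, in a few lines of trial division. The puzzle is precisely that the step-by-step evidence-gathering above still looks rational for a bounded agent who hasn’t run it yet.)

```python
n = 79
# Trial division: 79 is prime iff no integer from 2 up to sqrt(79) (about 8.9) divides it.
divisors = [d for d in range(2, int(n ** 0.5) + 1) if n % d == 0]
print(f"{n} is prime" if not divisors else f"{n} is divisible by {divisors}")
```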