Anti-inductive priors

I used to think of Bayesianism as composed of two distinct parts: (1) setting priors and (2) updating by conditionalizing. In my mind, this second part was the crown jewel of Bayesian epistemology, while the first part was a little more philosophically problematic. Conditionalization tells you that for any prior distribution you might have, there is a unique rational set of new credences that you should adopt upon receiving evidence, and tells you how to get it. As to what the right priors are, well, that’s a different story. But we can at least set aside worries about priors with assurances about how even a bad prior will eventually be made up for in the long run after receiving enough evidence.

But now I’m realizing that this framing is pretty far off. It turns out that there aren’t really two independent processes going on, just one (and the philosophically problematic one at that): prior-setting. Your prior fully determines what happens when you update by conditionalization on any future evidence you receive. And the set of priors consistent with the probability axioms is large enough that it allows for this updating process to be extremely irrational.

I’ll illustrate what I’m talking about with an example.

Let’s imagine a really simple universe of discourse, consisting of just two objects and one predicate. We’ll make our predicate “is green” and denote objects a_1 and a_2 . Now, if we are being good Bayesians, then we should treat our credences as a probability distribution over the set of all state descriptions of the universe. These probabilities should all be derivable from some hypothetical prior probability distribution over the state descriptions, such that our credences at any later time are just the result of conditioning that prior on the total evidence we have by that time.

Let’s imagine that we start out knowing nothing (i.e. our starting credences are identical to the hypothetical prior) and then learn that one of the objects (a_1 ) is green. In the absence of any other information, induction says that we should become more confident that the other object is green as well. Is this guaranteed just by updating?

No! Some priors will allow induction to happen, but others will make you unresponsive to evidence. Still others will make you anti-inductive, becoming more and more confident that the next object is not green the more green things you observe. And all of this is perfectly consistent with the laws of probability theory!

Take a look at the following three possible prior distributions over our simple language:

[Table: three prior distributions P_1, P_2, and P_3 over the four state descriptions of the two-object language]

According to P_1 , your new credence in Ga_2 after observing Ga_1 is P_1(Ga_2 | Ga_1) = 0.80 , while your prior credence in Ga_2 was 0.50. Thus P_1 is an inductive prior; you get more confident in future objects being green when you observe past objects being green.

For P_2 , we have that P_2(Ga_2 | Ga_1) = 0.50 , and P_2(Ga_2) = 0.50 as well. Thus P_2 is a non-inductive prior: observing instances of green things doesn’t make future instances of green things more likely.

And finally, P_3(Ga_2 | Ga_1) = 0.20 , while P_3(Ga_2) = 0.5 . Thus P_3 is an anti-inductive prior. Observing that one object is green makes you less than half as confident that the next object will be green.

The anti-inductive prior can be made even more stark by just increasing the gap between the prior probability of Ga_1 \wedge Ga_2 and Ga_1 \wedge -Ga_2 . It is perfectly consistent with the axioms of probability theory for observing a green object to make you almost entirely certain that the next object you observe will not be green.
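The three kinds of prior can be made concrete with a quick sketch. The joint probabilities below are illustrative choices consistent with the conditional probabilities quoted above (they are assumptions for the example, not necessarily the exact values from the original table):

```python
# Three priors over the four state descriptions of the two-object universe.
# Keys are (a_1's color, a_2's color); "G" means green, "-G" means not green.
# These joint probabilities are illustrative choices consistent with the
# conditionals quoted in the text.
priors = {
    "P1 (inductive)":      {("G", "G"): 0.40, ("G", "-G"): 0.10,
                            ("-G", "G"): 0.10, ("-G", "-G"): 0.40},
    "P2 (non-inductive)":  {("G", "G"): 0.25, ("G", "-G"): 0.25,
                            ("-G", "G"): 0.25, ("-G", "-G"): 0.25},
    "P3 (anti-inductive)": {("G", "G"): 0.10, ("G", "-G"): 0.40,
                            ("-G", "G"): 0.40, ("-G", "-G"): 0.10},
}

for name, p in priors.items():
    prior_g2 = sum(v for (a1, a2), v in p.items() if a2 == "G")   # P(Ga2)
    p_ga1 = sum(v for (a1, a2), v in p.items() if a1 == "G")      # P(Ga1)
    posterior_g2 = p[("G", "G")] / p_ga1                          # P(Ga2 | Ga1)
    print(f"{name}: P(Ga2) = {prior_g2:.2f}, P(Ga2 | Ga1) = {posterior_g2:.2f}")
```

All three are perfectly legitimate probability distributions, yet conditioning on Ga_1 raises, leaves unchanged, or lowers the credence in Ga_2 depending only on the shape of the prior.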

Our universe of discourse here was very simple (one predicate and two objects). But the point generalizes. Regardless of how many objects and predicates there are in your language, you can have non-inductive or anti-inductive priors. And it isn’t even the case that there are fewer anti-inductive priors than inductive priors!

The deeper point here is that the prior is doing all the epistemic work. Your prior isn’t just an initial credence distribution over possible hypotheses, it also dictates how you will respond to any possible evidence you might receive. That’s why it’s a mistake to think of prior-setting and updating-by-conditionalization as two distinct processes. The results of updating by conditionalization are determined entirely by the form of your prior!

This really emphasizes the importance of having good criteria for setting priors. If we’re trying to formalize scientific inquiry, it’s really important to make sure our formalism rules out the possibility of anti-induction. But this just amounts to requiring rational agents to have constraints on their priors that go above and beyond the probability axioms!

What are these constraints? Do they select one unique best prior? The challenge is that actually finding a uniquely rationally justifiable prior is really hard. Carnap tried a bunch of different techniques for generating such a prior and was unsatisfied with all of them, and there isn’t any real consensus on what exactly this unique prior would be. Even worse, all such suggestions seem to end up being hostage to problems of language dependence – that is, that the “uniquely best prior” changes when you make an arbitrary translation from your language into a different language.

It looks to me like our best option is to abandon the idea of a single best prior (and with it, the notion that rational agents with the same total evidence can’t disagree). This doesn’t have to lead to total epistemic anarchy, where all beliefs are just as rational as all others. Instead, we can place constraints on the set of rationally permissible priors that prohibit things like anti-induction. While identifying a set of constraints seems like a tough task, it seems much more feasible than the task of justifying objective Bayesianism.

Making sense of improbability

Imagine that you take a coin that you believe to be fair and flip it 20 times. Each time it lands heads. You say to your friend: “Wow, what a crazy coincidence! There was a 1 in 2^20 chance of this outcome. That’s less than one in a million! Super surprising.”

Your friend replies: “I don’t understand. What’s so crazy about the result you got? Any other possible outcome (say, HHTHTTTHTHHHTHTTHHHH) had an equal probability as getting all heads. So what’s so surprising?”

Responding to this is a little tricky. After all, it is the case that for a fair coin, the probability of 20 heads = the probability of HHTHTTTHTHHHTHTTHHHH = roughly one in a million.


So in some sense your friend is right that there’s something unusual about saying that one of these outcomes is more surprising than another.

You might answer by saying “Well, let’s parse up the possible outcomes by the number of heads and tails. The outcome I got had 20 heads and 0 tails. Your example outcome had 12 heads and 8 tails. There are many more ways of getting 12 heads and 8 tails than of getting 20 heads and 0 tails, right? And there’s only one way of getting all 20 heads. So that’s why it’s so surprising.”

[Figure: probability vs. number of heads]

Your friend replies: “But hold on, now you’re just throwing out information. Sure my example outcome had 12 heads and 8 tails. But while there’s many ways of getting that number of heads and tails, there’s only exactly one way of getting the result I named! You’re only saying that your outcome is less likely because you’ve glossed over the details of my outcome that make it equally unlikely: the order of heads and tails!”

I think this is a pretty powerful response. What we want is a way to say that HHHHHHHHHHHHHHHHHHHH is surprising while HHTHTTTHTHHHTHTTHHHH is not, not that 20 heads is surprising while 12 heads and 8 tails is unsurprising. But it’s not immediately clear how we can say this.

Consider the information theoretic formalization of surprise, in which the surprisingness of an event E is proportional to the negative log of the probability of that event: Sur(E) = -log(P(E)). There are some nice reasons for this being a good definition of surprise, and it tells us that two equiprobable events should be equally surprising. If E is the event of observing all heads and E’ is the event of observing the sequence HHTHTTTHTHHHTHTTHHHH, then P(E) = P(E’) = 1/2^20. Correspondingly, Sur(E) = Sur(E’). So according to one reasonable formalization of what we mean by surprisingness, the two sequences of coin tosses are equally surprising. And yet, we want to say that there is something more epistemically significant about the first than the second.

(By the way, observing 20 heads is roughly 6.5 times more surprising than observing 12 heads and 8 tails, according to the above definition. We can plot the surprise curve to see how maximum surprise occurs at the two ends of the distribution, at which point it is 20 bits.)
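The surprise values in that parenthetical can be checked directly from the definition Sur(E) = -log2(P(E)):

```python
from math import comb, log2

n = 20
p_all_heads = 1 / 2**n              # probability of the one all-heads sequence
p_12_heads = comb(n, 12) / 2**n     # probability of *some* sequence with 12 heads

sur_all_heads = -log2(p_all_heads)  # exactly 20 bits
sur_12_heads = -log2(p_12_heads)    # about 3.06 bits
print(sur_all_heads, sur_12_heads, sur_all_heads / sur_12_heads)
```

Note the asymmetry in what's being measured: the first event is a single sequence, while the second lumps together all C(20, 12) = 125,970 sequences with that head count.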

[Figure: surprise vs. number of heads]

So there is our puzzle: in what sense does it make sense to say that observing 20 heads in a row is more surprising than observing the sequence HHTHTTTHTHHHTHTTHHHH? We certainly have strong intuitions that this is true, but do these intuitions make sense? How can we ground the intuitive implausibility of getting 20 heads? In this post I’ll try to point towards a solution to this puzzle.

Okay, so I want to start out by distinguishing three different perspectives on the observed sequence of coin tosses. These correspond to (1) looking at just the outcome, (2) looking at the way in which the observation affects the rest of your beliefs, and (3) looking at how the observation affects your expectation of future observations. In probability terms, these correspond to P(E), P(T | E) (where T ranges over the theories under consideration), and P(E’ | E).

Looking at things through the first perspective, all outcomes are equiprobable, so there is nothing more epistemically significant about one than the other.

But considering the second way of thinking about things, there can be big differences in the significance of two equally probable observations. For instance, suppose that our set of theories under consideration are just the set of all possible biases of the coin, and our credences are initially peaked at .5 (an unbiased coin). Observing HHTHTTTHTHHHTHTTHHHH does little to change our prior. It shifts a little bit in the direction of a bias towards heads, but not significantly. On the other hand, observing all heads should have a massive effect on your beliefs, skewing them exponentially in the direction of extreme heads biases.

Importantly, since we’re looking at beliefs about coin bias, our distributions are now insensitive to any details about the coin flip beyond the number of heads and tails! As far as our beliefs about the coin bias go, finding only the first 8 to be tails looks identical to finding the last 8 to be tails. We’re not throwing out the information about the particular pattern of heads and tails, it’s just become irrelevant for the purposes of consideration of the possible biases of the coin.

[Figure: visualizing the change in beliefs about coin bias]

If we want to give a single value to quantify the difference in epistemic states resulting from the two observations, we can try looking at features of these distributions. For instance, we could look at the change in entropy of our distribution if we see E and compare it to the change in entropy upon seeing E’. This gives us a measure of how different observations might affect our uncertainty levels. (In our example, observing HHTHTTTHTHHHTHTTHHHH decreases uncertainty by about 0.8 bits, while observing all heads decreases uncertainty by 1.4 bits.) We could also compare the means of the posterior distributions after each observation, and see which is shifted most from the mean of the prior distribution. (In this case, our two means are 0.57 and 0.91).

Now, this was all looking at things through what I called perspective #2 above: how observations affect beliefs. Sometimes a more concrete way to understand the effect of intuitively implausible events is to look at how they affect specific predictions about future events. This is the approach of perspective #3. Sticking with our coin, we ask not about the bias of the coin, but about how we expect it to land on the next flip. To assess this, we look at the posterior predictive distributions for each posterior:

[Figure: posterior predictive distributions]

It shouldn’t be too surprising that observing all heads makes you more confident that the next coin will land heads than observing HHTHTTTHTHHHTHTTHHHH. But looking at this graph gives a precise answer to how much more confident you should be. And it’s somewhat easier to think about than the entire distribution over coin biases.
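Here is a minimal sketch of both perspectives at once: updating a distribution over coin biases and then reading off the posterior predictive. The grid and the particular prior shape (a discretized bump peaked at 0.5) are assumptions for illustration; the exact numbers in the text depend on the prior the post used.

```python
from math import exp

# Discretized grid of possible coin biases, with an illustrative prior
# peaked at 0.5 (an assumption; any prior peaked at 0.5 shows the effect).
thetas = [i / 100 for i in range(1, 100)]
prior = [exp(-((t - 0.5) ** 2) / (2 * 0.15 ** 2)) for t in thetas]
z = sum(prior)
prior = [p / z for p in prior]

def posterior(heads, tails):
    """Bayes' rule on the grid: posterior ∝ prior × θ^heads × (1−θ)^tails."""
    post = [p * t**heads * (1 - t)**tails for p, t in zip(prior, thetas)]
    z = sum(post)
    return [p / z for p in post]

def predictive_heads(post):
    """Posterior predictive probability that the next flip lands heads."""
    return sum(p * t for p, t in zip(post, thetas))

post_mixed = posterior(12, 8)   # HHTHTTTHTHHHTHTTHHHH: only the counts matter
post_all = posterior(20, 0)     # twenty heads in a row
print(round(predictive_heads(post_mixed), 3), round(predictive_heads(post_all), 3))
```

Notice that the likelihood term depends only on the number of heads and tails, which is exactly the sense in which the particular pattern becomes irrelevant for beliefs about the bias.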

I’ll leave you with an example puzzle that relates to anthropic reasoning.

Say that one day you win the lottery. Yay! Super surprising! What an improbable event! But now compare this to the event that some stranger Bob Smith wins the lottery. This doesn’t seem so surprising. But supposing that Bob Smith buys lottery tickets at the same rate as you, the probability that you win is identical to the probability that Bob Smith wins. So… why is it any more surprising when you win?

This seems like a weird question. Then again, so did the coin-flipping question we started with. We want to respond with something like “I’m not saying that it’s improbable that some random person wins the lottery. I’m interested in the probability of me winning the lottery. And if we parse up the outcomes as that either I win the lottery or that somebody else wins the lottery, then clearly it’s much more improbable that I win than that somebody else wins.”

But this is exactly parallel to the earlier “I’m not interested in the precise sequence of coin flips, I’m just interested in the number of heads versus tails.” And the response to it is identical in form: If Bob Smith, a particular individual whose existence you are aware of, wins the lottery and you know it, then it’s cheating to throw away those details and just say “Somebody other than me won the lottery.” When you update your beliefs, you should take into account all of your evidence.

Does the framework I presented here help at all with this case?

A simple probability puzzle

In front of you is an urn containing some unknown quantity of balls. These balls are labeled 1, 2, 3, etc. They’ve been jumbled about so as to be in no particular order within the urn. You initially consider it equally likely that the urn contains 1 ball as that it contains 2 balls, 3 balls, and so on, up to 100 balls, which is the maximum capacity of the urn.

Now you reach in to draw out a ball and read the number on it: 34. What is the most likely theory for how many balls the urn contains?

 

 

(…)

 

(Think of an answer before reading on.)

 

(…)

 

 

The answer turns out to be 34!

Hopefully this is a little unintuitive. Specifically, what seems wrong is that you draw out a ball and then conclude that this is the ball with the largest value on it. Shouldn’t extreme results be unlikely? But remember, the balls were randomly jumbled about inside the urn. So whether or not the number on the ball you drew is at the beginning, middle, or end of the set of numbers is pretty much irrelevant.

What is relevant is the likelihood: Pr(I drew a ball numbered 34 | There are N balls). For N ≥ 34, the value of this is simply 1/N.

In general, comparing the theory that there are N balls to the theory that there are M balls, we look at the likelihood ratio: Pr(I drew a ball numbered 34 | There are N balls) / Pr(I drew a ball numbered 34 | There are M balls). This is simply M/N.

Thus we see that our prior odds get updated by a factor that favors smaller values of N, as long as N ≥ 34. The likelihood is zero up to N = 33, maxes at 34, and then decreases steadily after it as N goes to infinity. Since our prior was evenly spread out between N = 1 and 100 and zero everywhere else, our posterior will be peaked at 34 and decline until 100, after which it will drop to zero.

One way to make this result seem more intuitive is to realize that while strictly speaking the most probable number of balls in the urn is 34, it’s not that much more probable than 35 or 36. The actual probability of 34 is still quite small, it just happens to be a little bit more probable than its larger neighbors. And indeed, for larger values of the maximum capacity of the urn, the relative difference between the posterior probability of 34 and that of 35 decreases.

Deciphering conditional probabilities

How would you evaluate the following two probabilities?

  1. P(B | A)
  2. P(A → B)

In words, the first is “the probability that B is true, given that A is true” and the second is “the probability that if A is true, then B is true.” I don’t know about you, but these sound pretty darn similar to me.

But in fact, it turns out that they’re different. You can prove that P(A → B) is always greater than or equal to P(B | A) (with equality only in the case that P(A) = 1 or P(B | A) = 1). The proof of this is not too difficult, but I’ll leave it to you to figure out.
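You can also check the inequality numerically by sampling random joint distributions over the truth values of A and B (a sanity check, not a proof):

```python
import random

# Randomly sample joint distributions over the four truth-value assignments
# of A and B, and check that P(A -> B) >= P(B | A), where "A -> B" is the
# material conditional, i.e. -A or B.
random.seed(0)
violations = 0
for _ in range(10_000):
    weights = [random.random() for _ in range(4)]   # A&B, A&-B, -A&B, -A&-B
    total = sum(weights)
    p_ab, p_a_nb, p_na_b, p_na_nb = (w / total for w in weights)
    p_conditional = p_ab / (p_ab + p_a_nb)          # P(B | A)
    p_material = p_ab + p_na_b + p_na_nb            # P(-A or B)
    if p_material < p_conditional - 1e-12:
        violations += 1
print("violations:", violations)
```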

Conditional probabilities are not the same as probabilities of conditionals. But maybe this isn’t actually too strange. After all, material conditionals don’t do such a great job of capturing what we actually mean when we say “If… then…” For instance, consult your intuitions about the truth of the sentence “If 2 is odd then 2 is even.” This turns out to be true (because any material conditional with a true consequent is true). Similarly, think about the statement “If I am on Mars right now, then string theory is correct.” Again, this turns out to be true if we treat the “If… then…” as a material conditional (since any material conditional with a false antecedent is true).

The problem here is that we actually use “If… then…” clauses in several different ways, the logical structure of which are not well captured by the material implication. A → B is logically equivalent to “A is true or B is false,” which is not always exactly what we mean by “If A then B”. Sometimes “If A then B” means “B, because A.” Other times it means something more like “A gives epistemic support for B.” Still other times, it’s meant counterfactually, as something like “If A were to be the case, then B would be the case.”

So perhaps what we want is some other formula involving A and B that better captures our intuitions about conditional statements, and maybe conditional probabilities are the same as probabilities in these types of formulas.

But as we’re about to prove, this is wrong too. Not only does the material implication not capture the logical structure of conditional probabilities, but neither does any other logical truth function! You can prove a triviality result: that if such a formula exists, then all statements must be independent of one another (in which case conditional probabilities lose their meaning).

The proof:

  1. Suppose that there exists a function Γ(A, B) such that P(A | B) = P(Γ(A, B)).
  2. Then P(A | B & A) = P(Γ | A).
  3. So 1 = P(Γ | A).
  4. Similarly, P(A | B & -A) = P(Γ | -A).
  5. So 0 = P(Γ | -A).
  6. P(Γ) = P(Γ | A) P(A) + P(Γ | -A) P(-A).
  7. P(Γ) = 1 * P(A) + 0 * P(-A).
  8. P(Γ) = P(A).
  9. So P(A | B) = P(A).

This is a surprisingly strong result. No matter what your formula Γ is, we can say that either it doesn’t capture the logical structure of the conditional probability P(A | B), or it trivializes it.

We can think of this as saying that the language of first order logic is insufficiently powerful to express the conditionals in conditional probabilities. If you take any first order language and apply probabilities to all its valid sentences, none of those credences will be conditional probabilities. To get conditional probabilities, you have to perform algebraic operations like division on the first order probabilities. This is an important (and unintuitive) thing to keep in mind when trying to map epistemic intuitions to probability theory.

The Problem of Logical Omniscience

Bayesian epistemology says that rational agents have credences that align with the probability calculus. A common objection to this is that this is actually really really demanding. But we don’t have to say that rationality is about having perfectly calibrated credences that match the probability calculus to an arbitrary number of decimal points. Instead we want to say something like “Look, this is just our idealized model of perfectly rational reasoning. We understand that any agent with finite computational capacities is incapable of actually putting real numbers over the set of all possible worlds and updating them with perfect precision. All we say is that the closer to this ideal you are, the better.”

Which raises an interesting question: what do we mean by ‘closeness’? We want some metric to say how rational/irrational a given person is being (and how they can get closer to perfect rationality), but it’s not obvious what this metric should be. Also, it’s important to notice that the details of this metric are not specified by Bayesianism! If we want a precise theory of rationality that can be applied in the real world, we probably have to layer on at least this one additional premise.

Trying to think about candidates for a good metric is made more difficult by the realization that descriptively, our actual credences almost certainly don’t form a probability distribution. Humans are notoriously subadditive when considering the probabilities of disjuncts versus their disjunctions. And I highly doubt that most of my actual credences are normalized.

That said, even if we imagine that we have some satisfactory metric for comparing probability distributions to non-probability-distributions-that-really-ought-to-be-probability-distributions, our problems still aren’t over. The demandingness objection doesn’t just say that it’s hard to be rational. It says that in some cases the Bayesian standard for rationality doesn’t actually make sense. Enter the problem of logical omniscience.

The Bayesian standard for ideal rationality is the Kolmogorov axioms (or something like it). One of these axioms says that for any tautology T, P(T) = 1. In other words, we should be 100% confident in the truth of any tautology. This raises some thorny issues.

For instance, if the Collatz conjecture is true, then it is a tautology (given the definitions of addition, multiplication, natural numbers, and so on). So a perfectly rational being should instantly adopt a 100% credence in its truth. This already seems a bit wrong to me. Whether or not we have deduced the Collatz conjecture from the axioms looks more like an issue of raw computational power than one of rationality. I want to make a distinction between what it takes to be rational, and what it takes to be smart. Raw computing power is not necessarily rationality. Rationality is good software running on that hardware.

But even if we put that worry aside, things get even worse for the Bayesian. Not only can a Bayesian not say that your credences in tautologies can be reasonably non-1, they also have no way to account for the phenomenon of obtaining evidence for mathematical truths.

If somebody comes up to you and shows you that the first 10^20 numbers all satisfy the Collatz conjecture, then, well, the Collatz conjecture is still either a tautology or a contradiction. Updating on the truth of the first 10^20 cases shouldn’t sway your credences at all, because nothing should sway your credences in mathematical truths. Credences of 1 stay 1, always. Same for credences of 0.

That is really really undesirable behavior for an epistemic framework.  At this moment there are thousands of graduate students sitting around feeling uncertain about mathematical propositions and updating on evidence for or against them, and it looks like they’re being perfectly rational to do so. (Both to be uncertain, and to move that uncertainty around with evidence.)

The problem here is not a superficial one. It goes straight to the root of the Bayesian formalism: the axioms that define probability theory. You can’t just throw out the axiom… what you end up with if you do so is an entirely different mathematical framework. You’re not talking about probabilities anymore! And without it you don’t even have the ability to say things like P(X) + P(-X) = 1. But keeping it entails that you can’t have non-1 credences in tautologies, and correspondingly that you can’t get evidence for them. It’s just true that P(theorem | axioms) = 1.

Just to push this point one last time: Suppose I ask you whether 79 is a prime number. Probably the first thing that you automatically do is run a few quick tests (is it even? Does it end in a five or a zero? No? Okay, then it’s not divisible by 2 or 5.) Now you add 7 to 9 to see whether the sum (16) is divisible by three. Is it? No. Upon seeing this, you become more confident that 79 is prime. You realize that 79 is only 2 more than 77, which is a multiple of 7 and 11. So 79 can’t be divisible by either 7 or 11. Your credence rises still more. A reliable friend tells you that it’s not divisible by 13. Now you’re even more confident! And so on.

It sure looks like each step of this thought process was perfectly rational. But what is P(79 is prime | 79 is not divisible by 3)? The exact same thing as P(79 is prime): 100%. The challenge for Bayesians is to account for this undesirable behavior, and to explain how we can reason inductively about logical truths.

Anthropic reasoning in everyday life

Thought experiment from a past post:

A stranger comes up to you and offers to play the following game with you: “I will roll a pair of dice. If they land snake eyes (i.e. they both land 1), you give me one dollar. Otherwise, if they land anything else, I give you a dollar.”

Do you play this game?

[…]

Now imagine that the stranger is playing the game in the following way: First they find one person and offer to play the game with them. If the dice land snake eyes, then they collect a dollar and stop playing the game. Otherwise, they find ten new people and offer to play the game with them. Same as before: snake eyes, the stranger collects $1 from each and stops playing, otherwise he moves on to 100 new people. Et cetera forever.

When we include this additional information about the other games the stranger is playing, then the thought experiment becomes identical in form to the dice killer thought experiment. Thus updating on the anthropic information that you have been kidnapped gives a 90% chance of snake-eyes, which means you have a 90% chance of losing a dollar and only a 10% chance of gaining a dollar. Apparently you should now not take the offer!

This seems a little weird. Shouldn’t it be irrelevant if the game is being offered to other people? To an anthropic reasoner, the answer is a resounding no. It matters who else is, or might be, playing the game, because it gives us additional information about our place in the population of game-players.

Thus far this is nothing new. But now we take one more step: Just because you don’t know the spatiotemporal distribution of game offers doesn’t mean that you can ignore it!

So far the strange implications of anthropic reasoning have been mostly confined to bizarre thought experiments that don’t seem too relevant to the real world. But the implication of this line of reasoning is that anthropic calculations bleed out into ordinary scenarios. If there is some anthropically relevant information that would affect your probabilities, then you need to consider the probability that this information obtains.

In other words, if somebody comes up to you and makes you the offer described above, you can’t just calculate the expected value of the game and make your decision. Instead, you have to consider all possible distributions of game offers, calculate the probability of each, and average over the implied probabilities! This is no small task.

For instance, suppose that you have a 50% credence that the game is being offered only one time to one person: you. The other 50% is given to the “dice killer” scenario: that the game is offered in rounds to a group that decuples in size each round, and that this continues until the dice finally land snake-eyes. Presumably you then have to average over the expected value of playing the game for each scenario.

EV_1 = \$1 \cdot \frac{35}{36} - \$1 \cdot \frac{1}{36} = \$\frac{34}{36} \approx \$0.94

EV_2 = \$1 \cdot 0.1 - \$1 \cdot 0.9 = -\$0.80

EV = 0.50 \cdot EV_1 + 0.50 \cdot EV_2 \approx \$0.07

In this case, the calculation wasn’t too bad. But that’s because it was highly idealized. In general, representing your knowledge of the possible distributions of games offered seems quite difficult. But the more crucial point is that it is apparently not enough to go about your daily life calculating the expected value of the decisions facing you. You have to also consider who else might be facing the same decisions, and how this influences your chances of winning.
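The idealized calculation above is simple enough to write out directly:

```python
# Expected values for the snake-eyes game under the two scenarios in the
# text: a one-off offer (EV from the dice odds) and the dice-killer setup
# (where 90% of all players end up in the final, losing round).
ev_single = 1 * (35 / 36) - 1 * (1 / 36)   # win $1 unless snake eyes
ev_killer = 1 * 0.1 - 1 * 0.9              # anthropic update: 90% chance of losing
ev_mixed = 0.5 * ev_single + 0.5 * ev_killer
print(round(ev_single, 2), round(ev_killer, 2), round(ev_mixed, 2))
```

With a 50/50 credence split between the scenarios, the game is still barely worth playing; shifting a bit more credence toward the dice-killer scenario flips the sign.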

Can anybody think of a real-life example where these considerations change the sign of the expected value calculation?

Adam and Eve’s Anthropic Superpowers

The truly weirdest consequences of anthropic reasoning come from a cluster of thought experiments I’ll call the Adam and Eve thought experiments. These thought experiments all involve agents leveraging anthropic probabilities in their favor in bizarre ways that appear as if they have picked up superpowers. We’ll get there by means of a simple thought experiment designed to pump your intuitions in favor of our eventual destination.

In front of you is a jar. This jar contains either 10 balls or 100 balls. The balls are numbered in order from 1 to either 10 or 100. You reach in and pull out a ball, and find that it is numbered ‘7’. Which is now more likely: that the jar contains 10 balls or that it contains 100 balls? (Suppose that you were initially evenly split between the two possibilities.)

The answer should be fairly intuitive: It is now more likely that the jar contains ten balls.

If the jar contains 10 balls, you had a 1/10 chance of drawing #7. On the other hand, if the jar contains 100 balls you had a 1/100 chance of drawing #7. This corresponds to a likelihood ratio of 10:1 in favor of the jar having ten balls. Since your prior odds in the two possibilities were 1:1, your posterior odds should be 10:1. Thus the posterior probability of 10 balls is 10/11, or roughly 91%.
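The odds arithmetic here is worth making explicit, since exactly the same calculation reappears in the Doomsday argument below:

```python
# Bayes factor for the jar puzzle: drawing ball #7 from a jar containing
# either 10 or 100 balls, starting from even (1:1) prior odds.
likelihood_10 = 1 / 10      # P(draw #7 | 10 balls)
likelihood_100 = 1 / 100    # P(draw #7 | 100 balls)
bayes_factor = likelihood_10 / likelihood_100   # 10:1 favoring 10 balls

prior_odds = 1.0
posterior_odds = prior_odds * bayes_factor
posterior_prob_10 = posterior_odds / (posterior_odds + 1)   # 10/11 ≈ 0.91
print(bayes_factor, round(posterior_prob_10, 3))
```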

Now, let’s apply this same reasoning to something more personal. Imagine two different theories of human history. In the first theory, there are 200 billion people that will ever live. In the second, there are 2 trillion people that will ever live. We want to update on the anthropic evidence of our birth order in the history of humanity.

There have been roughly 100 billion people that ever lived, so our birth order is about 100 billion. The self-sampling assumption says that this is just like drawing a ball from a jar that contains either 200 billion numbered balls or 2 trillion numbered balls, and finding that the ball you drew is numbered 100 billion.

The likelihood ratio you get is 10:1, so your posterior odds are ten times more favorable for the “200 billion total humans” theory than for the “2 trillion total humans”. If you were initially evenly split between these two, then noticing your birth order should bring you to a 91% confidence in the ‘extinction sooner’ hypothesis.

This line of reasoning is called the Doomsday argument, and it leads down into quite a rabbit hole. I don’t want to explore that rabbit hole quite yet. For the moment, let’s just note that ordinary Bayesian updating on your own birth order favors theories in which there are fewer total humans to ever live. The strength of this update depends linearly on the number of humans being considered: comparing Theory 1 (100 people) to Theory 2 (100 trillion people) gives a likelihood ratio of one trillion in favor of Theory 1 over Theory 2. So in general, it appears that we should be extremely confident that extremely optimistic pictures of a long future for humanity are wrong. The more optimistic, the less likely.

With this in mind, let’s go on to Adam and Eve.

Suspend your disbelief for a moment and imagine that there were at some point just two humans on the face of the Earth – Adam and Eve. This fateful couple gave rise to all of human history, and we are all their descendants. Now, put yourself in their perspective.

From this perspective, there are two possible futures that might unfold. In one of them, the two original humans procreate and start the chain of events leading to the rest of human history. In the other, the two original humans refuse to procreate, thus preventing human history from happening. For the sake of this thought experiment, let’s imagine that Adam and Eve know that these are the only two possibilities (that is, suppose that there’s no scenario in which they procreate and have kids, but then those kids die off or otherwise prevent the occurrence of history as we know it).

By the above reasoning, Adam and Eve should expect that the second of these is enormously more likely than the first. After all, if they never procreate and eventually just die off, then their birth orders are 1 and 2 out of a grand total of 2. If they do procreate, though, then their birth orders are 1 and 2 out of at least 100 billion – which makes the procreation scenario 50 billion times less likely than the alternative!

Now, the unusual bit of this comes from the fact that it seems like Adam and Eve have control over whether or not they procreate. For the sake of the thought experiment, imagine that they are both fertile, and they can take actions that will certainly result in pregnancy. Also assume that if they don’t procreate, Eve won’t get accidentally pregnant by some unusual means.

This control over their procreation, coupled with the improbability of their procreation, allows them to wield apparently magical powers. For instance, Adam is feeling hungry and needs to go out and hunt. He makes a firm commitment with Eve: “I shall wait for an hour for a healthy young deer to die in front of our cave entrance. If no such deer dies, then we will procreate and have children, leading to the rest of human history. If a deer does die, then we will not procreate, and we will guarantee that we never have kids for the rest of our lives.”

Now, there’s some low prior on a healthy young deer just dying right in front of them. Let’s say it’s something like 1 in a billion, giving prior odds of 1:1,000,000,000 that Adam and Eve get their easy meal. But now when we take into account the anthropic update, it becomes 100 billion times more likely that the deer does die, because this outcome has been tied to the nonexistence of the rest of human history. The likelihood ratio here is 100,000,000,000:1. So our posterior odds will be 100:1 in favor of the deer falling dead, just as the two anthropic reasoners desire! This is a 99% chance of a free meal!
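Spelled out in odds form, with the numbers from the story:

```python
from fractions import Fraction

prior_odds = Fraction(1, 10**9)  # deer dies : deer doesn't, 1 in a billion

# Anthropic update: if the deer dies, human history stops at 2 people;
# if not, it contains at least 100 billion. Self-sampling makes the
# "deer dies" branch 100 billion times likelier.
likelihood_ratio = Fraction(100 * 10**9, 1)

posterior_odds = prior_odds * likelihood_ratio          # 100:1 for the deer dying
posterior_prob = posterior_odds / (posterior_odds + 1)  # 100/101

print(posterior_odds)         # 100
print(float(posterior_prob))  # ~0.99
```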

This is super weird. It sure looks like Adam is able to exercise telekinetic powers to make deer drop dead in front of him at will. Clearly something has gone horribly wrong here! But the argument appears to be totally sound, conditional on the acceptance of the principles we started off with. All that is required is that we allow ourselves to update on evidence of the form “I am the Nth human being to have been born” (along with the very unusual setup of the thought experiment).

This setup is so artificial that it can be easy to just dismiss it as not worth considering. But now I’ll give another thought experiment of the same form that is much less outlandish, so much so that it may actually one day occur!

Here goes…

The year is 2100 AD, and independent, autonomous countries are no more. The entire planet is governed by a single planetary government with an incredibly tight grip on everything that happens under its reign. Anything that this World Government dictates is guaranteed to be carried out, and there is no serious chance that it will lose power. Technology has advanced to the point that colonization of other planets is easily feasible. In fact, the World Government has a carefully laid out plan for colonization of the entire galaxy. The ships have been built and are ready for deployment, the target planets have been identified, and at the World Government’s command, galactic colonization can begin, sure to lead to a vast sprawling Galactic Empire of humanity.

Now, the World Government is keeping these plans at a standstill for a very particular reason: they are anthropic reasoners! They recognize that by firmly committing to either carry out the colonization or not, they are able to wield enormous anthropic superpowers and do things that would otherwise be completely impossible.

For instance, a few years ago scientists detected a deadly cosmic burst from the Sun headed towards Earth. Scientists warned that given the angle of the blast, there was only a 1% chance that it would miss Earth. In addition, they assessed that if the blast hit Earth, it would surely result in the extinction of everybody on the planet.

The World Government made the following plans: They launched tens of millions of their colonizing ships and had them wait for further instructions in a region safe from the cosmic rays. Then, the World Government instructed that if the cosmic rays hit Earth, the ships commence galactic colonization. Otherwise, if the cosmic rays missed Earth, the ships should return to Earth’s surface and abandon the colonization plans.

Why these plans? Well, because by tying the outcome of the cosmic ray collision to the future history of humanity, they leverage in their favor the enormous improbability of their early birth order given galactic colonization! If a Galactic Empire future contains 1 billion times more total humans than an Earth-bound future, then the prior odds of 1:100 against the cosmic rays missing Earth get multiplied by a likelihood ratio of 1,000,000,000:1 in favor of a miss. The result is posterior odds of 10,000,000:1, or a posterior confidence of 99.99999% that the cosmic rays will miss!
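The same computation, using the story's 1% miss chance and billion-fold population factor:

```python
from fractions import Fraction

prior_odds = Fraction(1, 100)  # rays miss : rays hit (the scientists' 1% miss chance)

# Anthropic update: if the rays hit, colonization begins and the future
# contains a billion times more humans, so self-sampling penalizes the
# "hit" branch by a factor of one billion.
likelihood_ratio = Fraction(10**9, 1)

posterior_odds = prior_odds * likelihood_ratio          # 10,000,000:1 for a miss
posterior_prob = posterior_odds / (posterior_odds + 1)

print(posterior_odds)         # 10000000
print(float(posterior_prob))  # ~0.9999999 (99.99999%)
```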

In short, by wielding tight control over decisions about the future existence of enormous numbers of humans, the World Government is able to exercise apparently magical abilities. Cosmic rays headed their way? No problem, just threaten the universe with a firm commitment to send out their ships to colonize the galaxy. The improbability of this result makes the cosmic rays “swerve”, keeping the people of Earth safe.

The reasoning employed in this story should be very disturbing to you. The challenge is to explain how to get out of its conclusion.

One possibility is to deny anthropic reasoning. But then we also have to deny all the ordinary everyday cases of anthropic reasoning that seem thoroughly reasonable (and necessary for rationality). (See here.)

We could deny that updating on our indexical evidence of our place in history actually has the effects that I’ve claimed. But this seems wrong as well. The calculation done here is exactly the type of calculation we’ve done in less controversial scenarios.

We could accept the conclusions. But that seems out of the question. Probabilities are supposed to be a useful tool we apply to make sense of a complicated world. In the end, the world is not run by probabilities, but by physics. And physics doesn’t have clauses that permit cosmic rays to swerve out of the way for clever humans with plans of conquering the stars.

There’s actually one other way out. But I’m not super fond of it. This way out is to craft a new anthropic principle, specifically for the purpose of counteracting the anthropic considerations in cases like these. The name of this anthropic principle is the Self-Indication Assumption. This principle says that theories that predict more observers like you become more likely in virtue of this fact.

Self-Indication Assumption: The prior probability of a theory is proportional to the number of observers like you it predicts will exist.

Suppose we have narrowed it down to two possible theories of the universe. Theory 1 predicts that there will be 1 million observers like you. Theory 2 predicts that there will be 1 billion observers like you. The self-indication assumption says that before we assess the empirical evidence for these two theories, we should assign priors that make Theory 2 one thousand times more likely than Theory 1.

It’s important to note that this principle is not about updating on evidence. It’s about setting priors. If we update on our existence, this favors neither Theory 1 nor Theory 2. Both predict the existence of an observer like us with 100% probability, so the likelihood ratio is 1:1, and no update is performed.

Instead, we can think of the self-indication assumption as saying the following: Look at all the possible observers that you could be, across all theories of the world. Distribute your priors evenly across these possible observers. Then, for each theory you’re interested in, add up the priors on all the observers it contains to get its prior probability.

When I first heard of this principle, my reaction was “…WHY???” For whatever reason, it just made no sense to me as a procedure for setting priors. But it must be said that this principle beautifully solves all of the problems I’ve brought up in this post. Remember that the improbability being wielded in every case was the improbability of existing in a world where there are many other future observers like you. The self-indication assumption tilts our priors toward these worlds in exactly the opposite direction, perfectly canceling out the anthropic update.

So, for instance, let’s take the very first thought experiment in this post. Compare a theory of the future in which 200 billion people exist total to a theory in which 2 trillion people exist total. We said that our birth order gives us a likelihood ratio of 10:1 in favor of the first theory. But now the self-indication assumption tells us that our prior odds should be 1:10, in favor of the second theory! Putting these two anthropic principles together, we get 1:1 odds. The two theories are equally likely! This seems refreshingly sane.
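The cancellation is exact, and easy to see in odds form with the same population figures:

```python
from fractions import Fraction

n_small = 200 * 10**9  # total humans in the "extinction sooner" theory
n_large = 2 * 10**12   # total humans in the "long future" theory

# Self-sampling: our birth order favors the smaller theory 10:1.
birth_order_lr = Fraction(n_large, n_small)

# Self-indication: priors proportional to observer counts,
# i.e. prior odds of 1:10 against the smaller theory.
sia_prior_odds = Fraction(n_small, n_large)

print(birth_order_lr)                   # 10
print(sia_prior_odds * birth_order_lr)  # 1 -- the two effects cancel exactly
```

Since both factors scale linearly with the ratio of total observers, the cancellation holds for any pair of population sizes, not just these two.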

As far as I currently know, this is the only way out of the absurd results of the thought experiments I’ve presented in this post. The self-indication assumption seems really weird and unjustified to me, but I think a really good argument for it is just that it restores sanity to rationality in the face of anthropic craziness.

Some final notes:

Applying the self-indication assumption to the sleeping beauty problem (discussed here) makes you a thirder instead of a halfer. I previously defended the halfer position on the grounds that (1) the prior on Heads and Tails should be 50/50 and (2) Sleeping Beauty has no information to update on. The self-indication assumption leads to the denial of (1). Since there are different numbers of observers like you if the coin lands Heads versus if it lands Tails, the prior odds should not be 1:1. Instead they should be 2:1 in favor of Heads, reflecting the fact that there are two possible “you”s if the coin lands Heads, and only one if the coin lands Tails.

In addition, while I’ve presented thought experiments that show the positives of the self-indication assumption, there are cases where it gives very bizarre answers. I won’t go into them now, but I do want to plant a flag here to indicate that the self-indication assumption is by no means uncontroversial.