# The end goal of epistemology

What are we trying to do in epistemology?

Here’s a candidate for an answer: The goal of epistemology is to formalize rational reasoning.

This is pretty good. But I don’t think it’s quite enough. I want to distinguish between three possible end goals of epistemology.

1. The goal of epistemology is to formalize how an ideal agent with infinite computational power should reason.
2. The goal of epistemology is to formalize how an agent with limited computational power should reason.
3. The goal of epistemology is to formalize how a rational human being should reason.

We can understand the second task as asking something like “How should I design a general artificial intelligence to most efficiently and accurately model the world?” Since any general AI is going to be implemented in a particular bit of hardware, the answer to this question will depend on details like the memory and processing power of the hardware.

For the first task, we don’t need to worry about these details. Imagine that you’re a software engineer with access to an oracle that instantly computes any function you hand it. You want to build a program that takes in input from its environment and, with the help of this oracle, computes a model of its environment. Hardware constraints are irrelevant; you are just interested in squeezing as much epistemic juice out of your sensory inputs as is logically possible.

The third task is probably the hardest. It is the most constrained of the three tasks; to accomplish it we need to first of all have a descriptively accurate model of the types of epistemic states that human beings have (e.g. belief and disbelief, comparative confidence, credences). Then we want to place norms on these states that are able to accommodate our cognitive quirks (for example, that don’t call things like memory loss or inability to instantly see all the logical consequences of a set of axioms irrational).

But both of these requirements come in degrees. We aren’t interested in fully describing our epistemic states, because then there’s no space left for placing non-trivial norms on them. And we aren’t interested in fully accommodating our cognitive quirks, because some of these quirks are irrational! It seems very hard to come up with precise and non-arbitrary answers to how descriptive we want to be and how many quirks we want to accommodate.

Now, in my experience, this third task is the one that most philosophers are working on. The second seems to be favored by statisticians and machine learning researchers. The first is favored by LessWrong rationalist-types.

For instance, rationalists tend to like Solomonoff induction as a gold standard for rational reasoning. But Solomonoff induction is literally uncomputable, immediately disqualifying it as a solution to tasks (2) and (3). The only sense in which Solomonoff induction is a candidate for the perfect theory of rationality is the sense of task (1). While it’s certainly not the case that Solomonoff induction is the perfect theory of rationality for a human or a general AI, it might be the right algorithm for an ideal agent with infinite computational power.

I think that disambiguating these three different potential goals of epistemology allows us to sidestep confusion resulting from evaluating a solution to one goal according to the standards of another. Let’s see this by purposefully glossing over the differences between the end goals.

We start with pure Bayesianism, which I’ll take to be the claim that rationality is about having credences that align with the probability calculus and updating them by conditionalization. (Let’s ignore the problem of priors for the moment.)
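As a concrete illustration, updating by conditionalization just means zeroing out the worlds ruled out by the evidence and renormalizing what remains. The toy world, numbers, and variable names below are all hypothetical:

```python
from fractions import Fraction

# toy prior over four worlds, labelled (rain, wet_grass); the numbers are made up
prior = {
    (1, 1): Fraction(3, 8), (1, 0): Fraction(1, 8),
    (0, 1): Fraction(1, 4), (0, 0): Fraction(1, 4),
}

# conditionalize on the evidence "the grass is wet" (second coordinate is 1):
# discard the worlds where it is false, then renormalize
kept = {w: p for w, p in prior.items() if w[1] == 1}
posterior = {w: p / sum(kept.values()) for w, p in kept.items()}

# P(rain) moves from 1/2 in the prior to 3/5 in the posterior
```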

In favor of this theory: it works really well, in principle! Bayesianism has a lot of really nice properties, like convergence to the truth and minimizing the relative entropy between posterior and prior when updating on evidence (which is sort of like squeezing all of the information out of your evidence, and nothing more).

In opposition: the problem of logical omniscience. A Bayesian expects all of the logical consequences of a set of axioms to be immediately obvious to a rational agent, and therefore that all credences of the form P(logical consequence of axioms | axioms) should be 100%. But now I ask you: is 19,973 a prime number? Presumably you understand natural numbers, including how to multiply and divide them and what prime numbers are. But it seems wrong to declare that an inability to instantly conclude from this basic knowledge that 19,973 is prime is irrational.
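(For what it’s worth, 19,973 really is prime, but verifying that takes genuine computation. Here is a minimal trial-division sketch, not an efficient algorithm, just a way to see the work involved:)

```python
def is_prime(n: int) -> bool:
    """Trial division: check every candidate divisor up to sqrt(n)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Even this small question requires ~140 divisibility checks --
# exactly the kind of work that logical omniscience assumes away.
print(is_prime(19_973))  # True
```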

This is an appeal to task (2). We want to say that there’s a difference between rationality and computational power. An agent with infinite computational power can be irrational if it is running poor software. And an agent with finite computational power can be perfectly rational, in that it makes effective use of these limited computational resources.

What this suggests is that we want a theory of rationality that is indexed by the computational capacities of the agent in question. What’s rational for one agent might not be rational for another. Bayesianism by itself isn’t nuanced enough to do this; two agents with the same evidence (and the same priors) should always end up at the same final credences. What we want is a framework in which two agents with the same evidence, priors, and computational capacity have the same beliefs.

It might be helpful to turn to computational complexity theory for insights. For instance, maybe we want a principle that says that a polynomial-powered agent is not rationally expected to solve NP problems. But the exact details of how such a theory would turn out are not obvious to me. Nor is it obvious that there even is a single non-arbitrary choice.

Regardless, let’s imagine for the moment that we have in hand the perfect theory of rationality for task (2). This theory should reduce to (1) as a special case when the agent in question has infinite computational powers. And if we treat human beings very abstractly as having some well-defined quantity of memory and processing power, then the theory also places norms on human reasoning. But in doing this, we open a new possible set of objections. Might this theory condemn as irrational some cognitive features of humans that we want to label as arational (neither rational nor irrational)?

For instance, let’s suppose that this theory involves something like updating by conditionalization. Notice that in this process, your credence in the evidence being conditioned on goes to 100%. Perhaps we want to say that the only things we should be fully 100% confident in are our conscious experiences at the present moment. Your beliefs about past conscious experiences could certainly be mistaken (indeed, many regularly are). Even your beliefs about your conscious experiences from a moment ago are suspect!

What this implies is that the set of evidence you are conditioning on at any given moment is just the set of all your current conscious experiences. But this is far too small a set to do anything useful with. What’s worse, it’s constantly changing. The sound of a car engine you’re updating on right now will no longer be around to be updated on a moment later. But this can’t be right; if at time T we set our credence in the proposition “I heard a car engine at time T” to 100%, then at time T+1 our credence should still be 100%.

One possibility here is to deny that 100% credences always stay 100%, and allow for updating backwards in time. Another is to treat not just your current experiences but also all your past experiences as 100% certain. Both of these are pretty unsatisfactory to me. A more plausible approach is to think about the things you’re updating on as not just your present experiences, but the set of presently accessible memories. Of course, this raises the question of what we mean by accessibility, but let’s set that aside for a moment and rest on an intuitive notion that at a given moment there is some set of memories that you could call up at will.

If we allow for updating on this set of presently accessible memories as well as present experiences, then we solve the problem of the evidence set being too small. But we don’t solve the problem of past certainties becoming uncertain. Humans don’t have perfect memory, and we forget things over time. If we don’t want to call this memory loss irrational, then we have to abandon the idea that what counts as evidence at one moment will always count as evidence in the future.

The point I’m making here is that the perfect theory of rationality for task (2) might not be the perfect theory of rationality for task (3). Humans have cognitive quirks that might not be well-captured by treating our brain as a combination of a hard drive and processor. (Another example of this is the fact that our confidence levels are not continuous like real numbers. Trying to accurately model the set of qualitatively distinct confidence levels seems super hard.)

Notice that as we move from (1) to (2) to (3), things get increasingly difficult and messy. This makes sense if we think about the progression as adding more constraints to the problem (and increasingly vague constraints at that).

While I am hopeful that we can find an optimal algorithm for inference with infinite computing power, I am less hopeful that there is a unique best solution to (2), and still less for (3). This is not merely a matter of difficulty; the problems themselves become increasingly underspecified as we include constraints like “these rational norms should apply to humans.”

# Deciphering conditional probabilities

How would you evaluate the following two probabilities?

1. P(B | A)
2. P(A → B)

In words, the first is “the probability that B is true, given that A is true” and the second is “the probability that if A is true, then B is true.” I don’t know about you, but these sound pretty darn similar to me.

But in fact, it turns out that they’re different. Indeed, you can prove that P(A → B) is always greater than or equal to P(B | A), with equality only in the case that P(A) = 1 or P(A → B) = 1. The proof of this is not too difficult, but I’ll leave it to you to figure out.
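The relationship between the two quantities can be spot-checked numerically. The sketch below samples random joint distributions over the four ways A and B can be true or false, and compares P(A → B) = P(¬A ∨ B) with P(B | A):

```python
import random

random.seed(0)
for _ in range(100_000):
    # random joint distribution over (A&B, A&~B, ~A&B, ~A&~B)
    w = [random.random() for _ in range(4)]
    ab, a_nb, na_b, na_nb = (x / sum(w) for x in w)

    p_cond = ab / (ab + a_nb)   # P(B | A)
    p_material = 1 - a_nb       # P(A -> B) = 1 - P(A & ~B)

    # the probability of the material conditional always dominates
    assert p_material >= p_cond - 1e-12
```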

Conditional probabilities are not the same as probabilities of conditionals. But maybe this isn’t actually too strange. After all, material conditionals don’t do such a great job of capturing what we actually mean when we say “If… then…” For instance, consult your intuitions about the truth of the sentence “If 2 is odd then 2 is even.” This turns out to be true (because any material conditional with a true consequent is true). Similarly, think about the statement “If I am on Mars right now, then string theory is correct.” Again, this turns out to be true if we treat the “If… then…” as a material conditional (since any material conditional with a false antecedent is true).

The problem here is that we actually use “If… then…” clauses in several different ways, and their logical structure is not well captured by the material implication. A → B is logically equivalent to “A is false or B is true,” which is not always exactly what we mean by “If A then B”. Sometimes “If A then B” means “B, because A.” Other times it means something more like “A gives epistemic support for B.” Still other times, it’s meant counterfactually, as something like “If A were to be the case, then B would be the case.”

So perhaps what we want is some other formula involving A and B that better captures our intuitions about conditional statements, and maybe conditional probabilities are the same as probabilities in these types of formulas.

But as we’re about to prove, this is wrong too. Not only does the material implication not capture the logical structure of conditional probabilities, but neither does any other logical truth function! You can prove a triviality result: that if such a formula exists, then all statements must be independent of one another (in which case conditional probabilities lose their meaning).

The proof:

1. Suppose that there exists a formula Γ(A, B) such that P(A | B) = P(Γ(A, B)) for every probability function P. (Below, Γ abbreviates Γ(A, B).)
2. Conditioning on A yields another probability function, so P(A | B & A) = P(Γ | A).
3. So 1 = P(Γ | A).
4. Similarly, P(A | B & -A) = P(Γ | -A).
5. So 0 = P(Γ | -A).
6. By the law of total probability, P(Γ) = P(Γ | A) P(A) + P(Γ | -A) P(-A).
7. P(Γ) = 1 * P(A) + 0 * P(-A).
8. P(Γ) = P(A).
9. So P(A | B) = P(A).
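The result can also be spot-checked by brute force: there are only 16 truth functions of two propositions, and none of them has probability equal to P(A | B) across even a handful of example distributions. (The three distributions below are arbitrary choices of mine.)

```python
import itertools

atoms = [(1, 1), (1, 0), (0, 1), (0, 0)]   # (A, B) truth values

# a few arbitrary probability distributions over the four atoms
dists = [(0.4, 0.1, 0.2, 0.3), (0.25, 0.25, 0.25, 0.25), (0.1, 0.4, 0.3, 0.2)]

def prob(event, p):
    """Probability of the set of atoms where `event` holds."""
    return sum(pi for atom, pi in zip(atoms, p) if event(atom))

for table in itertools.product([0, 1], repeat=4):   # all 16 truth functions
    gamma = dict(zip(atoms, table))
    matches = all(
        abs(prob(lambda a: gamma[a], p)
            - prob(lambda a: a[0] and a[1], p) / prob(lambda a: a[1], p)) < 1e-9
        for p in dists
    )
    # no truth function equals P(A | B) on every distribution
    assert not matches
```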

This is a surprisingly strong result. No matter what your formula Γ is, we can say that either it doesn’t capture the logical structure of the conditional probability P(A | B), or it trivializes it.

We can think of this as saying that the language of first order logic is insufficiently powerful to express the conditionals in conditional probabilities. If you take any first order language and assign probabilities to all of its sentences, none of those credences will be conditional probabilities. To get conditional probabilities, you have to perform algebraic operations like division on the first order probabilities. This is an important (and unintuitive) thing to keep in mind when trying to map epistemic intuitions to probability theory.

# The Problem of Logical Omniscience

Bayesian epistemology says that rational agents have credences that align with the probability calculus. A common objection to this is that this is actually really really demanding. But we don’t have to say that rationality is about having perfectly calibrated credences that match the probability calculus to an arbitrary number of decimal points. Instead we want to say something like “Look, this is just our idealized model of perfectly rational reasoning. We understand that any agent with finite computational capacities is incapable of actually putting real numbers over the set of all possible worlds and updating them with perfect precision. All we say is that the closer to this ideal you are, the better.”

Which raises an interesting question: what do we mean by ‘closeness’? We want some metric that says how rational or irrational a given person is being (and how they can get closer to perfect rationality), but it’s not obvious what this metric should be. It’s also important to notice that the details of this metric are not specified by Bayesianism! If we want a precise theory of rationality that can be applied in the real world, we probably have to layer on at least this one additional premise.

Trying to think about candidates for a good metric is made more difficult by the realization that, descriptively, our actual credences almost certainly don’t form a probability distribution. Humans are notoriously subadditive when weighing the probabilities of disjuncts against their disjunctions. And I highly doubt that most of my actual credences, insofar as I have them, are normalized.

That said, even if we imagine that we have some satisfactory metric for comparing probability distributions to non-probability-distributions-that-really-ought-to-be-probability-distributions, our problems still aren’t over. The demandingness objection doesn’t just say that it’s hard to be rational. It says that in some cases the Bayesian standard for rationality doesn’t actually make sense. Enter the problem of logical omniscience.

The Bayesian standard for ideal rationality is the Kolmogorov axioms (or something like it). One of these axioms says that for any tautology T, P(T) = 1. In other words, we should be 100% confident in the truth of any tautology. This raises some thorny issues.

For instance, if the Collatz conjecture is true, then it is a tautology (given the definitions of addition, multiplication, natural numbers, and so on). So a perfectly rational being should instantly adopt a 100% credence in its truth. This already seems a bit wrong to me. Whether or not we have deduced the Collatz conjecture from the axioms looks more like an issue of raw computational power than one of rationality. I want to make a distinction between what it takes to be rational and what it takes to be smart. Raw computing power is not necessarily rationality. Rationality is good software running on that hardware.

But even if we put that worry aside, things get even worse for the Bayesian. Not only can a Bayesian not allow reasonable non-1 credences in tautologies, they also have no way to account for the phenomenon of obtaining evidence for mathematical truths.

If somebody comes up to you and shows you that the first 10^20 numbers all satisfy the Collatz conjecture, then, well, the Collatz conjecture is still either a tautology or a contradiction. Updating on the truth of the first 10^20 cases shouldn’t sway your credences at all, because nothing should sway your credences in mathematical truths. Credences of 1 stay 1, always. Same for credences of 0.
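And yet checking cases is exactly the kind of evidence-gathering mathematicians actually do. A brute-force verification of the conjecture for small starting points (the bound and step cap below are arbitrary choices of mine) looks like:

```python
def reaches_one(n: int, max_steps: int = 10_000) -> bool:
    """Iterate the Collatz map n -> n/2 (even) or 3n+1 (odd) until hitting 1."""
    for _ in range(max_steps):
        if n == 1:
            return True
        n = 3 * n + 1 if n % 2 else n // 2
    return False

# every starting point up to 10,000 funnels down to 1
assert all(reaches_one(n) for n in range(1, 10_001))
```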

That is really really undesirable behavior for an epistemic framework.  At this moment there are thousands of graduate students sitting around feeling uncertain about mathematical propositions and updating on evidence for or against them, and it looks like they’re being perfectly rational to do so. (Both to be uncertain, and to move that uncertainty around with evidence.)

The problem here is not a superficial one. It goes straight to the root of the Bayesian formalism: the axioms that define probability theory. You can’t just throw out the offending axiom; what you end up with if you do is an entirely different mathematical framework. You’re not talking about probabilities anymore! Without it you don’t even have the ability to say things like P(X) + P(-X) = 1. But keeping it entails that you can’t have non-1 credences in tautologies, and correspondingly that you can’t get evidence for them. It’s just true that P(theorem | axioms) = 1.

Just to push this point one last time: Suppose I ask you whether 79 is a prime number. Probably the first thing that you automatically do is run a few quick tests (is it even? Does it end in a five or a zero? No? Okay, then it’s not divisible by 2 or 5.) Now you add 7 to 9 to see whether the sum (16) is divisible by three. Is it? No. Upon seeing this, you become more confident that 79 is prime. You realize that 79 is only 2 more than 77, which is a multiple of 7 and 11. So 79 can’t be divisible by either 7 or 11. Your credence rises still more. A reliable friend tells you that it’s not divisible by 13. Now you’re even more confident! And so on.
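You can even quantify how this kind of evidence ought to move a bounded reasoner. In the sketch below (the candidate pool and the “credence” proxy are my own hypothetical stand-ins), each divisibility test removes only composites from a pool of candidates around 79, so the fraction of surviving candidates that are prime never falls, and ends at 1:

```python
def is_prime(n: int) -> bool:
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

pool = list(range(50, 150))           # numbers we are "uncertain" about
credences = [sum(map(is_prime, pool)) / len(pool)]

for d in [2, 3, 5, 7, 11, 13]:        # the test "not divisible by d"
    pool = [n for n in pool if n % d != 0]
    credences.append(sum(map(is_prime, pool)) / len(pool))

assert 79 in pool                     # 79 survives every test
assert all(b >= a for a, b in zip(credences, credences[1:]))
assert credences[-1] == 1.0           # in this range, all survivors are prime
```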

It sure looks like each step of this thought process was perfectly rational. But what is P(79 is prime | 79 is not divisible by 3)? The exact same thing as P(79 is prime): 100%. The challenge for Bayesians is to explain away this undesirable behavior, and to account for how we can rationally reason inductively about logical truths.

# Sapiens: How Shared Myths Change the World

I recently read Yuval Noah Harari’s book Sapiens and loved it. In addition to fascinating and disturbing details about the evolutionary history of Homo sapiens and a wonderful account of human history, he has a really interesting way of talking about the cognitive abilities that make humans distinct from other species. I’ll dive right into this latter topic in this post.

Imagine two people in a prisoner’s dilemma. To try to make it relevant to our ancestral environment, let’s say that they are strangers running into one another, and each sees that the other has some resources. There are four possible outcomes. First, they could both cooperate and team up to catch some food that neither would be able to get on their own, and then share the food. Second, they could both defect, attacking each other and both walking away badly injured. And third and fourth, one could cooperate while the other defects, corresponding to one of them stabbing the other in the back and taking their resources. (Let’s suppose that each of the two is currently holding resources of more value than they could obtain by teaming up and hunting.)

Now, the problem is that on standard accounts of rational decision making, the decision that maximizes expected reward for each individual is to defect. That’s bad! The best outcome for everybody is that the two team up and share the loot, and neither walks away injured!

You might just respond “Well, who cares about what our theory of rational decision making says? Humans aren’t rational.” We’ll come back to this in a bit. But for now I’ll say that the problem is not just that our theory of rationality says that we should defect. It’s that this line of reasoning implies that cooperating is an unstable strategy. Imagine a society fully populated with cooperators. Now suppose an individual appears with a mutation that causes them to defect. This defector outperforms the cooperators, because they get to keep stabbing people in the back and stealing their loot and never have to worry about anybody doing the same to them. The result is then that the “gene for defecting” (speaking very metaphorically at this point; the behavior doesn’t necessarily have to be transmitted genetically) spreads like a virus through the population, eventually transforming our society of cooperators to a society of defectors. And everybody’s worse off.

On the other hand, imagine a society full of defectors. What if a cooperator is born into this society? Well, they pretty much right away get stabbed in the back and die out. So a society of defectors stays a society of defectors, and a society of cooperators degenerates into a society of defectors. The technical way of saying this is that in prisoner’s dilemmas, cooperation is not a Nash equilibrium – a strategy that nobody can benefit from unilaterally abandoning. The only Nash equilibrium is universal defection.
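The game-theoretic claim is easy to verify with a toy payoff matrix. The numbers below are hypothetical, but they have the standard prisoner’s dilemma ordering (temptation > reward > punishment > sucker’s payoff):

```python
# my_payoff[(my_move, their_move)]; "C" = cooperate, "D" = defect
payoff = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def best_response(their_move: str) -> str:
    return max("CD", key=lambda m: payoff[(m, their_move)])

# defection is the best response to anything, so (D, D) is the only Nash equilibrium
assert best_response("C") == "D"
assert best_response("D") == "D"

# ...even though mutual cooperation pays everyone more than mutual defection
assert payoff[("C", "C")] > payoff[("D", "D")]
```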

Okay, so this is all bad news. We have good game theoretic reasons to expect society to degenerate into a bunch of people stabbing each other in the back. But mysteriously, the record of history has humans coming together to form larger and larger cooperative institutions. What Yuval Noah Harari and many others argue is that the distinctively human force that saves us from these game theoretic traps and creates civilizations is the power of shared myths.

For instance, suppose that the two strangers happened to share a belief in a powerful all-knowing God that punishes defectors in the afterlife and rewards cooperators. Think about how this shifts the reasoning. Now each person thinks “Even if I successfully defect and loot this other person’s resources, I still will have hell to pay in the afterlife. It’s just not worth it to risk incurring God’s wrath! I’ll cooperate.” And thus we get a cooperative equilibrium!

Still you might object “Okay, but what if an atheist is born into this society of God-fearing cooperative people? They’ll begin defecting and successfully spread through the population, right? And then so much for your cooperative equilibrium.”

The superbly powerful thing about these shared myths is the way in which they can restructure society around them. So for instance, it would make sense for a society with the cooperator-punishing God myth to develop social norms around punishing defectors. The mythical punishment becomes an actual real-world punishment by the myth’s adherents. And this is enough to tilt the game-theoretic balance even for atheists.
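We can sketch how a norm of punishing defectors retilts the balance. Subtracting a (hypothetical) expected social punishment from every defecting move makes cooperation the dominant strategy, whatever the other player does:

```python
payoff = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
punishment = 4   # hypothetical expected cost imposed on defectors by believers

adjusted = {(m, o): v - (punishment if m == "D" else 0)
            for (m, o), v in payoff.items()}

def best_response(their_move: str) -> str:
    return max("CD", key=lambda m: adjusted[(m, their_move)])

# with the punishment in place, cooperation dominates for everyone,
# atheist or not -- the myth has reshaped the real payoffs
assert best_response("C") == "C"
assert best_response("D") == "C"
```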

The point being: The spreading of a powerful shared myth can shift the game theoretic structure of the world, altering the landscape of possible social structures. What’s more, such myths can increase the overall fitness of a society. And we need not rely on group selection arguments here; the presence of the shared myth increases the fitness of every individual.

A deeper point is that the specific way in which the landscape is altered depends on the details of the shared myth. So if we contrast the God myth above to a God that punishes defectors but also punishes mortals who punish defectors, we lose the stability property that we sought. The suggestion being: different ideas alter the game theoretic balance of the world in different ways, and sometimes subtle differences can be hugely important.

Another take-away from this simple example is that shared myths can become embodied within us, both in our behavior and in our physiology. Thus we come back to the “humans aren’t rational” point: The cooperator equilibrium becomes more stable if the God myth somehow becomes hardwired into our brains. These ideas take hold of us and shape us in their image.

Let’s go further into this. In our sophisticated secular society, it’s not too controversial to refer to the belief in all-good and all-knowing gods as a myth. But Yuval Noah Harari goes further. To him, the concept of the shared myth goes much deeper than just our ideas about the supernatural. In fact, most of our native way of viewing the world consists of a network of shared myths and stories that we tell one another.

After all, the universe is just physics. We’re atoms bumping into one another. There are no particles of fairness or human rights, no quantum fields for human meaning or karmic debts. These are all shared myths. Economic systems consist of mostly shared stories that we tell each other, stories about how much a dollar bill is worth and what the stock price of Amazon is. None of these things are really out there in the world. They are in our brains, and they are there for an important reason: they open up the possibility for societal structures that would otherwise be completely impossible. Imagine having a global trade network without the shared myth of the value of money. Or a group of millions of humans living packed together in a city that didn’t all on some level believe in the myths of human value and respect.

Just think about this for a minute. Humans have this remarkable ability to radically change our way of interacting with one another and our environments by just changing the stories that we tell one another. We are able to do this because of two features of our brains. First, we are extraordinarily creative. We can come up with ideas like money and God and law and democracy and whole-heartedly believe in them, to the point that we are willing to sacrifice our lives for them. Second, we are able to communicate these ideas to one another. This allows the ideas to spread and become shared myths. And most remarkably, all of these ideas (capitalism and communism, democracy and fascism) are running on essentially the same hardware! In Harari’s words:

> While the behaviour patterns of archaic humans remained fixed for tens of thousands of years, Sapiens could transform their social structures, the nature of their interpersonal relations, their economic activities and a host of other behaviours within a decade or two. Consider a resident of Berlin, born in 1900 and living to the ripe age of one hundred. She spent her childhood in the Hohenzollern Empire of Wilhelm II; her adult years in the Weimar Republic, the Nazi Third Reich and Communist East Germany; and she died a citizen of a democratic and reunited Germany. She had managed to be a part of five very different sociopolitical systems, though her DNA remained exactly the same.

# Anthropic reasoning in everyday life

Thought experiment from a past post:

A stranger comes up to you and offers to play the following game with you: “I will roll a pair of dice. If they land snake eyes (i.e. they both land 1), you give me one dollar. Otherwise, if they land anything else, I give you a dollar.”

Do you play this game?

[…]

Now imagine that the stranger is playing the game in the following way: First they find one person and offer to play the game with them. If the dice land snake eyes, then they collect a dollar and stop playing the game. Otherwise, they find ten new people and offer to play the game with them. Same as before: snake eyes, the stranger collects $1 from each and stops playing; otherwise he moves on to 100 new people. Et cetera forever.

When we include this additional information about the other games the stranger is playing, the thought experiment becomes identical in form to the dice killer thought experiment. Thus updating on the anthropic information that you are one of the people being offered the game gives a 90% chance of snake eyes, which means you have a 90% chance of losing a dollar and only a 10% chance of gaining a dollar. Apparently you should now not take the offer!

This seems a little weird. Shouldn’t it be irrelevant whether the game is being offered to other people? To an anthropic reasoner, the answer is a resounding no. It matters who else is, or might be, playing the game, because it gives us additional information about our place in the population of game-players.

Thus far this is nothing new. But now we take one more step: just because you don’t know the spatiotemporal distribution of game offers doesn’t mean that you can ignore it! So far the strange implications of anthropic reasoning have been mostly confined to bizarre thought experiments that don’t seem too relevant to the real world. But the implication of this line of reasoning is that anthropic calculations bleed out into ordinary scenarios. If there is some anthropically relevant information that would affect your probabilities, then you need to consider the probability of each way that information might turn out. In other words, if somebody comes up to you and makes you the offer described above, you can’t just calculate the expected value of the game and make your decision.
Instead, you have to consider all possible distributions of game offers, calculate the probability of each, and average over the implied probabilities! This is no small order.

For instance, suppose that you have a 50% credence that the game is being offered only one time, to one person: you. The other 50% goes to the “dice killer” scenario: the game is offered in rounds to a group that decuples in size each round, and this continues until the dice finally land snake eyes. Presumably you then have to average over the expected value of playing the game in each scenario:

$$EV_1 = \$1 \cdot \frac{35}{36} - \$1 \cdot \frac{1}{36} = \$\frac{34}{36} \approx \$0.94$$

$$EV_2 = \$1 \cdot 0.1 - \$1 \cdot 0.9 = -\$0.80$$

$$EV = 0.50 \cdot EV_1 + 0.50 \cdot EV_2 \approx \$0.07$$

In this case, the calculation wasn’t too bad. But that’s because it was highly idealized. In general, representing your knowledge of the possible distributions of games offered seems quite difficult. But the more crucial point is that it is apparently not enough to go about your daily life calculating the expected value of the decisions facing you. You have to also consider who else might be facing the same decisions, and how this influences your chances of winning.

Can anybody think of a real-life example where these considerations change the sign of the expected value calculation?

# Adam and Eve’s Anthropic Superpowers

The truly weirdest consequences of anthropic reasoning come from a cluster of thought experiments I’ll call the Adam and Eve thought experiments. These thought experiments all involve agents leveraging anthropic probabilities in their favor in bizarre ways that make it appear as if they have picked up superpowers. We’ll get there by means of a simple thought experiment designed to pump your intuitions in favor of our eventual destination.

In front of you is a jar. This jar contains either 10 balls or 100 balls. The balls are numbered in order from 1 to either 10 or 100.
You reach in and pull out a ball, and find that it is numbered ‘7’. Which is now more likely: that the jar contains 10 balls or that it contains 100 balls? (Suppose that you were initially evenly split between the two possibilities.)

The answer should be fairly intuitive: It is now more likely that the jar contains ten balls. If the jar contains 10 balls, you had a 1/10 chance of drawing #7. On the other hand, if the jar contains 100 balls you had a 1/100 chance of drawing #7. This corresponds to a likelihood ratio of 10:1 in favor of the jar having ten balls. Since your prior odds in the two possibilities were 1:1, your posterior odds should be 10:1. Thus the posterior probability of 10 balls is 10/11, or roughly 91%.

Now, let’s apply this same reasoning to something more personal. Imagine two different theories of human history. In the first theory, there are 200 billion people that will ever live. In the second, there are 2 trillion people that will ever live. We want to update on the anthropic evidence of our birth order in the history of humanity. There have been roughly 100 billion people that ever lived, so our birth order is about 100 billion. The self-sampling assumption says that this is just like drawing a ball from a jar that contains either 200 billion numbered balls or 2 trillion numbered balls, and finding that the ball you drew is numbered 100 billion. The likelihood ratio you get is 10:1, so your posterior odds are ten times more favorable for the “200 billion total humans” theory than for the “2 trillion total humans” theory. If you were initially evenly split between these two, then noticing your birth order should bring you to a 91% confidence in the ‘extinction sooner’ hypothesis.

This line of reasoning is called the Doomsday argument, and it leads down into quite a rabbit hole. I don’t want to explore that rabbit hole quite yet.
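Both the jar update and the birth-order update are the same two-line odds calculation; a sketch in exact arithmetic:

```python
from fractions import Fraction

def posterior_odds(prior_odds: Fraction, likelihood_ratio: Fraction) -> Fraction:
    """Bayes' rule in odds form: posterior odds = prior odds x likelihood ratio."""
    return prior_odds * likelihood_ratio

# jar: 10 balls vs 100 balls, even prior, drew ball #7
odds = posterior_odds(Fraction(1, 1), Fraction(1, 10) / Fraction(1, 100))
assert odds == 10                                 # 10:1 for the small jar
assert odds / (odds + 1) == Fraction(10, 11)      # roughly 91% confidence

# doomsday: 200 billion vs 2 trillion total humans, birth order ~100 billion
# (same ratios, scaled down for readability)
odds = posterior_odds(Fraction(1, 1), Fraction(1, 200) / Fraction(1, 2000))
assert odds == 10                                 # the same 10:1 update
```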
For the moment, let’s just note that ordinary Bayesian updating on your own birth order favors theories on which there are fewer total humans to ever live. The strength of this update depends linearly on the number of humans being considered: comparing Theory 1 (100 people) to Theory 2 (100 trillion people) gives a likelihood ratio of one trillion in favor of Theory 1 over Theory 2. So in general, it appears that we should be extremely confident that extremely optimistic pictures of a long future for humanity are wrong. The more optimistic, the less likely.

With this in mind, let’s go on to Adam and Eve. Suspend your disbelief for a moment and imagine that there were at some point just two humans on the face of the Earth – Adam and Eve. This fateful couple gave rise to all of human history, and we are all their descendants. Now, place yourself in their perspective. From this perspective, there are two possible futures that might unfold. In one of them, the two original humans procreate and start the chain of events leading to the rest of human history. In the other, the two original humans refuse to procreate, thus preventing human history from happening. For the sake of this thought experiment, let’s imagine that Adam and Eve know that these are the only two possibilities (that is, suppose that there’s no scenario in which they procreate and have kids, but then those kids die off or somehow else prevent the occurrence of history as we know it).

By the above reasoning, Adam and Eve should expect that the second of these is enormously more likely than the first. After all, if they never procreate and eventually just die off, then their birth orders are 1 and 2 out of a grand total of 2. If they do procreate, though, then their birth orders are 1 and 2 out of at least 100 billion. This is 50 billion times less likely than the alternative!

Now, the unusual bit comes from the fact that it seems like Adam and Eve have control over whether or not they procreate.
For the sake of the thought experiment, imagine that they are both fertile and can take actions that will certainly result in pregnancy. Also assume that if they don’t procreate, Eve won’t get accidentally pregnant by some unusual means. This control over their procreation, coupled with the improbability of their procreation, allows them to wield apparently magical powers.

For instance, Adam is feeling hungry and needs to go out and hunt. He makes a firm commitment with Eve: “I shall wait for an hour for a healthy young deer to die in front of our cave entrance. If no such deer dies, then we will procreate and have children, leading to the rest of human history. If one does, then we will not procreate, and we will guarantee that we don’t have kids for the rest of our lives.”

Now, there’s some low prior on a healthy young deer just dying right in front of them. Let’s say it’s something like 1 in a billion, so our prior odds are 1:1,000,000,000 against Adam and Eve getting their easy meal. But when we take the anthropic update into account, it becomes 100 billion times more likely that the deer does die, because this outcome has been tied to the nonexistence of the rest of human history. The likelihood ratio here is 100,000,000,000:1. So our posterior odds are 100:1 in favor of the deer falling dead, just as the two anthropic reasoners desire! This is a 99% chance of a free meal!

This is super weird. It sure looks like Adam is able to exercise telekinetic powers to make deer drop dead in front of him at will. Clearly something has gone horribly wrong here! But the argument appears to be totally sound, conditional on the acceptance of the principles we started off with. All that is required is that we allow ourselves to update on evidence of the form “I am the Nth human being to have been born” (as well as the very unusual setup of the thought experiment). This setup is so artificial that it can be easy to just dismiss it as not worth considering.
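The deer commitment can be written out in the same odds form; the specific numbers (a 1-in-a-billion prior, a 100-billion-fold anthropic likelihood ratio) are stipulations of the thought experiment, not anything I’ve derived here:

```python
from fractions import Fraction

prior_odds = Fraction(1, 10**9)        # 1 : 1,000,000,000 against the deer dying
likelihood_ratio = Fraction(10**11)    # anthropic update: 100,000,000,000 : 1
posterior_odds = prior_odds * likelihood_ratio

print(posterior_odds)                         # 100 -> odds of 100:1 in favor of the deer dying
print(posterior_odds / (posterior_odds + 1))  # 100/101, about a 99% chance of a free meal
```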
But now I’ll give another thought experiment of the same form that is much less outlandish, so much so that it may actually one day occur! Here goes…

The year is 2100 AD, and independent, autonomous countries are no more. The entire planet is governed by a single planetary government with an incredibly tight grip on everything that happens under its reign. Anything that this World Government dictates is guaranteed to be carried out, and there is no serious chance that it will lose power. Technology has advanced to the point that colonization of other planets is easily feasible. In fact, the World Government has a carefully laid out plan for colonization of the entire galaxy. The ships have been built and are ready for deployment, the target planets have been identified, and at the command of the World Government, galactic colonization can begin, sure to lead to a vast sprawling Galactic Empire of humanity.

Now, the World Government is keeping these plans at a standstill for a very particular reason: they are anthropic reasoners! They recognize that by firmly committing either to carry out the colonization or not, they can wield enormous anthropic superpowers and do things that would otherwise be completely impossible.

For instance, a few years ago scientists detected a deadly cosmic burst from the Sun headed towards Earth. Scientists warned that given the angle of the blast, there was only a 1% chance that it would miss Earth. In addition, they assessed that if the blast hit Earth, it would surely result in the extinction of everybody on the planet. The World Government made the following plans: they launched tens of millions of their colonizing ships and had them wait for further instructions in a region safe from the cosmic rays. Then the World Government instructed that if the cosmic rays hit Earth, the ships should commence galactic colonization.
Otherwise, if the cosmic rays missed Earth, the ships should return to Earth’s surface and abandon the colonization plans.

Why these plans? Because by tying the outcome of the cosmic ray collision to the future history of humanity, they leverage in their favor the enormous improbability of their early birth order in a history that includes galactic colonization! If a Galactic Empire future contains 1 billion times more total humans than an Earth-bound future, then the prior odds of 1:100 in favor of the cosmic rays hitting Earth get multiplied by a likelihood ratio of 1,000,000,000:1 in favor of the cosmic rays missing Earth. The result is posterior odds of 10,000,000:1, or a posterior confidence of 99.99999% that the cosmic rays will miss!

In short, by wielding tight control over decisions about the future existence of enormous numbers of humans, the World Government is able to exercise apparently magical abilities. Cosmic rays headed their way? No problem, just threaten the universe with a firm commitment to send out the ships and colonize the galaxy. The improbability of this result makes the cosmic rays “swerve”, keeping the people of Earth safe.

The reasoning employed in this story should be very disturbing to you. The challenge is to explain how to get out of its conclusion.

One possibility is to deny anthropic reasoning. But then we also have to deny all the ordinary everyday cases of anthropic reasoning that seem thoroughly reasonable (and necessary for rationality). (See here.)

We could deny that updating on our indexical evidence of our place in history actually has the effects that I’ve said it has. But this seems wrong as well. The calculation done here is exactly the type of calculation we’ve done in less controversial scenarios.

We could accept the conclusions. But that seems out of the question. Probabilities are supposed to be a useful tool we apply to make sense of a complicated world.
In the end, the world is not run by probabilities, but by physics. And physics doesn’t have clauses that permit cosmic rays to swerve out of the way for clever humans with plans of conquering the stars.

There’s actually one other way out. But I’m not super fond of it. This way out is to craft a new anthropic principle, specifically for the purpose of counteracting the anthropic considerations in cases like these. The name of this anthropic principle is the Self-Indication Assumption. This principle says that theories that predict more observers like you become more likely in virtue of that fact.

Self-Indication Assumption: The prior probability of a theory is proportional to the number of observers like you that it predicts will exist.

Suppose we have narrowed it down to two possible theories of the universe. Theory 1 predicts that there will be 1 million observers like you. Theory 2 predicts that there will be 1 billion observers like you. The self-indication assumption says that before we assess the empirical evidence for these two theories, we should set priors that make Theory 2 one thousand times more likely than Theory 1.

It’s important to note that this principle is not about updating on evidence; it’s about the setting of priors. If we update on our existence, this favors neither Theory 1 nor Theory 2. Both predict the existence of an observer like us with 100% probability, so the likelihood ratio is 1/1, and no update is performed. Instead, we can think of the self-indication assumption as saying the following: look at all the possible observers that you could be, across all theories of the world. Distribute your priors evenly across all these possible observers. Then, for each theory you’re interested in, compile these priors to determine its prior probability.

When I first heard of this principle, my reaction was “…WHY???” For whatever reason, it just made no sense to me as a procedure for setting priors.
But it must be said that this principle beautifully solves all of the problems I’ve brought up in this post. Remember that the improbability being wielded in every case was the improbability of existing in a world where there are many other future observers like you. The self-indication assumption tilts our priors toward these worlds in exactly the opposite direction, perfectly canceling out the anthropic update.

So, for instance, let’s take the very first thought experiment in this post. Compare a theory of the future in which 200 billion people exist total to a theory in which 2 trillion people exist total. We said that our birth order gives us a likelihood ratio of 10:1 in favor of the first theory. But now the self-indication assumption tells us that our prior odds should be 1:10, in favor of the second theory! Putting these two anthropic principles together, we get 1:1 odds. The two theories are equally likely! This seems refreshingly sane.

As far as I currently know, this is the only way out of the absurd results of the thought experiments I’ve presented in this post. The self-indication assumption seems really weird and unjustified to me, but I think a really good argument for it is just that it restores sanity to rationality in the face of anthropic craziness.

Some final notes: applying the self-indication assumption to the sleeping beauty problem (discussed here) makes you a thirder instead of a halfer. I previously defended the halfer position on the grounds that (1) the prior on Heads and Tails should be 50/50, and (2) Sleeping Beauty has no information to update on. The self-indication assumption leads to the denial of (1). Since there are different numbers of observers like you depending on whether the coin lands Heads or Tails, the prior odds should not be 1:1. Instead they should be 2:1 in favor of Tails, reflecting the fact that there are two possible “you”s if the coin lands Tails, and only one if the coin lands Heads.
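The cancellation in the 200-billion-versus-2-trillion comparison is just odds arithmetic, and can be checked directly:

```python
from fractions import Fraction

smaller_total = 200 * 10**9    # 200 billion total humans
larger_total = 2 * 10**12      # 2 trillion total humans

# Self-sampling update on birth order: likelihood ratio favoring the smaller total.
ssa_lr = Fraction(larger_total, smaller_total)          # 10:1

# Self-indication priors: proportional to observer counts, favoring the larger total.
sia_prior_odds = Fraction(smaller_total, larger_total)  # 1:10

print(ssa_lr, sia_prior_odds)    # 10 1/10
print(sia_prior_odds * ssa_lr)   # 1 -> the two effects exactly cancel
```

Note that the cancellation is exact for any pair of totals, since the two ratios are reciprocals by construction.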
In addition, while I’ve presented the thought experiments that show the positives of the self-indication assumption, there are cases where it gives very bizarre answers. I won’t go into them now, but I do want to plant a flag here to indicate that the self-indication assumption is by no means uncontroversial.

# Sleeping Beauty Problem

I’ve been talking a lot about anthropic reasoning, so it’s only fair that I present what’s probably the most well-known thought experiment in this area: the sleeping beauty problem. Here’s a description of the problem from Wiki:

Sleeping Beauty volunteers to undergo the following experiment and is told all of the following details: On Sunday she will be put to sleep. Once or twice, during the experiment, Beauty will be awakened, interviewed, and put back to sleep with an amnesia-inducing drug that makes her forget that awakening. A fair coin will be tossed to determine which experimental procedure to undertake:

• If the coin comes up heads, Beauty will be awakened and interviewed on Monday only.
• If the coin comes up tails, she will be awakened and interviewed on Monday and Tuesday.

In either case, she will be awakened on Wednesday without interview and the experiment ends. Any time Sleeping Beauty is awakened and interviewed she will not be able to tell which day it is or whether she has been awakened before. During the interview Beauty is asked: “What is your credence now for the proposition that the coin landed heads?”

There are two popular positions: the thirder position and the halfer position.

Thirders say: “Sleeping Beauty knows that she is in one of three situations: {Monday & Heads}, {Monday & Tails}, or {Tuesday & Tails}. All three of these situations are equally compatible with her experience (she can’t distinguish between them from the inside), so she should be indifferent about which one she is in.
Thus there is a 1/3 chance of each, implying that there is a 1/3 chance of Heads and a 2/3 chance of Tails.”

Halfers say: “The coin is fair, so there is a 1/2 chance of Heads and a 1/2 chance of Tails. When Sleeping Beauty wakes up, she gets no information that she didn’t have before (she would be woken up in either scenario). Since she has no new information, there is no reason to update her credences. So there is still a 1/2 chance of Heads and a 1/2 chance of Tails.”

I think that the Halfers are right. The anthropic information she could update on is the fact W = “I have been awakened.” We want to see what happens when we update our prior odds with respect to W. Using Bayes’ rule we get:

$\frac{P(H)}{P(T)} = \frac{1/2}{1/2} = 1 \\~\\ \frac{P(H \mid W)}{P(T \mid W)} = \frac{P(W \mid H)}{P(W \mid T)} \cdot \frac{P(H)}{P(T)} = \frac{1}{1} \cdot \frac{1/2}{1/2} = 1$

So $P(H \mid W) = \frac{1}{2}$ and $P(T \mid W) = \frac{1}{2}$.

The important feature of this calculation is that the likelihood ratio is 1. This is because both the theory that the coin landed Heads and the theory that it landed Tails predict with 100% confidence that Sleeping Beauty will be woken up. The fact that Sleeping Beauty is woken up twice if the coin comes up Tails and only once if it comes up Heads is, apparently, irrelevant to Bayes’ theorem.

However, Thirders have a very strong response up their sleeves: “Let’s imagine that every time Sleeping Beauty is right, she gets \$1. Now, suppose that Sleeping Beauty always says that the coin landed Tails. Then if she is right, she gets \$2… one dollar for each day that she is woken up. What if she always says that the coin landed Heads? Then if she is right, she only gets \$1. In other words, if the setup is rerun some large number of times, the Sleeping Beauty that always says Tails gets twice as much money as the Sleeping Beauty that always says Heads.
If Sleeping Beauty is indifferent between Heads and Tails, as you Halfers suggest, then she would not have any preference about which one to say. But she would be wrong! She is better off by thinking Tails is more likely… in particular, she should think that Tails is two times more likely than Heads!”
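The thirder’s betting argument is easy to check with a quick simulation; this sketch just replays the experiment many times and tallies the dollars earned per run by an always-Tails Beauty and an always-Heads Beauty:

```python
import random

random.seed(0)
runs = 100_000
earnings = {"always_tails": 0, "always_heads": 0}

for _ in range(runs):
    coin = random.choice(["Heads", "Tails"])
    awakenings = 1 if coin == "Heads" else 2   # woken once on Heads, twice on Tails
    # Beauty earns $1 on each awakening where her answer matches the coin.
    if coin == "Tails":
        earnings["always_tails"] += awakenings
    else:
        earnings["always_heads"] += awakenings

print(round(earnings["always_tails"] / runs, 1))  # about 1.0 per run
print(round(earnings["always_heads"] / runs, 1))  # about 0.5 per run
```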

This is a response along the lines of “rationality should not function as a handicap.” I am generally very fond of these arguments, but am uncomfortable with what they imply here. If the above reasoning is correct, then Bayes’ theorem tells us to take a position that leaves us worse off. And if this is true, then it seems we’ve found a flaw in using Bayes’ theorem as a guide to rational belief-formation!

But maybe this is too hasty. Is it really true that an expected value calculation using 1/2 probabilities will result in being indifferent between saying that the coin will land Heads and saying that it will land Tails?

Plausibly not. If the coin lands Tails, then you have twice as many opportunities to make money. In addition, since your qualitative experience is identical on both of these occasions, you should expect that whatever decision process you perform on Monday will be identical to the decision process on Tuesday. Thus if Sleeping Beauty is a timeless decision theorist, she will see her decisions on both days as a single decision. What will she calculate?

Expected value of saying Heads = 50% chance of Heads $\cdot$ \$1 gain for saying Heads on Monday + 50% chance of Tails $\cdot$ \$0 = \$0.50

Expected value of saying Tails = 50% chance of Heads $\cdot$ \$0 + 50% chance of Tails $\cdot$ \$2 gain for saying Tails on both days = \$1

So the expected value of saying Tails is still higher even if you think that the probabilities of Heads and Tails are equal, provided that you know about subjunctive dependence and timeless decision theory!

# Infinities in the anthropic dice killer thought experiment

It’s time for yet another post about the anthropic dice killer thought experiment. 😛

In this post, I’ll point out some features of the thought experiment that have gone unmentioned in this blog thus far. Perhaps it is in these features that we can figure out how to think about this the right way.

First of all, there are lots of hidden infinities in the thought experiment. And as we’ve seen before, where there are infinities, things start getting really wacky. Perhaps some of the strangeness of the puzzle can be chalked up to these infinities.

For instance, we stipulated that the population from which people are being kidnapped is infinite. This was to allow for the game to go on for arbitrarily many rounds, but it leads to some trickiness. As we saw in the last post, it becomes important to calculate the probability of a particular individual being kidnapped if randomly drawn from the population. But… what is this probability if the population is infinite? The probability of selecting a particular person from an infinite population is just like the probability of picking 5 if randomly selecting from all natural numbers: zero!

Things get a little conceptually tricky here. Imagine that you’re randomly selecting a real number between 0 and 1. The probability that you select any particular number is zero. But at the same time, you will end up selecting some number. Whichever number you end up selecting is a number that you would have said had a 0% chance of being selected! For situations like these, the term “almost never” is used. Rather than saying that any particular number is impossible, you say that it will “almost never” be picked. While this linguistic trick might make you feel less uneasy about the situation, there still seems to be some remaining confusion to be resolved here.

So in the case of our thought experiment, no matter how many rounds the game ends up going on, you have a 0% chance of being kidnapped. At the same time, by stipulation you have been kidnapped. Making sense of this is only the first puzzle. The second one is to figure out if it makes sense to talk about some theories making it more likely than others that you’ll be kidnapped (is $\frac{11}{\infty}$ smaller than $\frac{111}{\infty}$?)

An even more troubling infinity lies in the expected number of people that are kidnapped. No matter how many rounds end up being played, only a finite number of people are ever kidnapped. But let’s calculate the expected number of people that play the game.

$Number \ by \ n^{th} \ round = \frac{10^n - 1}{9} \\~\\ \sum\limits_{n=1}^{\infty} { \frac{35^{n-1}}{36^n} \cdot \frac{10^n - 1}{9} }$

But wait, this sum diverges! To see this, let’s just for a moment consider the expected number of people in the last round:

$Number \ in \ n^{th} \ round = 10^{n-1} \\~\\ \sum\limits_{n=1}^{\infty} { \frac{35^{n-1}}{36^n} \cdot 10^{n-1} } = \frac{1}{36} \sum\limits_{n=1}^{\infty} { \left(\frac{350}{36}\right)^{n-1} }$

Since $\frac{350}{36} > 1$, this sum diverges. So the expected number of people in the last round is infinite (even though the last round always contains a finite number of people). Correspondingly, the expected total number of people kidnapped is infinite.
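You can watch the divergence happen numerically; the partial sums of the last-round expectation blow up almost immediately:

```python
# Expected number of people in the last round, truncated at max_rounds.
# Term n: P(game ends on round n) x (people in round n)
#       = (35^(n-1) / 36^n) * 10^(n-1) = (1/36) * (350/36)^(n-1)
def partial_expectation(max_rounds):
    return sum((35 ** (n - 1) / 36 ** n) * 10 ** (n - 1)
               for n in range(1, max_rounds + 1))

for rounds in (5, 10, 20):
    print(rounds, partial_expectation(rounds))
# Each extra round multiplies the dominant term by roughly 350/36: no convergence.
```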

Why might these infinities matter? Well, one reason is that there is a well known problem with playing betting games against sources with infinite resources. Consider the Martingale betting system:

A gambler makes a bet of \$1 at some odds. If they win, then good for them! Otherwise, if they lose, they bet \$2 at the same odds. If they lose this time, they double down again, betting \$4, and so on until they eventually win. The outcome is that by the time they win, they have lost $\$(1 + 2 + 4 + \dots + 2^n)$ and gained $\$2^{n+1}$, for a net gain of \$1. In other words, no matter what odds they are betting on, this betting system guarantees a gain of \$1 with probability 100%. However, this guaranteed \$1 only applies if the gambler can continue doubling down arbitrarily long. If they have a finite amount of money, then at some point they can no longer double down, and they suffer an enormous loss. A gambler with finite resources stands a very good chance of gaining \$1 and a very tiny chance of losing massively. If you calculate the expected gain, it turns out to be no better than what you expect from any ordinary betting system.
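Here is a minimal sketch of that claim for a fair game, computed exactly with rational arithmetic; the point is that the expected gain is exactly zero for any finite bankroll:

```python
from fractions import Fraction

def martingale_expected_gain(max_bets, p_lose=Fraction(1, 2)):
    """Exact expected gain of the Martingale system for a gambler whose
    bankroll covers at most max_bets consecutive doubled bets ($1, $2, $4, ...)."""
    p_bust = p_lose ** max_bets       # probability that every bet loses
    total_loss = 2 ** max_bets - 1    # $1 + $2 + ... + $2^(max_bets - 1)
    # Win $1 unless every bet loses, in which case lose the whole ladder.
    return (1 - p_bust) * 1 - p_bust * total_loss

print(martingale_expected_gain(5))   # 0
print(martingale_expected_gain(30))  # 0
```

The tiny bust probability ($2^{-n}$) and the enormous bust loss ($2^n - 1$) cancel exactly, no matter how deep the (finite) pockets.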

Summing up: With finite resources, continually doubling down gives no advantage on average. But with infinite resources, continually doubling down gives a guaranteed profit. Hopefully you see the similarity to the dice killer thought experiment. With an infinite population to draw from, the killer can keep “doubling down” (actually “decupling” down) until they finally get their “payout”: killing all of their current captives. On the other hand, with a finite population, the killer eventually loses the ability to gather a new group ten times the size of the previous one, and lets everybody free. In this case, exactly as with the Martingale system, the chance of death for a kidnappee comes out to the prior value of 1/36.

What this indicates is that at least some of the weirdness of the dice killer scenario can be chalked up to the exploitability of infinities by systems like the Martingale system. If you have been kidnapped by the dice killer, you should think that your odds are 90% only if you know you are drawn from an infinite population. Otherwise, your odds should come out to 1/36.

But now consider the following: If you are a casino owner, should you allow into your casino a person with infinite money? Clearly not! It doesn’t matter how much of a bias the games in the casino give in favor of the house. An infinitely wealthy person can always exploit this infinity to give themselves an advantage.

But what about allowing a person with infinite money into your casino to place a single bet? In this case, I think that the answer is yes, you should allow them. After all, with only a finite number of bets, the odds still come out in favor of the house. This is actually analogous to the original dice killer puzzle! You are only selected in one round, and know that you will not be selected at any other time. So perhaps the infinity does not save us here.

One final point. It looks like a lot of the weirdness of this thought experiment is the same type of weirdness as you get from infinitely wealthy people using the Martingale betting system. But now we can ask: Is it possible to construct a variant of the dice killer thought experiment in which the anthropic calculation differs from the non-anthropic calculation, AND the expected number of people kidnapped is finite? It doesn’t seem obvious to me that this is impossible. Since the expected number of captives takes the form of an infinite sum with the number of people by the Nth round multiplied by roughly $(\frac{35}{36})^N$, all that is required is that the number of people by the Nth round be less than $(\frac{36}{35})^N$. Then the anthropic calculation should give a different answer from the non-anthropic calculation, and we can place the chance of escape in between these two. Now we have a finite expected number of captives, but a reversal in decision depending on whether you update on anthropic evidence or not. Perhaps I’ll explore this more in future posts.

# Not a solution to the anthropic dice killer puzzle

I recently came up with what I thought was a solution to the dice killer puzzle. It turns out that I was wrong, but in the process of figuring this out I discovered a few subtleties in the puzzle that I had missed first time around.

First I’ll repost the puzzle here:

One piece of information that you have is that you are aware of the maniacal schemes of your captor. His plans began by capturing one random person. He then rolled a pair of dice to determine their fate. If the dice landed snake eyes (both 1), then the captive would be killed. If not, then they would be let free.

But if they are let free, the killer will search for new victims, and this time bring back ten new people and lock them alone in rooms. He will then determine their fate just as before, with a pair of dice. Snake eyes means they die, otherwise they will be let free and he will search for new victims.

His murder spree will continue until the first time he rolls snake eyes. Then he will kill the group that he currently has imprisoned and retire from the serial-killer life.

Now, you become aware of a risky way out of the room you are locked in, leading to freedom. The chances of surviving this escape route are only 50%. Your choices are thus either (1) to traverse the escape route with a 50% chance of survival or (2) to just wait for the killer to roll his dice, and hope that it doesn’t land snake eyes.

What should you do?

As you’ll recall, there are two possible estimates of the probability of the dice landing snake eyes: 1/36 and 90%. Briefly, the arguments for each are…

Argument 1: The probability of the dice landing snake eyes is 1/36. If the dice land snake eyes, you die. So the probability that you die is 1/36.

Argument 2: The probability that you are in the last round is above 90%. Everybody in the last round dies. So the probability that you die is above 90%.

The puzzle is to explain what is wrong with the second argument, given its unintuitive consequences. So, here’s an attempt at a resolution!

Imagine that you find out that you’re in the fourth round with 999 other people. The probability that you’re interested in is the probability that the fourth round is the last round (which is equivalent to the fourth round being the round in which you get snake-eyes and thus die). To calculate this, we want to consider all possible worlds (i.e. all possible number of rounds that the game might go for) and calculate the probability weight for each.

In other words, we want to be able to calculate P(Game ends on the Nth round) for every N. We can calculate this a priori by just considering the conditions for the game ending on the Nth round. This happens if the dice roll something other than snake eyes N-1 times and then snake eyes once, on the final round. Thus the probability should be:

$P(Game \ ends \ on \ N^{th} \ round) = \left(\frac{35}{36}\right)^{N-1} \cdot \frac{1}{36}$

Now, to calculate the probability that the game ends on the fourth round, we just plug in N = 4 and we’re done!
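As a quick numerical sketch (using the prior $(\frac{35}{36})^{N-1} \cdot \frac{1}{36}$ for the game ending on round N):

```python
def p_game_ends_on_round(n):
    """Prior probability that the first snake eyes comes on round n."""
    return (35 / 36) ** (n - 1) * (1 / 36)

print(round(p_game_ends_on_round(4), 4))  # 0.0255
# The probabilities over all rounds sum to 1, as a prior should:
print(round(sum(p_game_ends_on_round(n) for n in range(1, 2000)), 6))  # 1.0
```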

But hold on. There’s an obvious problem with this approach. If you know that you’ve been kidnapped on the fourth round, then you should have zero credence that the game ended on the third, second, or first rounds. But the probability calculation above gives a non-zero credence to each of these scenarios! What’s gone wrong?

Answer: While the probability above is the right prior probability for the game progressing to the Nth round, what we actually want is the posterior probability, conditioned on the information that you have about your own kidnapping.

In other words, we’re not interested in the prior probability P(Game ends on the Nth round). We’re interested in the conditional probability P(Game ends on the Nth round | I was kidnapped in the fourth round). To calculate this requires Bayes’ rule:

$P(N \ total \ rounds \mid You \ are \ in \ the \ fourth \ round) = \frac{P(You \ are \ in \ the \ fourth \ round \mid N \ total \ rounds) \cdot P(N \ total \ rounds)}{P(You \ are \ in \ the \ fourth \ round)}$

The top term P(You are in the fourth round | N total rounds) is zero whenever N is less than four, which is a good sign. But what happens when N is ≥ 4? Does the probability grow with N or shrink?

Intuitively, we might think that if there are a very large number of rounds, then it is very unlikely that we are in the fourth one. Taking into account the 10x growth in the number of people each round, the theory that there are N rounds, for any N > 4, strongly predicts that you are in the Nth round, not the fourth. The larger N is, the more strongly it predicts that you are not in the fourth round. In other words, the update on your being in the fourth round strongly favors possible worlds in which the fourth round is the last one.

But this is not the whole story! There’s another update to be considered. Remember that in this setup, you exist as a member of a boundless population and are at some point kidnapped. We can ask the question: How likely is it that you would have been kidnapped if there were N rounds?  Clearly, the more rounds there are before the game ends, the more people are kidnapped, and so the higher chance you have of being kidnapped in the first place! This means that we should expect it to be very likely that the fourth round is not the last round, because worlds in which the fourth round is not the last one contain many more people, thus making it more likely that you would have been kidnapped at all.

In other words, we can break our update into two components: (1) that you were kidnapped, and (2) that it was in the fourth round that you were kidnapped. The first of these updates strongly favors theories in which you are not in the last round. The second strongly favors theories in which you are in the last round. Perhaps, if we’re lucky, these two updates cancel out, leaving us with only the prior probability based on the objective chance of the dice rolling snake eyes (1/36)!

Recapping: If we know which round we are in, then when we update on this information, the probability that this round is the last one is just equal to the objective chance that the dice roll lands snake eyes (1/36). Since this should be true no matter what particular round we happen to be in, we should be able to preemptively update on being in the Nth round (for some N) and bring our credence to 1/36.

This is the line of thought that I had a couple of days ago, which I thought pointed the way to a solution to the anthropic dice killer puzzle. But unfortunately… I was wrong. It turns out that even when we consider both of these updates, we still end up with a probability > 90% of being in the last round.

Here’s an intuitive way to think about why this is the case.

In the solution I wrote up in my initial post on the anthropic dice killer thought experiment, I gave the following calculation:

$\sum\limits_{n=1}^{\infty} { \frac{35^{n-1}}{36^n} \cdot \frac{9 \cdot 10^{n-1}}{10^n - 1} }$

Basically, we look at the fraction of people that die if the game ends on the Nth round, calculate the probability of the game ending on the Nth round, and then average the fraction over all possible N. This gives us the average fraction of people that die in the last round.

We now know that this calculation was wrong. Where I went wrong was in calculating the chance of getting snake eyes in the nth round: the probability I wrote down was the prior probability, whereas what we want is the posterior probability, after performing an anthropic update on the fact of your own kidnapping.

So maybe if we plug in the correct values for these probabilities, we’ll end up getting a saner answer!

Unfortunately, no. The fraction of people that die starts at 100% and then gradually decreases, converging at infinity to 90% (the limit of $\frac{1000...}{1111...}$ is .9). This means that no matter what probabilities we plug in there, the average fraction of people will be greater than 90%. (If the possible values of a quantity are all greater than 90%, then the average value of this quantity cannot possibly be less than 90%.)
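This claim is easy to verify exactly; the sketch below computes each round’s death fraction with rational arithmetic, to avoid floating-point fuzz near 0.9:

```python
from fractions import Fraction

def fraction_in_last_round(n):
    """Fraction of all kidnapped people who are in the final round,
    if the game ends on round n (round sizes 1, 10, 100, ...)."""
    people_in_last = 10 ** (n - 1)
    people_total = (10 ** n - 1) // 9    # 1 + 10 + ... + 10^(n-1)
    return Fraction(people_in_last, people_total)

fracs = [fraction_in_last_round(n) for n in range(1, 50)]
print(fracs[0])                                  # 1 -> 100% if the game ends at once
print(all(f > Fraction(9, 10) for f in fracs))   # True: every value exceeds 90%
```

Since every possible value of the quantity exceeds 90%, no choice of weights can bring the average below 90%.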

This means that without even calculating the precise posterior probabilities, we can confidently say that the average probability of death must be greater than 90%. And therefore our proposed solution fails, and the mystery remains.

It’s worth noting that even if our calculation had come out with the conclusion that 1/36 was the actual average chance of death, we would still have a little explaining to do. Namely, it actually is the case that the average person does better by trying to escape (i.e. acting as if the probability of their death is greater than 90%) than by staying around (i.e. acting as if the probability of their death is 1/36).

This is something that we can say with really high confidence: accepting the apparent anthropic calculation of 90% leaves you better off on average than rejecting it. On its own, this is a very powerful argument for accepting 90% as the answer. The rational course of action should not be one that causes us to lose where winning is an option.