A stranger comes up to you and offers to play the following game with you: “I will roll a pair of dice. If they land snake eyes (i.e. they both land 1), you give me one dollar. Otherwise, if they land anything else, I give you a dollar.”
Do you play this game?
Here’s an intuitive response: Yes, of course you should! You have a 35/36 chance of gaining $1, and only a 1/36 chance of losing $1. You’d have to be quite risk-averse to refuse those odds.
What if the stranger tells you that they are giving this same bet to many other people? Should that change your calculation?
Intuitively: No, of course not! It doesn’t matter what else the stranger is doing with other people.
What if they tell you that they’ve given this offer to people in the past, and might give the offer to others in the future? Should that change anything?
Once again, it seems intuitively not to matter. The offers given to others simply have nothing to do with you. What matters are your possible outcomes and the probabilities of each of these outcomes. And what other people are doing has nothing to do with either of these.
Now imagine that the stranger is playing the game in the following way: First they find one person and offer to play the game with them. If the dice land snake eyes, then they collect a dollar and stop playing the game. Otherwise, they find ten new people and offer to play the game with them. Same as before: snake eyes, the stranger collects $1 from each and stops playing, otherwise he moves on to 100 new people. Et cetera forever.
We now ask the question: How does the average person given the offer do if they take the offer? Well, no matter how many rounds of offers the stranger gives, at least 90% of people end up in his last round. That means that at least 90% of people end up giving over $1 and at most 10% gain $1. Taking the offer is clearly net negative on average!
Think about it this way: Imagine a population of individuals who all take the offer, and compare them to a population that all reject the offer. Which population does better on average?
For the population who takes the offer, the average person loses money. An upper bound on the average payoff is 10% × ($1) + 90% × (−$1) = −$0.80. For the population that rejects the offer, nobody gains or loses money: the average outcome is exactly $0. $0 is better than −$0.80, so the strategy of rejecting the offer is better, on average!
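To make the population-level claim concrete, here's a minimal Python simulation (my own sketch, not part of the original argument) that runs many independent instances of the stranger's scheme and tallies how the offerees fare in aggregate:

```python
import random

def run_scheme():
    """One full run of the stranger's scheme. Returns (winners, losers):
    offerees who gained $1 vs. offerees in the final group, who each lost $1."""
    group_size = 1
    winners = 0
    while True:
        # Roll a pair of dice; snake eyes has probability 1/36.
        if random.randint(1, 6) == 1 and random.randint(1, 6) == 1:
            return winners, group_size   # the final group each hand over $1
        winners += group_size            # everyone in this group gains $1
        group_size *= 10                 # the stranger moves on to 10x as many people

total_winners = total_losers = 0
for _ in range(100_000):
    w, l = run_scheme()
    total_winners += w
    total_losers += l

total = total_winners + total_losers
print(total_losers / total)                    # fraction caught in a final round: just over 0.9
print((total_winners - total_losers) / total)  # average payoff per offeree: at or below -$0.80
```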
This thought experiment is very closely related to the dice killer thought experiment. I think of it as a variant that pushes our anti-anthropic-reasoning intuitions. It just seems really wrong to me that if somebody comes up to you and offers you this deal that has a 35/36 chance of paying out you should reject it. The details of who else is being offered the deal seem totally irrelevant.
But of course, all of the previous arguments I’ve made for anthropic reasoning apply here as well. And it is just true that the average person that rejects the offer does better than the average person that accepts it. Perhaps this is just another bullet that we have to bite in our attempt to formalize rationality!
(This post is the culmination of my last week of posts on anthropics and conservation of expected evidence.)
In this post, I described how anthropic reasoning can apparently give you a way to update on theories of consciousness. This is already weird enough, but I want to make things a little weirder. I want to present an argument that in fact anthropic reasoning implies that we should be functionalists about consciousness.
But first, a brief recap (for more details see the post linked above):
Whenever this experiment is run, roughly 90% of experimental subjects observe snake eyes, and roughly 10% observe not snake eyes. What this means is that 90% of the people update in favor of functionalism (by a factor of 9), and only 10% of people update in favor of substrate dependence theory (also by a factor of 9).
Now suppose that we have a large population that starts out completely agnostic on the question of functionalism vs. substrate dependence. That is, each individual's prior odds between the two theories are 1:1.
Now imagine that we run arbitrarily many dice-killer experimental setups on the population. We would see an upwards drift in the average beliefs of the population towards functionalism. And in the limit of infinite experiments, we would see complete convergence towards functionalism as the correct theory of consciousness.
Now, the only remaining ingredient is what I’ve been going on about the past two days: if you can predict beforehand that a piece of evidence is going to make you on average more functionalist, then you should preemptively update in favor of functionalism.
What we end up with is the conclusion that considering the counterfactual infinity of experimental results we could receive, we should conclude with arbitrarily high confidence that functionalism is correct.
To be clear, the argument is the following:
If we were to be members of a population that underwent arbitrarily many dice-killer trials, we would converge towards functionalism.
Conservation of expected evidence: if you can predict beforehand which direction some observation would move you, then you should pre-emptively adjust your beliefs in that direction.
Thus, we should preemptively converge towards functionalism.
Premise 2 follows from the axioms of probability theory. It is more or less the statement that you should update your beliefs with evidence, even if this evidence is counterfactual information about the possible results of future experiments.
(If this sounds unintuitive to you at all, consider the following thought experiment: We have two theories of cosmology, one in which 99% of people live in Region A and 1% in Region B, and the other in which 1% live in Region A and 99% in Region B. We now ask where we expect to find ourselves. If we expect to find ourselves in Region A, then we must have higher credence in the first theory than the second. And if we initially did not have this higher credence, then considering the counterfactual question “Where would I find myself if I were to look at which region I am in?” should cause us to update in favor of the first theory.)
Altogether, this argument looks really bulletproof to me. And yet its conclusion seems very wrong.
Can we really conclude with arbitrarily high certainty that functionalism is correct by just going through this sort of armchair reasoning from possible experimental results that we will never do? Should we now be hardcore functionalists?
I’m not quite sure yet what the right way to think about this is. But here is one objection I’ve thought of.
We have only considered one possible version of the dice killer thought experiment (in which the experimenter starts off with 1 human, then chooses 1 human and 9 androids, then 1 human and 99 androids, and so on). In this version, observing snake eyes was evidence for functionalism over substrate dependence theory, which is what causes the population-wide drift towards functionalism.
We can ask, however, if we can construct a variant of the dice killer thought experiment in which snake eyes counts as evidence for substrate dependence theory over functionalism. If so, then we could construct an experimental setup that we can predict beforehand will end up with us converging with arbitrary certainty to substrate dependence theory!
Let’s see how this might be done. We’ll imagine the set of all variants on the thought experiment (that is, the set of all choices the dice killer could make about how many humans and androids to kidnap in each round.)
For ease of notation, we’ll abbreviate functionalism and substrate dependence theory as F and S respectively.
And we’ll also introduce a convenient notation for calculating the total number of humans and the total number of androids ever kidnapped by round N.
Now, we want to calculate the probability of snake eyes given functionalism in this general setup, and compare it to the probability of snake eyes given substrate dependence theory. The first step will be to consider the probability of snake eyes if the experiment happens to end on the nth round, for some n. This is just the number of individuals in the last round divided by the total number of kidnapped individuals.
Now, we calculate the average probability of snake eyes (the average fraction of individuals in the last round).
The question is thus whether we can find a pair of sequences (the numbers of humans and androids kidnapped in each round) such that the probability of snake eyes given substrate dependence comes out larger than the probability of snake eyes given functionalism.
It seems hard to imagine that there are no such pairs of sequences that satisfy this inequality, but thus far I haven’t been able to find an example. For now, I’ll leave it as an exercise for the reader!
If there are no such pairs of sequences, then it is tempting to take this as extremely strong evidence for functionalism. But I am concerned about this whole line of reasoning. What if there are a few such pairs of sequences? What if there are far more in which functionalism is favored than those in which substrate dependence is favored? What if there are an infinity of each?
While I buy each step of the argument, it seems wrong to say that the right thing to do is to consider the infinite set of all possible anthropic experiments you could do, and then somehow average over the results of each to determine the direction in which we should update our theories of consciousness. Indeed, I suspect that any such averaging procedure would be vulnerable to arbitrariness in the way that the experiments are framed, such that different framings give different results.
At this point, I’m pretty convinced that I’m making some fundamental mistake here, but I’m not sure exactly where this mistake is. Any help from readers would be greatly appreciated. 🙂
Bayes’ rule is a pretty simple piece of mathematics, and it’s extraordinary to me the amount of deep insight that can be plumbed by looking closely at it and considering its implications.
Principle 1: The surprisingness of an observation is proportional to the amount of evidence it provides.
Evidence that you expect to observe is weak evidence, while evidence that is unexpected is strong evidence.
This follows directly from Bayes’ theorem: P(H | E) = P(E | H) P(H) / P(E).
If E is very unexpected, then P(E) is very small. This puts an upwards pressure on the posterior probability, entailing a large belief update. If E is thoroughly unsurprising, then P(E) is near 1, which means that this upward pressure is not there.
A more precise way to say this is to talk about how surprising evidence is given a particular theory. Dividing both sides of Bayes’ theorem by P(H), we get: P(H | E) / P(H) = P(E | H) / P(E).
On the left is a term that (1) is large when E provides strong evidence for H, (2) is near zero when it provides strong evidence against H, and (3) is near 1 when it provides weak evidence regarding H.
On the right is a term that (1) is large if E is very unsurprising given H, (2) is near zero when E is very surprising given H, and (3) is near 1 when E is not made much more surprising or unsurprising by H.
What we get is that (1) E provides strong evidence for H when E is very unsurprising given H, (2) E provides strong evidence against H when it is very surprising given H, and (3) E provides weak evidence regarding H when it is not much more surprising or unsurprising given H.
This makes a lot of sense when you think through it. Theories that make strong and surprising predictions that turn out to be right are given stronger evidential weight than theories that make weak and unsurprising predictions.
Principle 2: Conservation of expected evidence
I stole the name of this principle from Eliezer Yudkowsky, who wrote about this here.
The idea here is that for any expectation you have of receiving evidence for a belief, you should have an equal and opposite expectation of receiving evidence against that belief. It cannot be the case that all possible observations support a theory. If some observations support a theory, then there must be some other observations that undermine it. And the amount by which those observations would undermine the theory exactly balances the expected evidential support for it.
Proof of this: by the law of total probability, P(H) = P(H | E) P(E) + P(H | -E) P(-E), which rearranges to [P(H | E) − P(H)] P(E) + [P(H | -E) − P(H)] P(-E) = 0.
The first term is the change in credence in H after observing E, weighted by the probability of E, and the second is the change in credence in H after observing -E, weighted by the probability of -E. Thus, the expected change in credence is exactly zero.
Putting these together, we see that a strong expectation corresponds to weak evidence, and this strong expectation of weak evidence also corresponds to a weak expectation of strong evidence!
I realize that I’ve been a little unclear in my last few posts. I presupposed a degree of familiarity with anthropic reasoning that most people don’t have. I want to remedy that by providing a short explanation of what anthropic reasoning is and why it is useful.
First of all, one thing that may be confusing is that the term ‘anthropic reasoning’ is used in multiple very different ways. In particular, its most common usage is probably in arguments about the existence of God, where it is sometimes presented as an argument against the evidential force of fine tuning. I have no interest in this, so please don’t take me to be using the term this way. My usage is identical with that of Nick Bostrom, who wrote a fantastic book about anthropic reasoning. You’ll see precisely what this usage entails shortly, but I just want to plant a flag now in case I use the word ‘anthropic’ in a way that you are unfamiliar with.
Good! Now, let’s start with a few thought experiments.
Suppose that the universe consists of one enormous galaxy, divided into a central region and an outer region. The outer region is densely populated with intelligent life, containing many trillions of planetary civilizations at any given moment. The inner region is hostile to biology, and at any given time only has a few hundred planetary civilizations. It is impossible for life to develop beyond the outer region of the galaxy.
Now, you are a member of a planetary civilization that knows all of this, but doesn’t know its location in the galaxy. You reason that it is:
(a) As likely that you are in the central region as it is that you are in the outer region
(b) More likely
(c) Less likely
Suppose that the universe consists of one galaxy that goes through life phases. In its early phase, life is very rare and the galaxy is typically populated by only a few hundred planetary civilizations. In its middle phase, life is plentiful and the galaxy is typically populated by billions of planetary civilizations. And in its final phase, which lasts for the rest of the history of the universe, it is impossible for life to evolve.
You are born into a planetary civilization that knows all of this, but doesn’t know what life phase the galaxy is in. You reason that it is:
(a) As likely that you are in the early phase as the middle phase
(b) More likely
(c) Less likely
You are considering two competing theories of cosmology. In Cosmology X, 1% of life exists in Region A and 99% in Region B. In Cosmology Y, 99% of life is in Region A and 1% in Region B. You currently don’t know which region you are in, and have equal credence in Cosmology X and Cosmology Y.
Now you perform an experiment that locates yourself in the universe. You find that you are in Region A. How should your beliefs change?
(a) They should stay the same
(b) Cosmology X becomes more likely than Cosmology Y
(c) Cosmology Y becomes more likely than Cosmology X
If you answered (c) for all three, then congratulations, you’re already an expert anthropic reasoner!
What we want to do is explain why (c) was the right answer in all three cases, and see if we can unearth any common principles. You might think that this is unnecessary; after all, aren’t we just using a standard application of Bayes’ theorem? Sort of, but there’s a little more going on here. Consider, for instance, the following argument:
1. Most people have property X,
2. Therefore, I probably have property X.
Ignoring the base rate fallacy here, there is an implicit assumption involved in the jump from 1 to 2. This assumption can be phrased as follows:
I should reason about myself as if I am randomly sampled from the set of all people.
A similar principle turns out to be implicit in the reasoning behind our answers to the three starting questions. For question 1, it was something like
I should reason about myself as if I am randomly sampled from the set of all intelligent organisms in the universe at this moment.
For 2, it might be
I should reason about myself as if I am randomly sampled from the set of all intelligent organisms in the history of the universe.
And for 3, it is pretty much the same as 1:
I should reason about myself as if I am randomly sampled from all intelligent organisms in the universe.
These various sampling assumptions really amount to the notion that we should reason about ourselves the same way we reason about anything else. If somebody hands us a marble from an urn that contains 99% black marbles, (and we have no other information) we should think this marble has a 99% chance of being black. If we learn that 99% of individuals like us exist in Region A rather than Region B (and we have no other information), then we should think that we have a 99% chance of being in Region A.
In general, we can assert the Self-Sampling Assumption (SSA):
SSA: In the absence of more information, I should reason about myself as if I am randomly sampled from the set of all individuals like me.
The “individuals like me” is what gives this principle the versatility to handle all the various cases we’ve discussed so far. It’s slightly vague, but will do for now.
And now we have our first anthropic principle! We’ve seen how eminently reasonable this principle is in the way that it handles the cases we started with. But at the same time, accepting this basic principle pretty quickly leads to some unintuitive conclusions. For instance:
It’s probably not the case that there are other intelligent civilizations that have populations many times larger than ours (for instance, galactic societies).
It’s probably not the case that we exist in the first part of a long and glorious history of humanity in which we expand across space and populate the galaxy (this is called the Doomsday argument).
On average, you are probably pretty average in most ways. (Though there might be a selection effect to be considered in who ends up regularly reading this blog.)
These are pretty dramatic conclusions for a little bit of armchair reasoning! Can it really be that we can assert the extreme improbability of a glorious future and the greater likelihood of doomsday from simply observing our birth order in the history of humanity? Can we really draw these types of conclusions about the probable distributions of intelligent life in our universe from simply looking at facts about the size of our species?
It is tempting to just deny that this reasoning is valid. But to do so is to reject the simple and fairly obvious-seeming principle that justified our initial conclusions. Perhaps we can find some way to accept (c) as the answer for the three questions we started with while still denying the three conclusions I’ve just listed, but it’s not at all obvious how.
Just to drive the point a little further, let’s look at the second of these conclusions – the Doomsday argument – again. The argument is essentially this:
Consider two theories of human history. In Theory 1, humans have a brief flash of exponential growth and planetary domination, but then go extinct not much later. In this view, we (you and me) are living in a fairly typical point in the history of humanity, existing near its last few years when its population is greatest.
In Theory 2, humans continue to expand and expand, spreading civilization across the solar system and eventually the galaxy. In this view, the future of humanity is immense and glorious, and involves many trillions of humans spread across hundreds or thousands of planets for many hundreds of thousands of years.
We’d all like Theory 2 to be the right one. But when we consider our place in history, we must admit that it seems far less likely that we are living in the very tiny period of human history in which we still exist on one planet than that we are living at the height of human history, when most people live.
By analogy, imagine a bowl filled with numbered marbles. We have two theories about the number of marbles in the bowl. Theory 1 says that there are 10 marbles in the bowl. Theory 2 says that there are 10,000,000. Now we draw a marble and see that it is numbered 7. How should this update our credences in these two theories?
Well, on Theory 2, getting a 7 is one million times less likely than it is on Theory 1. So Theory 1 gets a massive evidential boost from the observation. In fact, if we consider the set of all possible theories of how many marbles there are in the jar, the greatest update goes to the theory that says that there are exactly 7 marbles. Theories that say any fewer than 7 are made impossible by the observation, and theories that say more than 7 are progressively less likely as the number goes up.
This is exactly analogous to our birth order in the history of humanity. The self-sampling assumption says that given that you are a human, you should treat yourself as if you are randomly sampled from the set of all humans there will ever be. If you are, say, the one trillionth human, then the most likely theory is that there are not many more than a trillion humans that will ever exist. And theories that say there will be fewer than a trillion humans are ruled out definitively by the observation. Comparing the theory that says there will be a trillion trillion humans throughout history to the theory that says there will be a trillion humans throughout history, the first is a trillion times less likely!
In other words, applying the self-sampling assumption to your birth order in the history of humanity, we update in favor of a shortly upcoming doomsday. To be clear, this is not the same as saying that doomsday soon is inevitable and that all other sources of evidence for doomsday or not-doomsday are irrelevant. This is just another piece of evidence to be added to the set of all evidence we have when drawing inferences about the future of humanity, albeit a very powerful one.
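As a quick sanity check on the arithmetic above, here's a small Python sketch (my own illustration) of both updates, the marble draw and the birth-order analogy:

```python
# Likelihood of drawing the marble numbered 7 under each theory (uniform draw).
p_draw7_theory1 = 1 / 10          # Theory 1: 10 marbles in the bowl
p_draw7_theory2 = 1 / 10_000_000  # Theory 2: 10,000,000 marbles

# Starting from even prior odds, the posterior odds equal the likelihood ratio.
print(p_draw7_theory1 / p_draw7_theory2)   # ~1,000,000 in favor of Theory 1

# Same logic for birth order: a given birth rank is far likelier if a trillion
# humans ever exist than if a trillion trillion do.
p_rank_small = 1 / 1e12   # P(your particular birth rank | 10^12 humans total)
p_rank_large = 1 / 1e24   # P(your particular birth rank | 10^24 humans total)
print(p_rank_small / p_rank_large)   # 1e+12: the smaller total is favored a trillionfold
```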
Okay, great! So far we’ve just waded into anthropic reasoning. The self-sampling assumption is just one of a few anthropic principles that Nick Bostrom discusses, and there are many other mind boggling implications of this style of reasoning. But hopefully I have whetted your appetite for more, as well as given you a sense that this style of reasoning is both nontrivial to refute and deeply significant to our reasoning about our circumstance.
One interesting fact about Bayesianism is that you should never be able to predict beforehand how a piece of evidence will change your credences.
For example, sometimes I think that the more I study economics, the more I will become impressed by the power of free markets, and that therefore I will become more of a capitalist. Another example is that I often think that over time I will become more and more moderate in many ways (this is partially just because of induction on the direction of my change – it seems like when I’ve held extreme beliefs in the past, these beliefs tend to have become more moderate over time.)
Now, in these two cases I might be right as a matter of psychology. It might be the case that if I study economics, I’ll be more likely to end up a supporter of capitalism than if I don’t. It might be the case that as time passes, I’ll become increasingly moderate until I die out of boredom with my infinitely bland beliefs.
But even if I’m right about these things as a matter of psychology, I’m wrong about them as a matter of probability theory. It should not be the case that I can predict beforehand how my beliefs will change, for if it were, then why wait to change them? If I am really convinced that studying economics will make me a hardcore capitalist, then I should update in favor of capitalism in advance.
I think that this is a pretty common human mistake to make. Part of an explanation for this might be based on the argumentative theory of reason – the idea that human rationality evolved as a method of winning arguments, not necessarily as a way of being well-calibrated in our pursuit of truth. If this is right, then it would make sense that we would only want to hold strong beliefs when we can confidently display strong evidence for them. It’s not that easy to convincingly argue for your position when part of your justification for it is something like “Well I don’t have the evidence yet, but I’ve got a pretty strong hunch that it’ll come out in this direction.”
Another factor might be the way that personality changes influence belief changes. It might be that when we say “I will become more moderate in beliefs as I age”, we’re really saying something like “My personality will become less contrarian as I age.” There’s still some type of mistake here, but it has to do with an overly strong influence of personality on beliefs, not the epistemic principle in question here.
Suppose that you have in front of you a coin with some unknown bias X. Your beliefs about the bias of the coin are captured in a probability distribution over possible values of X, P(X). Now you flip the coin and observe whether it lands heads or tails. If it lands H, your new state of belief is P(X | H). If it lands T, your new state of belief is P(X | T). So before you observe the coin flip, your new expected distribution over the possible biases is just the weighted sum of these two:
Posterior is P(X | H) with probability P(H)
Posterior is P(X | T) with probability P(T)
The weighted sum is P(X | H) P(H) + P(X | T) P(T) = P(X, H) + P(X, T) = P(X): our expected posterior distribution is exactly our prior distribution! In other words, you can’t anticipate any particular change in your prior distribution. This makes some intuitive sense… if you knew beforehand that your prior would change in a particular direction, then you should have already changed in that direction!
In general: Take any hypothesis H and some piece of relevant evidence that will either turn out E or -E. Suppose you have some prior credence P(H) in this hypothesis before observing either E or -E. Your expected final credence is just:
P(H | E) P(E) + P(H | -E) P(-E)
Which of course is just another way of writing the prior credence P(H).
P(H) = P(H | E) P(E) + P(H | -E) P(-E)
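Here's a quick numerical check of this identity in Python, using made-up numbers for the prior and likelihoods:

```python
# Made-up example numbers for a hypothesis H and a possible observation E.
p_h = 0.3               # prior P(H)
p_e_given_h = 0.8       # P(E | H)
p_e_given_not_h = 0.2   # P(E | -H)

p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)   # total probability of E

# Posterior credence in H after each possible observation (Bayes' theorem).
p_h_given_e = p_e_given_h * p_h / p_e
p_h_given_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)

# Expected posterior, weighted by how likely each observation is.
expected = p_h_given_e * p_e + p_h_given_not_e * (1 - p_e)
print(expected)   # 0.3 (up to float rounding) -- exactly the prior P(H)
```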
Eliezer Yudkowsky has named this idea “conservation of expected evidence” – for any possible expectation of evidence you might receive, you should have an equal and opposite expectation of evidence in the other direction. It should never be the case that a hypothesis is confirmed no matter what evidence you receive. If E counts as evidence for H, then -E should count against it. And if you have a strong expectation of weak evidence E, then you should have a weak expectation of strong evidence -E. (If you strongly expect to observe E, then it is weak evidence, and correspondingly you should have a weak expectation of observing -E, which would be strong evidence.)
This is a pretty powerful idea, and I find myself applying it as a thinking tool pretty regularly. (And Yudkowsky’s writings on LessWrong are chock-full of this type of probability-theoretic wisdom; I highly recommend them.)
Today I was thinking about how we could determine, if not the average change in beliefs, the average amount that beliefs change. That is, while you can’t say beforehand that you will become more confident that a hypothesis is true, you can still say something about how much you expect your confidence to change as an absolute value.
Let’s stick with our simple coin-tossing example for illustration. Our prior distribution over possible biases is P(X). This distribution has some characteristic mean µ and standard deviation σ; note that the probability of heads on the next flip is P(H) = E[X] = µ. We can also describe the mean of each possible posterior distribution: µ_H = E[X | H] = (µ² + σ²) / µ, and µ_T = E[X | T] = (µ − µ² − σ²) / (1 − µ).
Now we can look at how far away each of these updated means is from the original mean: µ_H − µ = σ² / µ, and µ_T − µ = −σ² / (1 − µ).
We want to average these differences, weighted by how likely we think we are to observe them. But if we did this, we’d just get zero: µ (σ² / µ) + (1 − µ) (−σ² / (1 − µ)) = σ² − σ² = 0. Why? Conservation of expected evidence rearing its head again! The average of the differences in means is the average amount that you think that your expectation of heads will move, and this cannot be nonzero.
What we want is a quantity that captures the absolute distance between the new means and the original mean. The standard way of doing this is to take the square root of the probability-weighted sum of the squared differences: ∆µ = √( µ (µ_H − µ)² + (1 − µ) (µ_T − µ)² ).
This gives us: ∆µ = σ² / √( µ (1 − µ) ).
This gives us a measure of how strong of a belief update we should expect to receive. I haven’t heard very much about this quantity (the square root of the weighted sum of the squares of the changes in means for all possible evidential updates), but it seems pretty important.
Notice also that this scales with the variance on our prior distribution. This makes a lot of sense, because a small variance on your prior implies a high degree of confidence, which entails a weak belief update. Similarly, a large variance on your prior implies a weakly held belief, and thus a strong belief update.
Let’s see what this measure gives for some simple distributions. First, the uniform distribution: here µ = 1/2 and σ² = 1/12, so ∆µ = (1/12) / √(1/4) = 1/6.
In this case, the predicted change in mean is exactly right! If we get H, our new mean becomes 2/3 (a change of +1/6) and if we get T, our new mean becomes 1/3 (a change of -1/6). Either way, the mean value of our distribution shifts by 1/6, just as our calculation predicted!
Let’s imagine you start maximally uncertain about the bias (the uniform prior) and then observe a H. Now with this new distribution, what is your expected magnitude of belief change? The updated distribution has µ = 2/3 and σ² = 1/18, giving ∆µ = (1/18) / √(2/9) = 1/(6√2) ≈ 0.118.
Notice that this magnitude of belief change is smaller than for the original distribution. Once again, this makes a lot of sense: after getting more information, your beliefs become more ossified and less mutable.
In general, we can calculate ∆µ for the distribution that results from observing n heads and m tails, starting with a flat prior: ∆µ = √((n + 1)(m + 1)) / ((n + m + 2)(n + m + 3)).
Asymptotically as n and m go to infinity (as you flip the coin arbitrarily many times), this relation becomes ∆µ ≈ √(nm) / (n + m)², which for n ≈ m falls off like 1 / (4n).
Which clearly goes to zero, and pretty quickly as well. This amounts to the statement that as you get lots and lots of information, each subsequent piece of information matters comparatively less.
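To check these numbers, here's a short Python sketch (my own, relying on the standard fact that a flat prior updated on n heads and m tails becomes a Beta(n+1, m+1) distribution):

```python
from math import sqrt

def delta_mu(n, m):
    """Expected size of the shift in the mean of a Beta(n+1, m+1) belief
    distribution over the coin's bias, after one more flip."""
    a, b = n + 1, m + 1
    mu = a / (a + b)               # current mean, also the probability of heads
    mu_h = (a + 1) / (a + b + 1)   # mean after observing one more heads
    mu_t = a / (a + b + 1)         # mean after observing one more tails
    # Square root of the probability-weighted sum of squared changes in the mean.
    return sqrt(mu * (mu_h - mu) ** 2 + (1 - mu) * (mu_t - mu) ** 2)

print(delta_mu(0, 0))    # flat prior: 1/6 ≈ 0.1667
print(delta_mu(1, 0))    # after one heads: ≈ 0.1179, smaller
print(delta_mu(50, 50))  # after 100 flips: ≈ 0.0049, heading to zero
```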
There are lots of things that I don’t know, like, say, what the birth rate in Sweden is or what the effect of poverty on IQ is. There are also lots of things that I find really confusing and hard to understand, like quantum field theory and monetary policy. There’s also a special category of things that I find conceptually puzzling. These things aren’t difficult to grasp because the facts about them are difficult to understand or require learning complicated jargon. Instead, they’re difficult to grasp because I suspect that I’m confused about the concepts in use.
This is a much deeper level of confusion. It can’t be adjudicated by just reading lots of facts about the subject matter. It requires philosophical reflection on the nature of these concepts, which can sometimes leave me totally confused about everything and grasping for the solid ground of mere factual ignorance.
As such, it feels like a big deal when something I’ve been conceptually puzzled about becomes clear. I want to compile a list for future reference of things that I’m currently conceptually puzzled about and things that I’ve become un-puzzled about. (This is not a complete list, but I believe it touches on the major themes.)
Things I’m conceptually puzzled about
What is the relationship between consciousness and physics?
Essentially, at this point every available viewpoint on consciousness seems wrong to me.
Eliminativism amounts to a denial of pretty much the only thing that we can be sure can’t be denied – that we are having conscious experiences. Physicalism entails the claim that facts about conscious experience can be derived from laws of physics, which is wrong as a matter of logic.
Dualism entails that the laws of physics by themselves cannot account for the behavior of the matter in our brains, which is wrong. And epiphenomenalism entails that our beliefs about our own conscious experience are almost certainly wrong, and are no better representations of our actual conscious experiences than random chance.
How do we make sense of decision theory if we deny libertarian free will?
Decision theory is ultimately about finding the decision D that maximizes expected utility EU(D). But to do this calculation, we have to decide what the set of possible decisions we are searching over is.
Make this set too large, and you end up getting fantastical and impossible results (like that the optimal decision is to snap your fingers and make the world into a utopia). Make it too small, and you end up getting underwhelming results (in the extreme case, you just get that the optimal decision is to do exactly what you are going to do, since this is the only thing you can do in a strictly deterministic world).
We want to find a nice middle ground between these two – a boundary where we can say “inside here the things that are actually possible for us to do, and outside are those that are not.” But any principled distinction between what’s in the set and what’s not must be based on some conception of some actions being “truly possible” to us, and others being truly impossible. I don’t know how to make this distinction in the absence of a robust conception of libertarian free will.
Is there an objectively correct choice of priors?
If you say no, then there are no objectively right answers to questions like “What should I believe given the evidence I have?” And if you say yes, then you have to deal with thought experiments like the cube problem, where any choice of priors looks arbitrary and unjustifiable.
(If you are going to be handed a cube, and all you know is that it has a volume less than 1 cm³, then setting maximum entropy priors over volumes gives different answers than setting maximum entropy priors over side areas or side lengths. This means that what qualifies as “maximally uncertain” depends on whether we frame our reasoning in terms of side length, areas, or cube volume. Other approaches besides MaxEnt have similar problems of concept dependence.)
How should we deal with infinities in decision theory?
The basic problem is that expected utility theory does great at delivering reasonable answers when the rewards are finite, but becomes wacky when the rewards become infinite. There are a huge number of examples of this. For instance, in the St. Petersburg paradox, you are given the option to play a game with an infinite expected payout, suggesting that you should buy into the game no matter how high the cost. You end up making obviously irrational choices, such as spending $1,000,000 on the hope that a fair coin will land heads 20 times in a row. Variants of this involve the inability of EU theory to distinguish between obviously better and worse bets that have infinite expected value.
And Pascal’s mugging is an even worse case. Roughly speaking, a person comes up to you and threatens you with infinite torture if you don’t submit to them and give them 20 dollars. Now, the probability that this threat is credible is surely tiny. But it is non-zero! (as long as you don’t think it is literally logically impossible for this threat to come true)
An infinite penalty times a finite probability is still an infinite expected penalty. So we stand to gain an infinite expected utility by just handing over the 20 dollars. This seems ridiculous, but I don’t know any reasonable formalization of decision theory that allows me to refute it.
Is causality fundamental?
Causality has been nicely formalized by Pearl’s probabilistic graphical models. This is a simple extension of probability theory, out of which naturally falls causality and counterfactuals.
One can use this framework to represent the states of fundamental particles and how they change over time and interact with one another. What I’m confused about is that in some ways of looking at it, the causal relations appear to be useful but un-fundamental constructs for the sake of easing calculations. In other ways of looking at it, causal relations are necessarily built into the structure of the world, and we can go out and empirically discover them. I don’t know which is right. (Sorry for the vagueness in this one – it’s confusing enough to me that I have trouble even precisely phrasing the dilemma).
How should we deal with the apparent dependence of inductive reasoning upon our choices of concepts?
I’ve written about this here. Beyond just the problem of concept-dependence in our choices of priors, there’s also the problem presented by the grue/bleen thought experiment.
This thought experiment proposes two new concepts: grue (= the set of things that are either green before 2100 or blue after 2100) and bleen (the inverse of grue). It then shows that if we reasoned in terms of grue and bleen, standard induction would have us concluding that all emeralds will suddenly turn blue after 2100. (We repeatedly observed them being grue before 2100, so we should conclude that they will be grue after 2100.)
In other words, choose the wrong concepts and induction breaks down. This is really disturbing – choices of concepts should be merely pragmatic matters! They shouldn’t function as fatal epistemic handicaps. And given that they appear to, we need to develop some criterion we can use to determine what concepts are good and what concepts are bad.
The trouble with this is that the only proposals I’ve seen for such a criterion reference the idea of concepts that “carve reality at its joints”; in other words, the world is composed of green and blue things, not grue and bleen things, so we should use the former rather than the latter. But this relies on the outcome of our inductive process to draw conclusions about the starting step on which this outcome depends!
I don’t know how to cash out “good choices of concepts” without ultimately reasoning circularly. I also don’t even know how to make sense of the idea of concepts being better or worse for more than merely pragmatic reasons.
How should we reason about self defeating beliefs?
The classic self-defeating belief is “This statement is a lie.” If you believe it, then you are compelled to disbelieve it, eliminating the need to believe it in the first place. Broadly speaking, self-defeating beliefs are those that undermine the justifications for belief in them.
Here’s an example that might actually apply in the real world: Black holes glow. The process of emission is known as Hawking radiation. In principle, any configuration of particles with a mass less than the black hole can be emitted from it. Larger configurations are less likely to be emitted, but even configurations such as a human brain have a non-zero probability of being emitted. Henceforth, we will call such configurations black hole brains.
Now, imagine discovering some cosmological evidence that the era in which life can naturally arise on planets circling stars is finite, and that after this era there will be an infinite stretch of time during which all that exists are black holes and their radiation. In such a universe, the expected number of black hole brains produced is infinite (a tiny finite probability multiplied by an infinite stretch of time), while the expected number of “ordinary” brains produced is finite (assuming a finite spatial extent as well).
What this means is that discovering this cosmological evidence should give you an extremely strong boost in credence that you are a black hole brain. (Simply because most brains in your exact situation are black hole brains.) But most black hole brains have completely unreliable beliefs about their environment! They are produced by a stochastic process which cares nothing for producing brains with reliable beliefs. So if you believe that you are a black hole brain, then you should suddenly doubt all of your experiences and beliefs. In particular, you have no reason to think that the cosmological evidence you received was veridical at all!
I don’t know how to deal with this. It seems perfectly possible to find evidence for a scenario that suggests that we are black hole brains (I’d say that we have already found such evidence, multiple times). But then it seems we have no way to rationally respond to this evidence! In fact, if we do a naive application of Bayes’ theorem here, we find that the probability of receiving any evidence in support of black hole brains is 0!
So we have a few options. First, we could rule out any possible skeptical scenarios like black hole brains, as well as anything that could provide any amount of evidence for them (no matter how tiny). Or we could accept the possibility of such scenarios but face paralysis upon actually encountering evidence for them! Both of these seem clearly wrong, but I don’t know what else to do.
How should we reason about our own existence and indexical statements in general?
This is called anthropic reasoning. I haven’t written about it on this blog, but expect future posts on it.
A thought experiment: imagine a murderous psychopath who has decided to go on an unusual rampage. He will start by abducting one random person. He rolls a pair of dice, and kills the person if they land snake eyes (1, 1). If not, he lets them free and hunts down ten new people. Once again, he rolls his pair of dice. If he gets snake eyes he kills all ten. Otherwise he frees them and kidnaps 100 new people. On and on until he eventually gets snake eyes, at which point his murder spree ends.
Now, you wake up and find that you have been abducted. You don’t know how many others have been abducted alongside you. The murderer is about to roll the dice. What is your chance of survival?
Your first thought might be that your chance of death is just the chance of both dice landing 1: 1/36. But think instead about the proportion of all people that are ever abducted by him that end up dying. This value ends up being roughly 90%! So once you condition on the information that you have been captured, you end up much more pessimistic about your chances of survival.
But at the same time, it seems really wrong to be watching the two dice tumble and internally thinking that there is a 90% chance that they land snake eyes. It’s as if you’re imagining that there’s some weird anthropic “force” pushing the dice towards snake eyes. There’s way more to say about this, but I’ll leave it for future posts.
Things I’ve become un-puzzled about
Newcomb’s problem – one box or two box?
To almost everyone, it is perfectly clear and obvious what should be done. The difficulty is that these people seem to divide almost evenly on the problem, with large numbers thinking that the opposing half is just being silly.
– Nozick, 1969
I’ve spent months and months being hopelessly puzzled about Newcomb’s problem. I now am convinced that there’s an unambiguous right answer, which is to take the one box. I wrote up a dialogue here explaining the justification for this choice.
In a few words, you should one-box because one-boxing makes it nearly certain that the simulation of you run by the predictor also one-boxed, thus making it nearly certain that you will get 1 million dollars. The dependence between your action and the simulation is not an ordinary causal dependence, nor even a spurious correlation – it is a logical dependence arising from the shared input-output structure. It is the same type of dependence that exists in the clone prisoner’s dilemma, where you can defect or cooperate with an individual you are assured is identical to you in every single way. When you take into account this logical dependence (also called subjunctive dependence), the answer is unambiguous: one-boxing is the way to go.
I’m in the mood for puzzles, so here’s another one. This one is so good that it deserves its own post.
The setup (from wiki):
The director of a prison offers 100 death row prisoners, who are numbered from 1 to 100, a last chance. A room contains a cupboard with 100 drawers. The director randomly puts one prisoner’s number in each closed drawer. The prisoners enter the room, one after another. Each prisoner may open and look into 50 drawers in any order. The drawers are closed again afterwards.
If, during this search, every prisoner finds his number in one of the drawers, all prisoners are pardoned. If just one prisoner does not find his number, all prisoners die. Before the first prisoner enters the room, the prisoners may discuss strategy—but may not communicate once the first prisoner enters to look in the drawers. What is the prisoners’ best strategy?
Suppose that each prisoner selects 50 drawers at random, and the prisoners don’t coordinate with one another. Then the chance that any particular prisoner finds their own number is 50%. This means that the chance that all 100 find their own number is 1/2¹⁰⁰.
Let me emphasize how crazily small this is. 1/2¹⁰⁰ is 1/1,267,650,600,228,229,401,496,703,205,376; less than one in a nonillion. If there were 100 prisoners trying exactly this setup every millisecond, it would take them 40 billion billion years to get out alive once. This is about 3 billion times longer than the age of the universe.
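A quick back-of-the-envelope check of those numbers in Python (my own arithmetic, assuming one attempt every millisecond and a 13.8-billion-year-old universe):

```python
attempts_needed = 2 ** 100                      # expected number of attempts per success
ms_per_year = 1000 * 60 * 60 * 24 * 365.25
years = attempts_needed / ms_per_year           # one attempt every millisecond
print(f"{years:.1e} years")                     # ~4.0e+19, i.e. 40 billion billion years
print(f"{years / 13.8e9:.1e} ages of the universe")   # ~2.9e+09, roughly 3 billion
```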
Okay, so that’s a bad strategy. Can we do better?
It’s hard to imagine how… While the prisoners can coordinate beforehand, they cannot share any information. So every time a prisoner comes in for their turn at the drawers, they are in exactly the same state of knowledge as if they hadn’t coordinated with the others.
Given this, how could we possibly increase the survival chance beyond 1/2¹⁰⁰?
(Try to answer for yourself before continuing)
Let’s consider a much simpler case. Imagine we have just two prisoners, two drawers, and each one can only open one of them. Now if both prisoners choose randomly, there’s only a 1 in 4 chance that they both survive.
What if they agree to open the same drawer? Then they have reduced their survival chance from 25% to 0%! Why? Because by choosing the same drawer, they either both get the number 1, or they both get the number 2. In either case, they are guaranteed that only one of them gets their own number.
So clearly the prisoners can decrease the survival probability by coordinating beforehand. Can they increase it?
Yes! Suppose that they agree to open different drawers. Then this doubles their survival chance from 25% to 50%. Either they both get their own number, or they both get the wrong number.
The key here is to minimize the overlap between the choices of the prisoners. Unfortunately, this sort of strategy doesn’t scale well. If we have four prisoners, each allowed to open two drawers, then random drawing gives a 1/16 survival chance.
Let’s say they open according to the following scheme: 12, 34, 13, 24 (first prisoner opens drawers 1 and 2, second opens 3 and 4, and so on). Then out of the 24 possible drawer layouts, the only layouts that work are 1432 and 3124:
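Here's a tiny brute-force check of that claim in Python (my own verification, not part of the original post):

```python
from itertools import permutations

# Prisoner k (0-indexed) opens these drawers under the 12, 34, 13, 24 scheme.
opens = [(0, 1), (2, 3), (0, 2), (1, 3)]

all_layouts = list(permutations(range(4)))   # layout[d] = number hidden in drawer d
successes = [layout for layout in all_layouts
             if all(k in {layout[d] for d in opens[k]} for k in range(4))]

print(len(all_layouts), len(successes))   # 24 layouts, 2 of which work
print(successes)   # (0, 3, 2, 1) and (2, 0, 1, 3) -- i.e. 1432 and 3124 in 1-indexed form
```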
Scaling this up to 100 prisoners, the odds of survival look pretty measly. Can they do better than this?
(Try to answer for yourself before continuing)
It turns out that yes, there is a strategy that does better at ensuring survival. In fact, it does so much better that the survival chance is over 30 percent!
Take a moment to boggle at this. Somehow we can leverage the dependency induced by the prisoners’ coordination to increase the chance of survival by a factor of roughly 4 × 10²⁹, even though none of their states of knowledge are any different. It’s pretty shocking to me that this is possible.
Here’s the strategy: Each time a prisoner opens a drawer, they consult the number in that drawer to determine which drawer they will open next. Thus each prisoner only has to decide on the first drawer to open, and all the rest of the drawers follow from this. Importantly, the prisoner only knows the first drawer they’ll pick; the other 49 are determined by the distribution of numbers in the drawers.
We can think about each drawer as starting a chain through the other drawers. These chains always cycle back into the starting number, the longest possible cycle being 100 numbers and the shortest being 1. Now, each prisoner can guarantee that they are in a cycle that contains their own number by choosing the drawer corresponding to their own number!
So, the strategy is that Prisoner N starts by choosing Drawer N, looking at the number within, then choosing the drawer labeled with that number. Repeat 50 times per each prisoner.
The wiki page has a good description of how to calculate the survival probability with this strategy:
The prison director’s assignment of prisoner numbers to drawers can mathematically be described as a permutation of the numbers 1 to 100. Such a permutation is a one-to-one mapping of the set of natural numbers from 1 to 100 to itself. A sequence of numbers which after repeated application of the permutation returns to the first number is called a cycle of the permutation. Every permutation can be decomposed into disjoint cycles, that is, cycles which have no common elements.
In the initial problem, the 100 prisoners are successful if the longest cycle of the permutation has a length of at most 50. Their survival probability is therefore equal to the probability that a random permutation of the numbers 1 to 100 contains no cycle of length greater than 50. This probability is determined in the following.
A permutation of the numbers 1 to 100 can contain at most one cycle of length l > 50. There are exactly (100 choose l) ways to select the numbers of such a cycle. Within this cycle, these numbers can be arranged in (l − 1)! ways, since there are l rotations representing the same cycle of length l because of cyclic symmetry. The remaining numbers can be arranged in (100 − l)! ways. Therefore, the number of permutations of the numbers 1 to 100 with a cycle of length l > 50 is equal to (100 choose l) · (l − 1)! · (100 − l)! = 100!/l.
The probability that a (uniformly distributed) random permutation contains no cycle of length greater than 50 is therefore, using the formula for single events and the formula for complementary events, 1 − (1/51 + 1/52 + … + 1/100).
This is about 31.1828%.
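For anyone who wants to verify this, here's a short Python sketch (my own, not from the wiki page) that computes the exact probability from the formula above and confirms it with a Monte Carlo simulation of the cycle-following strategy:

```python
import random

def exact_survival(n=100):
    """P(a random permutation of n numbers has no cycle longer than n/2)."""
    return 1 - sum(1 / l for l in range(n // 2 + 1, n + 1))

def simulated_survival(n=100, trials=10_000):
    survived = 0
    for _ in range(trials):
        drawers = list(range(n))
        random.shuffle(drawers)          # drawers[d] = prisoner number inside drawer d
        all_found = True
        for prisoner in range(n):
            d = prisoner                 # start at the drawer matching your own number
            for _ in range(n // 2):      # allowed to open n/2 drawers
                if drawers[d] == prisoner:
                    break                # found your own number
                d = drawers[d]           # follow the number you just saw
            else:
                all_found = False        # ran out of allowed drawers
                break
        survived += all_found
    return survived / trials

print(exact_survival())       # ~0.3118
print(simulated_survival())   # close to 0.31
```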
This formula easily generalizes to other numbers of prisoners. We can plot the survival chance using this strategy as a function of the number of prisoners:
Amazingly, we can see that this value is asymptoting at a value greater than 30%. The precise limit is 1 – ln2 ≈ 30.685%.
In other words, no matter how many prisoners there are, we can always ensure that the survival probability is greater than 30%! This is pretty remarkable, and I think there are some deep lessons to be drawn from it, but I’m not sure what they are.
(Most of these are taken from Ian Hacking’s Introduction to Probability and Inductive Logic.)
About as many boys as girls are born in hospitals. Many babies are born every week at City General. In Cornwall, a country town, there is a small hospital where only a few babies are born every week.
Define a normal week as one where between 45% and 55% of babies are female. An unusual week is one where more than 55% or less than 45% are girls.
Which of the following is true:
(a) Unusual weeks occur equally often at City General and at Cornwall.
(b) Unusual weeks are more common at City General than at Cornwall.
(c) Unusual weeks are more common at Cornwall than at City General.
Pia is 31 years old, single, outspoken, and smart. She was a philosophy major. When a student, she was an ardent supporter of Native American rights, and she picketed a department store that had no facilities for nursing mothers.
Which of the following statements are most probable? Which are least probable?
(a) Pia is an active feminist.
(b) Pia is a bank teller.
(c) Pia works in a small bookstore.
(d) Pia is a bank teller and an active feminist.
(e) Pia is a bank teller and an active feminist who takes yoga classes.
(f) Pia works in a small bookstore and is an active feminist who takes yoga classes.
You have been called to jury duty in a town with only green and blue taxis. Green taxis dominate the market, with 85% of the taxis on the road.
On a misty winter night a taxi sideswiped another car and drove off. A witness said it was a blue cab. This witness is tested under similar conditions, and gets the color right 80% of the time.
You conclude about the sideswiping taxi:
(a) The probability that it is blue is 80%.
(b) It is probably blue, but with a lower probability than 80%.
(c) It is equally likely to be blue or green.
(d) It is more likely than not to be green.
You are a physician. You think that it’s quite likely that a patient of yours has strep throat. You take five swabs from the throat of this patient and send them to a lab for testing.
If the patient has strep throat, the lab results are right 70% of the time. If not, then the lab is right 90% of the time.
The test results come back: YES, NO, NO, YES, NO
(a) The results are worthless.
(b) It is likely that the patient does not have strep throat.
(c) It is slightly more likely than not that the patient does have strep throat.
(d) It is very much more likely than not that the patient does have strep throat.
In a country, all families want a boy. They keep having babies until a boy is born. What is the expected ratio of boys to girls in the country?
Answer the following series of questions:
If you flip a fair coin twice, do you have the same chance of getting HH as you have of getting HT?
If you flip the coin repeatedly until you get HH, does this result in the same average number of flips as if you repeat until you get HT?
If you flip it repeatedly until either HH emerges or HT emerges, is either outcome equally likely?
You play a game with a friend in which you each choose a sequence of three possible flips (e.g HHT and TTH). You then flip the coin repeatedly until one of the two patterns emerges, and whosever pattern it is wins the game. You get to see your friend’s choice of pattern before deciding yours. Are you ever able to bias the game in your favor?
Are you always able to bias the game in your favor?
Solutions (and lessons)
The correct answer is (c): Unusual weeks occur more often at Cornwall than at City General. Even though the chance of a boy is the same at Cornwall as it is at City General, the week-to-week fluctuation in the percentage of boys is larger at the smaller hospital (for N births a week, the typical deviation from 50% goes like 1/sqrt(N)). Indeed, if you think about an extreme case where Cornwall has only one birth a week, then every week will be an unusual week (100% boys or 0% boys).
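To put rough numbers on this, here's a quick Python check using the binomial distribution, with hypothetical weekly birth counts of 200 at City General and 20 at Cornwall:

```python
from math import comb

def p_unusual_week(n_births, p_girl=0.5):
    """Probability that the fraction of girls falls outside [0.45, 0.55]."""
    p_normal = sum(comb(n_births, k) * p_girl**k * (1 - p_girl)**(n_births - k)
                   for k in range(n_births + 1)
                   if 0.45 <= k / n_births <= 0.55)
    return 1 - p_normal

print(p_unusual_week(200))   # big hospital: unusual weeks are fairly rare (~0.14)
print(p_unusual_week(20))    # small hospital: unusual weeks are common (~0.50)
```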
There is room to debate the exact answer, but whatever it is, it has to obey some constraints. Namely, the most probable statement cannot be (d), (e), or (f), and the least probable statement cannot be (a), (b), or (c). Why? Because of the conjunction rule of probability: each of (d), (e), and (f) is a conjunction that includes one of (a), (b), or (c), and a conjunction can never be more probable than either of its conjuncts. P(A & B) ≤ P(A).
It turns out that most people violate this constraint. Many people answer that (f) is the most probable description, and (b) is the least probable. This result is commonly interpreted to reveal a cognitive bias known as the representativeness heuristic – essentially, that our judgements of likelihood are made by considering which descriptions most closely resemble the known facts. In this case, the richly detailed descriptions of Pia as a feminist bookstore worker resemble the sketch of her much more closely than the bare statement that she is a bank teller, so they feel more probable even though they are logically less probable.
Another factor to consider is that prior to considering the evidence, your odds on a given person being a bank teller as opposed to working in a small bookstore should be heavily weighted towards her being a bank teller. There are just far more bank tellers than small bookstore workers (maybe a factor of around 20:1). This does not necessarily mean that (b) is more likely than (c), but it does mean that the evidence must discriminate strongly enough against her being a bank teller so as to overcome the prior odds.
This leads us to another lesson, which is to not neglect the base rate. It is easy to ignore the prior odds when it feels like we have strong evidence (Pia’s age, her personality, her major, etc.). But the base rates on small bookstore workers and bank tellers are very relevant to the final judgement.
The correct answer is (d) – it is more likely than not that the sideswiper was green. This is a basic case of base rate neglect – many people would see that the witness is right 80% of the time and conclude that the witness’s testimony has an 80% chance of being correct. But this is ignoring the prior odds on the content of the witness’s testimony.
In this case, there were prior odds of 17:3 (85%:15%) in favor of the taxi being green. The evidence had a strength of 1:4 (20%:80%), resulting in the final odds being 17:12 in favor of the taxi being green. Translating from odds to probabilities, we get a roughly 59% chance of the taxi having been green.
We could have concluded (d) very simply by just comparing the prior probability (85% for green) with the evidence (80% for blue), and noticing that the evidence would not be strong enough to make blue more likely than green (since 85% > 80%). Being able to very quickly translate between statistics and conclusions is a valuable skill to foster.
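Here’s that odds arithmetic as a tiny code sketch, using only the numbers already given:

```python
# Posterior odds = prior odds x likelihood ratio, then convert the odds to a probability.
prior_green_vs_blue = 85 / 15       # base rate: 85% of the taxis are green
evidence_green_vs_blue = 20 / 80    # witness says "blue" and is right 80% of the time
posterior_odds = prior_green_vs_blue * evidence_green_vs_blue
p_green = posterior_odds / (1 + posterior_odds)
print(f"posterior odds green:blue = {posterior_odds:.3f}")   # 17/12, about 1.417
print(f"P(green) = {p_green:.1%}")                           # about 58.6%
```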
The right answer is (d). We calculate this just like we did the last time:
The results were YES, NO, NO, YES, NO.
Each YES provides evidence with strength 7:1 (70%/10%) in favor of strep, and each NO provides evidence with strength 1:3 (30%/90%).
So our strength of evidence is 7:1 ⋅ 1:3 ⋅ 1:3 ⋅ 7:1 ⋅ 1:3 = 49:27, or roughly 1.81:1 in favor of strep. This might be a little surprising… we got more NOs than YESs and the NO was correct 90% of the time for people without strep, compared to the YES being correct only 70% of the time in people with strep.
Since the evidence is in favor of strep, and we started out already thinking that strep was quite likely, in the end we should be very convinced that they have strep. If our prior on the patient having strep was 75% (3:1 odds), then our probability after getting evidence will be 84% (49:9 odds).
Again, surprising! The patient who sees these results and hears the doctor declaring that the test strengthens their belief that the patient has strep might feel that this is irrational and object to the conclusion. But the doctor would be right!
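Here’s the same sequential update as a quick code sketch, using the 75% (3:1) prior from above and the 70%/10% and 30%/90% test characteristics:

```python
# Multiply the prior odds by the likelihood ratio of each test result in turn.
prior_odds = 3 / 1                        # 75% prior probability of strep
likelihood_ratio = {"YES": 0.7 / 0.1,     # P(YES | strep) / P(YES | no strep) = 7
                    "NO": 0.3 / 0.9}      # P(NO | strep)  / P(NO | no strep)  = 1/3

odds = prior_odds
for result in ["YES", "NO", "NO", "YES", "NO"]:
    odds *= likelihood_ratio[result]

print(f"posterior odds = {odds:.2f}")           # 49/9, about 5.44
print(f"P(strep) = {odds / (1 + odds):.1%}")    # about 84.5%
```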
Supposing as before that the chance of any given birth being a boy is equal to the chance of it being a girl, we end up concluding…
The expected ratio of boys to girls in the country is 1! That is, this strategy doesn’t allow you to “cheat” – it has no impact at all on the ratio. Why? I’ll leave this one for you to figure out. Here’s a diagram for a hint:
This is important because it applies to the problem of p-hacking. Imagine that researchers repeatedly run studies until they get the results they like, and publish only those results. Now suppose instead that all the researchers in the world are required to publish every study that they do. Can they still bias the published record in favor of the results they like? No! Even though they always stop when they get the result they like, the aggregate of their studies is unbiased evidence. They can’t game the system!
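For a numerical check of the boys-and-girls result, here’s a minimal simulation sketch, assuming only that each birth is independently a boy with probability ½:

```python
import random

def simulate(num_families, rng):
    # Every family keeps having children until its first boy, then stops.
    boys = girls = 0
    for _ in range(num_families):
        while rng.random() >= 0.5:   # a girl: keep going
            girls += 1
        boys += 1                    # the first boy: stop
    return boys, girls

rng = random.Random(0)
boys, girls = simulate(100_000, rng)
print(boys, girls, boys / girls)     # the ratio comes out very close to 1
```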
Answers, in order:
If you flip a fair coin twice, do you have the same chance of getting HH as you have of getting HT? (Yes)
If you flip it repeatedly until you get HH, does this result in the same average number of flips as if you repeat until you get HT? (No)
If you flip it repeatedly until either HH emerges or HT emerges, is either outcome equally likely? (Yes)
You play a game with a friend in which you each choose a sequence of three coin flips (e.g. HHT and TTH). You then flip a coin repeatedly until one of the two patterns emerges, and whoever’s pattern it is wins the game. You get to see your friend’s choice of pattern before deciding yours. Are you ever able to bias the game in your favor? (Yes)
Are you always able to bias the game in your favor? (Yes!)
Here’s a wiki page with a good explanation of this: LINK. A table from that page illustrating a winning strategy for any choice your friend makes:
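Here’s a short simulation sketch of the standard counter-strategy (take the opposite of your friend’s middle flip, then copy their first two flips):

```python
import random

def counter_strategy(opponent):
    # Respond to the pattern b1 b2 b3 with (not b2) b1 b2.
    flip = {"H": "T", "T": "H"}
    return flip[opponent[1]] + opponent[:2]

def you_win(yours, theirs, rng):
    # Flip until one of the two three-flip patterns shows up at the end of the sequence.
    seq = ""
    while True:
        seq += rng.choice("HT")
        if seq.endswith(yours):
            return True
        if seq.endswith(theirs):
            return False

rng = random.Random(0)
for theirs in ["HHH", "HHT", "HTH", "HTT", "THH", "THT", "TTH", "TTT"]:
    yours = counter_strategy(theirs)
    wins = sum(you_win(yours, theirs, rng) for _ in range(20_000))
    # Every line should come out comfortably above 50%.
    print(f"friend: {theirs}  you: {yours}  your win rate: {wins / 20_000:.1%}")
```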
Plausible reasoning, unlike logical deduction, is sensitive not only to the information at hand but also to the query process by which the information was obtained.
Judea Pearl, Probabilistic Reasoning in Intelligent Systems
This quote references an interesting feature of inductive reasoning that’s worth unpacking. It points to the greater complexity involved in formalizing induction compared with formalizing deduction.
A very simple example of this:
You spread a rumor to your neighbor N. A few days later you hear the same rumor from another neighbor N’. Should you increase your belief in the rumor now that N’ repeats it, or should you first determine whether N’ heard it from N?
Clearly, if the only source of information for N’ was N, then your belief should not change. But if N’ independently confirmed the validity of the rumor, you have good reason to increase your belief in it.
In general, when you have both top-down (predictive) and bottom-up (explanatory/diagnostic) inferences in evidential reasoning, it is important to be able to trace back queries. If not, one runs the risk of engaging in circular reasoning.
So far this is all fairly obvious. Now here’s an example that’s more subtle.
Three prisoners problem
Three prisoners have been tried for murder, and their verdicts will be read tomorrow morning. Only one will be declared guilty, and the other two will be declared innocent.
Before sentencing, Prisoner A asks the guard (who knows which prisoner will be declared guilty) to do him a favor and give a letter to one of the other two prisoners who will be released (since only one person will be declared guilty, Prisoner A knows that at least one of the other two prisoners will be released). The guard does so, and later, Prisoner A asks him which of the two prisoners (B or C) he gave the letter to. The guard responds “I gave the letter to Prisoner B.”
Now Prisoner A reasons as follows:
“Previously, my chances of being executed were one in three. Now that I know that B will be released, only C and I remain as candidates for being declared guilty. So now my chances are one in two.”
Is this wrong?
Denote “A is guilty” as GA, and “B is innocent” as IB. Now, since GA → IB, we have that P(IB | GA) = 1. This tells us that we can write
P(GA | IB) = P(IB | GA) P(GA) / P(IB) = 1 ⋅ ⅓ / ⅔ = ½
which seems to vindicate Prisoner A’s reasoning.
The problem with this argument is that it excludes some of the context of the guard’s response – namely, that the guard could only have answered “I gave the letter to Prisoner B” or “I gave the letter to Prisoner C.” In other words, conditioning on the bare fact “Prisoner B will be declared innocent” leads to the wrong conclusion about the credibility of A’s guilt.
Let’s instead condition on IB’ = “Guard says that B will be declared innocent.” Now we get
P(GA | IB’) = P(IB’ | GA) P(GA) / P(IB’) = ½ ⋅ ⅓ / ½ = ⅓.
It’s not sufficient to just condition on what the guard said. We must consider the range of possible statements that the guard could have made.
In general, we cannot just assess the impact of the propositions implied by the information we receive. We must also consider what other information we could have received.
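A minimal simulation sketch of the guard scenario makes this concrete (it assumes, as above, that the guard picks at random between B and C when A is guilty):

```python
import random

def p_a_guilty_given_guard_names_b(trials, rng):
    # Estimate P(A is guilty | guard says he gave the letter to B).
    names_b = names_b_and_a_guilty = 0
    for _ in range(trials):
        guilty = rng.choice("ABC")
        if guilty == "A":
            named = rng.choice("BC")   # guard picks between B and C at random
        elif guilty == "B":
            named = "C"                # the letter must go to an innocent prisoner
        else:
            named = "B"
        if named == "B":
            names_b += 1
            names_b_and_a_guilty += guilty == "A"
    return names_b_and_a_guilty / names_b

rng = random.Random(0)
print(p_a_guilty_given_guard_names_b(300_000, rng))   # comes out near 1/3, not 1/2
```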
Things get clearer if we consider a similar thought experiment.
1000 prisoners problem
You are one of 1000 prisoners awaiting sentencing, with the knowledge that only one of you will be declared guilty. You come across a slip of paper from the court listing 998 prisoners; each name marked ‘innocent’. You look through all 998 names and find that your name is missing.
This should worry you greatly – your chances of being declared guilty have gone from 1 in 1000 to 1 in 2.
But imagine that you now see the query that produced the list.
Query: “Print the names of any 998 innocent right-handed prisoners.”
If you are the only left-handed prisoner, then you should thank your lucky stars. Why? Because now that you know that the query couldn’t have produced your name, the fact that it didn’t gives you no information. In other words, your chances have gone from 1 in 2 back to 1 in 1000.
In this example you can see very clearly why information about the possible outputs of a query is relevant to how we should update on the actual output of the query. We must know the process by which we attain information in order to be able to accommodate that information into our beliefs.
But now what if we don’t have this information? Suppose that you simply come across the list of prisoners, and have no additional knowledge about how it was produced. Well, then we must consider all possible queries that might have produced this output!
This is no small matter.
For simplicity, let’s reconsider the simpler example with just three prisoners: Prisoners A, B, and C. Imagine that you are Prisoner A.
You come across a slip of paper from the court containing the statement I, where
I = “Prisoner B will be declared innocent”.
Now, we must assess the impact of I on the proposition GA = “Prisoner A is guilty.”
The result of this calculation depends upon P(I | GA), or in other words, how likely it is that the slip would declare Prisoner B innocent, given that you are guilty. This depends on the query process, and can vary anywhere from 0 to 1. Let’s just give this variable a name: P(I | GA) = p.
We’ll also need to know two other probabilities: (1) that the slip declares B innocent given that B is guilty, and (2) that it declares B innocent given that C is guilty. We’ll assume that the slip cannot be lying (i.e. that the first of these is zero), and name the second probability q = P(I | GC).
P(I | GA) = p (slip could declare either B or C innocent)
P(I | GB) = 0 (slip could declare either A or C innocent)
P(I | GC) = q (slip could declare either A or B innocent)
Now we have
P(GA | I) = ⅓ p / P(I)
= ⅓ p / [P(I | GA) P(GA) + P(I | GB) P(GB) + P(I | GC) P(GC)]
= ⅓ p / (⅓ p + 0 + ⅓ q)
= p / (p + q)
How do we assess this value, given that p and q are unknown? The Bayesian solution is to treat the probabilities p and q as random variables, and specify probability distributions over their possible values: f(p) and g(q). These distributions should contain all of your prior knowledge about the plausible queries that might have produced I.
The final answer is obtained by integrating over all possible values of p and q.
Supposing that our distributions over p and q are maximally uncertain (uniform over [0, 1]), the final answer we obtain is
P(GA | I) = ∫∫ p/(p + q) dp dq = ½
Now suppose that we know that the slip could not declare A (yourself) innocent (as we do in the original three prisoners problem). Then we know that q = 1 (since if C is guilty and A couldn’t be on the slip, B is the only possible choice). This gives us
P(GA | I) = ∫ p/(p + 1) f(p) dp
If we are maximally uncertain about the value of p, we obtain
P(GA | I) = ∫ p/(p + 1) dp
= 1 – ln(2) ≈ 0.31
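As a quick numerical sanity check of that integral (taking “maximal uncertainty” to mean a uniform distribution for p over [0, 1]):

```python
import math

# Midpoint-rule approximation of the integral of p/(p + 1) over [0, 1].
n = 1_000_000
midpoints = ((i + 0.5) / n for i in range(n))
approx = sum(p / (p + 1) for p in midpoints) / n
print(approx)              # about 0.3069
print(1 - math.log(2))     # about 0.3069
```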
If, on the other hand, we are sure that the value of p is 50% (i.e., we know that in the case that A is guilty, the guard chooses randomly between B and C), we obtain
P(GA | I) = .5/(.5 + 1) = ⅓
We’ve re-obtained our initial result! Interestingly, we can see that being maximally uncertain about the guard’s procedure for choosing between B and C gives a different answer than knowing that the guard chooses totally randomly between B and C.
Notice that this is true even though these reflect the same expectation of what choice the guard will make!
I.e., in both cases (total uncertainty about p, and knowledge that p is exactly .5), we should have 50% credence in the guard choosing B. This gives us some insight into the importance of considering different types of uncertainty when doing induction, which is a topic for another post.