An expected value puzzle

Consider the following game setup:

Each round of the game starts with you putting in all of your money. If you currently have $10, then you must put in all of it to play. Now a coin is flipped. If it lands heads, you get back 10 times what you put in ($100). If not, then you lose it all. You can keep playing this game until you have no more money.

What does a perfectly rational expected value reasoner do?

Supposing that they value money roughly linearly with its quantity, the expected value of putting in the money is always positive. If you put in $X, you stand a 50% chance of getting $10X back and a 50% chance of losing it all. Your expected gain is thus (1/2)(10X) – X = 4X.

This means that the expected value reasoner would keep putting in their money until, eventually, they lose it all.
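A quick simulation makes the tension vivid (my own sketch; the $10 starting stake and the cap on rounds are arbitrary choices):

```python
import random

def play_all_in(starting_money=10, max_rounds=1000, rng=random):
    """Stake everything on a fair coin each round; heads pays 10x the stake."""
    money = starting_money
    for _ in range(max_rounds):
        if money == 0:
            break
        if rng.random() < 0.5:
            money *= 10  # heads: get back 10 times what you put in
        else:
            money = 0    # tails: lose it all
    return money

random.seed(0)
results = [play_all_in() for _ in range(10_000)]
fraction_broke = sum(r == 0 for r in results) / len(results)
print(fraction_broke)  # 1.0: everyone eventually goes broke
```

Even though each round has positive expected value, the probability of surviving n rounds is (1/2)^n, so the player who never stops goes broke with probability 1.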

What’s wrong with this line of reasoning (if anything)? Does it serve as a reductio ad absurdum of expected value reasoning?

A closer look at anthropic tests for consciousness

(This post is the culmination of my last week of posts on anthropics and conservation of expected evidence.)

In this post, I described how anthropic reasoning can apparently give you a way to update on theories of consciousness. This is already weird enough, but I want to make things a little weirder. I want to present an argument that in fact anthropic reasoning implies that we should be functionalists about consciousness.

But first, a brief recap (for more details see the post linked above):

[Screenshots from the original post, not reproduced here: the dice-killer setup in which round n contains 1 human and 10^(n−1) − 1 androids, and the resulting likelihoods — roughly 9/10 for observing snake eyes given functionalism, and roughly 1/10 given substrate dependence theory.]

Whenever this experiment is run, roughly 90% of experimental subjects observe snake eyes, and roughly 10% do not. What this means is that 90% of the people update in favor of functionalism (by a factor of 9), and only 10% of people update in favor of substrate dependence theory (also by a factor of 9).

Now suppose that we have a large population that starts out completely agnostic on the question of functionalism vs. substrate dependence. That is, the prior ratio for each individual is 1:

P(F) : P(S) = 1 : 1

Now imagine that we run arbitrarily many dice-killer experimental setups on the population. We would see an upwards drift in the average beliefs of the population towards functionalism. And in the limit of infinite experiments, we would see complete convergence towards functionalism as the correct theory of consciousness.

Now, the only remaining ingredient is what I’ve been going on about the past two days: if you can predict beforehand that a piece of evidence is going to make you on average more functionalist, then you should preemptively update in favor of functionalism.

What we end up with is the conclusion that considering the counterfactual infinity of experimental results we could receive, we should conclude with arbitrarily high confidence that functionalism is correct.

To be clear, the argument is the following:

  1. If we were to be members of a population that underwent arbitrarily many dice-killer trials, we would converge towards functionalism.
  2. Conservation of expected evidence: if you can predict beforehand which direction some observation would move you, then you should pre-emptively adjust your beliefs in that direction.
  3. Thus, we should preemptively converge towards functionalism.

Premise 1 follows from a basic application of anthropic reasoning. We could deny it, but doing so amounts to denying the self-sampling assumption and ensuring that you will lose in anthropic games.

Premise 2 follows from the axioms of probability theory. It is more or less the statement that you should update your beliefs with evidence, even if this evidence is counterfactual information about the possible results of future experiments.

(If this sounds unintuitive to you at all, consider the following thought experiment: We have two theories of cosmology, one in which 99% of people live in Region A and 1% in Region B, and the other in which 1% live in Region A and 99% in Region B. We now ask where we expect to find ourselves. If we expect to find ourselves in Region A, then we must have higher credence in the first theory than the second. And if we initially did not have this higher credence, then considering the counterfactual question “Where would I find myself if I were to look at which region I am in?” should cause us to update in favor of the first theory.)

Altogether, this argument looks bulletproof to me. And yet its conclusion seems very wrong.

Can we really conclude with arbitrarily high certainty that functionalism is correct by just going through this sort of armchair reasoning from possible experimental results that we will never do? Should we now be hardcore functionalists?

I’m not quite sure yet what the right way to think about this is. But here is one objection I’ve thought of.

We have only considered one possible version of the dice killer thought experiment (in which the experimenter starts off with 1 human, then chooses 1 human and 9 androids, then 1 human and 99 androids, and so on). In this version, observing snake eyes was evidence for functionalism over substrate dependence theory, which is what causes the population-wide drift towards functionalism.

We can ask, however, if we can construct a variant of the dice killer thought experiment in which snake eyes counts as evidence for substrate dependence theory over functionalism. If so, then we could construct an experimental setup that we can predict beforehand will end up with us converging with arbitrary certainty to substrate dependence theory!

Let’s see how this might be done. We’ll imagine the set of all variants on the thought experiment (that is, the set of all choices the dice killer could make about how many humans and androids to kidnap in each round).

Let h_n be the number of humans and a_n the number of androids kidnapped in round n, so that a variant is specified by the pair of sequences (h_1, h_2, …) and (a_1, a_2, …).

For ease of notation, we’ll abbreviate functionalism and substrate dependence theory as F and S respectively.


And we’ll also introduce a convenient notation for calculating the total number of humans and the total number of androids ever kidnapped by round N.

H_N = h_1 + h_2 + … + h_N,   A_N = a_1 + a_2 + … + a_N

Now, we want to calculate the probability of snake eyes given functionalism in this general setup, and compare it to the probability of snake eyes given substrate dependence theory. The first step will be to consider the probability of snake eyes if the experiment happens to end on the nth round, for some n. This is just the number of individuals in the last round divided by the total number of kidnapped individuals.

P(snake eyes | spree ends at round n, F) = (h_n + a_n) / (H_n + A_n)
P(snake eyes | spree ends at round n, S) = h_n / H_n

Now, we calculate the average probability of snake eyes (the average fraction of individuals in the last round).

P(snake eyes | F) = Σ_n (35/36)^(n−1) (1/36) · (h_n + a_n) / (H_n + A_n)
P(snake eyes | S) = Σ_n (35/36)^(n−1) (1/36) · h_n / H_n

The question is thus whether we can find a pair of sequences

(h_1, h_2, h_3, …) and (a_1, a_2, a_3, …)

such that the probability of snake eyes given S exceeds the probability of snake eyes given F:

Σ_n (35/36)^(n−1) (1/36) · h_n / H_n  >  Σ_n (35/36)^(n−1) (1/36) · (h_n + a_n) / (H_n + A_n)

It seems hard to imagine that there are no such pairs of sequences that satisfy this inequality, but thus far I haven’t been able to find an example. For now, I’ll leave it as an exercise for the reader!
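For concreteness, here is a sketch that numerically evaluates both likelihoods under my reconstruction of the formulas above (the names h_n, a_n and the truncation at 200 rounds are my own choices, not from the original screenshots). Plugging in the original variant (1 human per round, 10^(n−1) total individuals in round n) recovers the roughly 90%/10% split:

```python
SNAKE_EYES = 1 / 36

def snake_eyes_likelihoods(humans, androids, p=SNAKE_EYES):
    """P(observe snake eyes | F) and P(observe snake eyes | S) for a
    dice-killer variant given per-round kidnap counts. Under functionalism (F)
    every individual is an observer; under substrate dependence (S) only the
    humans are."""
    p_f = p_s = 0.0
    total_humans = total_all = 0
    for n, (h, a) in enumerate(zip(humans, androids)):
        total_humans += h
        total_all += h + a
        ends_here = (1 - p) ** n * p  # spree ends exactly at this round
        p_f += ends_here * (h + a) / total_all
        p_s += ends_here * h / total_humans
    return p_f, p_s

# Original variant: round n holds 1 human and 10^(n-1) - 1 androids.
N = 200  # truncation; the neglected tail has total weight (35/36)^200, ~0.4%
humans = [1] * N
androids = [10**n - 1 for n in range(N)]
p_f, p_s = snake_eyes_likelihoods(humans, androids)
print(round(p_f, 2), round(p_s, 2))  # roughly 0.9 and 0.1
```

Other candidate sequences can be plugged straight into the same function to hunt for a variant satisfying the inequality.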

If there are no such pairs of sequences, then it is tempting to take this as extremely strong evidence for functionalism. But I am concerned about this whole line of reasoning. What if there are a few such pairs of sequences? What if there are far more in which functionalism is favored than those in which substrate dependence is favored? What if there are an infinity of each?

While I buy each step of the argument, it seems wrong to say that the right thing to do is to consider the infinite set of all possible anthropic experiments you could do, and then somehow average over the results of each to determine the direction in which we should update our theories of consciousness. Indeed, I suspect that any such averaging procedure would be vulnerable to arbitrariness in the way that the experiments are framed, such that different framings give different results.

At this point, I’m pretty convinced that I’m making some fundamental mistake here, but I’m not sure exactly where this mistake is. Any help from readers would be greatly appreciated. 🙂

Two principles of Bayesian reasoning

Bayes’ rule is a pretty simple piece of mathematics, and it’s extraordinary to me the amount of deep insight that can be plumbed by looking closely at it and considering its implications.

Principle 1: The surprisingness of an observation is proportional to the amount of evidence it provides.

Evidence that you expect to observe is weak evidence, while evidence that is unexpected is strong evidence.

This follows directly from Bayes’ theorem:

P(H | E) = P(E | H) P(H) / P(E)

If E is very unexpected, then P(E) is very small. This puts an upwards pressure on the posterior probability, entailing a large belief update. If E is thoroughly unsurprising, then P(E) is near 1, which means that this upward pressure is not there.

A more precise way to say this is to talk about how surprising evidence is given a particular theory.

P(H | E) / P(H) = P(E | H) / P(E)

On the left is a term that (1) is large when E provides strong evidence for H, (2) is near zero when it provides strong evidence against H, and (3) is near 1 when it provides weak evidence regarding H.

On the right is a term that (1) is large if E is very unsurprising given H, (2) is near zero when E is very surprising given H, and (3) is near 1 when E is not made much more surprising or unsurprising by H.

What we get is that (1) E provides strong evidence for H when E is very unsurprising given H, (2) E provides strong evidence against H when it is very surprising given H, and (3) E provides weak evidence regarding H when it is not much more surprising or unsurprising given H.

This makes a lot of sense when you think through it. Theories that make strong and surprising predictions that turn out to be right, are given stronger evidential weight than theories that make weak and unsurprising predictions.
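To make Principle 1 concrete, here is a minimal sketch with made-up numbers; the likelihoods are chosen only so that each P(E) is consistent with the law of total probability:

```python
def posterior(prior, likelihood, p_evidence):
    """Bayes' theorem: P(H | E) = P(E | H) P(H) / P(E)."""
    return likelihood * prior / p_evidence

prior = 0.5

# Unsurprising evidence: E was already 90% expected. Weak update.
print(posterior(prior, likelihood=0.99, p_evidence=0.90))  # 0.55

# Surprising evidence: E was only 10% expected. Strong update.
print(posterior(prior, likelihood=0.19, p_evidence=0.10))  # 0.95
```

The same prior moves barely at all on expected evidence, and jumps dramatically on surprising evidence.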

Principle 2: Conservation of expected evidence

I stole the name of this principle from Eliezer Yudkowsky, who wrote about this here.

The idea here is that for any expectation you have of receiving evidence for a belief, you should have an equal and opposite expectation of receiving evidence against a belief. It cannot be the case that all possible observations support a theory. If some observations support a theory, then there must be some other observations that undermine it. And the precise amount that these observations undermine this theory balances the expected evidential support of the theory.

Proof of this:

E[change in credence] = [P(H | E) − P(H)] P(E) + [P(H | -E) − P(H)] P(-E)
= P(H | E) P(E) + P(H | -E) P(-E) − P(H) [P(E) + P(-E)]
= P(H) − P(H)
= 0

The first term is the expected change in credence in H after observing E, and the second is the expected change in credence in H after observing -E. Thus, the average expected change in credence is exactly zero.
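A quick numeric check with arbitrary made-up numbers: whatever values we pick for the conditional credences and P(E), the expected change in credence comes out to zero:

```python
# Made-up conditional credences and evidence probability.
p_h_given_e, p_h_given_not_e = 0.9, 0.3
p_e = 0.4
p_not_e = 1 - p_e

# The prior implied by the law of total probability.
p_h = p_h_given_e * p_e + p_h_given_not_e * p_not_e

# Expected change in credence: weighted sum of the two possible shifts.
expected_change = (p_h_given_e - p_h) * p_e + (p_h_given_not_e - p_h) * p_not_e
print(expected_change)  # 0.0 (up to floating-point error)
```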

Putting these together, we see that a strong expectation corresponds to weak evidence, and this strong expectation of weak evidence also corresponds to a weak expectation of strong evidence!

Explaining anthropic reasoning

I realize that I’ve been a little unclear in my last few posts. I presupposed a degree of familiarity with anthropic reasoning that most people don’t have. I want to remedy that by providing a short explanation of what anthropic reasoning is and why it is useful.

First of all, one thing that may be confusing is that the term ‘anthropic reasoning’ is used in multiple very different ways. In particular, its most common usage is probably in arguments about the existence of God, where it is sometimes presented as an argument against the evidential force of fine tuning. I have no interest in this, so please don’t take me to be using the term this way. My usage is identical with that of Nick Bostrom, who wrote a fantastic book about anthropic reasoning. You’ll see precisely what this usage entails shortly, but I just want to plant a flag now in case I use the word ‘anthropic’ in a way that you are unfamiliar with.

Good! Now, let’s start with a few thought experiments.

  1. Suppose that the universe consists of one enormous galaxy, divided into a central region and an outer region. The outer region is densely populated with intelligent life, containing many trillions of planetary civilizations at any given moment. The inner region is hostile to biology, and at any given time only has a few hundred planetary civilizations. It is impossible for life to develop beyond the outer region of the galaxy.

    Now, you are a member of a planetary civilization that knows all of this, but doesn’t know its location in the galaxy. You reason that it is:

    (a) As likely that you are in the central region as it is that you are in the outer region
    (b) More likely
    (c) Less likely

  2. Suppose that the universe consists of one galaxy that goes through life phases. In its early phase, life is very rare and the galaxy is typically populated by only a few hundred planetary civilizations. In its middle phase, life is plentiful and the galaxy is typically populated by billions of planetary civilizations. And in its final phase, which lasts for the rest of the history of the universe, it is impossible for life to evolve.

    You are born into a planetary civilization that knows all of this, but doesn’t know what life phase the galaxy is in. You reason that it is:

    (a) As likely that you are in the early phase as the middle phase
    (b) More likely
    (c) Less likely

  3. You are considering two competing theories of cosmology. In Cosmology X, 1% of life exists in Region A and 99% in Region B. In Cosmology Y, 99% of life is in Region A and 1% in Region B. You currently don’t know which region you are in, and have equal credence in Cosmology X and Cosmology Y.

    Now you perform an experiment that locates yourself in the universe. You find that you are in Region A. How should your beliefs change?

    (a) They should stay the same
    (b) Cosmology X becomes more likely than Cosmology Y
    (c) Cosmology Y becomes more likely than Cosmology X

If you answered (c) for all three, then congratulations, you’re already an expert anthropic reasoner!
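For question 3, the answer (c) can be checked with a two-line application of Bayes’ theorem (a sketch using the numbers from the setup):

```python
# Equal prior credence in the two cosmologies.
prior_x = prior_y = 0.5

# Chance of finding yourself in Region A under each cosmology, treating
# yourself as a random sample from all observers.
p_a_given_x, p_a_given_y = 0.01, 0.99

p_a = p_a_given_x * prior_x + p_a_given_y * prior_y
posterior_y = p_a_given_y * prior_y / p_a
print(posterior_y)  # 0.99: Cosmology Y becomes far more likely — answer (c)
```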

What we want to do is explain why (c) was the right answer in all three cases, and see if we can unearth any common principles. You might think that this is unnecessary; after all, aren’t we just using a standard application of Bayes’ theorem? Sort of, but there’s a little more going on here. Consider, for instance, the following argument:

1. Most people have property X,
2. Therefore, I probably have property X.

Ignoring the base rate fallacy here, there is an implicit assumption involved in the jump from 1 to 2. This assumption can be phrased as follows:

I should reason about myself as if I am randomly sampled from the set of all people.

A similar principle turns out to be implicit in the reasoning behind our answers to the three starting questions. For question 1, it was something like

I should reason about myself as if I am randomly sampled from the set of all intelligent organisms in the universe at this moment.

For 2, it might be

I should reason about myself as if I am randomly sampled from the set of all intelligent organisms in the history of the universe.

And for 3, it is pretty much the same as 1:

I should reason about myself as if I am randomly sampled from all intelligent organisms in the universe.

These various sampling assumptions really amount to the notion that we should reason about ourselves the same way we reason about anything else. If somebody hands us a marble from an urn that contains 99% black marbles, (and we have no other information) we should think this marble has a 99% chance of being black. If we learn that 99% of individuals like us exist in Region A rather than Region B (and we have no other information), then we should think that we have a 99% chance of being in Region A.

In general, we can assert the Self-Sampling Assumption (SSA):

SSA: In the absence of more information, I should reason about myself as if I am randomly sampled from the set of all individuals like me.

The phrase “individuals like me” is what gives this principle the versatility to handle all the various cases we’ve discussed so far. It’s slightly vague, but will do for now.

And now we have our first anthropic principle! We’ve seen how eminently reasonable this principle is in the way that it handles the cases we started with. But at the same time, accepting this basic principle pretty quickly leads to some unintuitive conclusions. For instance:

  1. It’s probably not the case that there are other intelligent civilizations that have populations many times larger than ours (for instance, galactic societies).
  2. It’s probably not the case that we exist in the first part of a long and glorious history of humanity in which we expand across space and populate the galaxy (this is called the Doomsday argument).
  3. On average, you are probably pretty average in most ways. (Though there might be a selection effect to be considered in who ends up regularly reading this blog.)

These are pretty dramatic conclusions for a little bit of armchair reasoning! Can it really be that we can assert the extreme improbability of a glorious future and the greater likelihood of doomsday from simply observing our birth order in the history of humanity? Can we really draw these types of conclusions about the probable distributions of intelligent life in our universe from simply looking at facts about the size of our species?

It is tempting to just deny that this reasoning is valid. But to do so is to reject the simple and fairly obvious-seeming principle that justified our initial conclusions. Perhaps we can find some way to accept (c) as the answer for the three questions we started with while still denying the three conclusions I’ve just listed, but it’s not at all obvious how.

Just to drive the point a little further, let’s look at (2) – the Doomsday argument – again. The argument is essentially this:

Consider two theories of human history. In Theory 1, humans have a brief flash of exponential growth and planetary domination, but then go extinct not much later. In this view, we (you and me) are living in a fairly typical point in the history of humanity, existing near its last few years when its population is greatest.

In Theory 2, humans continue to expand and expand, spreading civilization across the solar system and eventually the galaxy. In this view, the future of humanity is immense and glorious, and involves many trillions of humans spread across hundreds or thousands of planets for many hundreds of thousands of years.

We’d all like Theory 2 to be the right one. But when we consider our place in history, we must admit that it seems far less likely that we are living in the very tiny period of human history in which we still exist on one planet than that we are living at the height of human history, where most people live.

By analogy, imagine a bowl filled with numbered marbles. We have two theories about the number of marbles in the bowl. Theory 1 says that there are 10 marbles in the bowl. Theory 2 says that there are 10,000,000. Now we draw a marble and see that it is numbered 7. How should this update our credences in these two theories?

Well, on Theory 2, getting a 7 is one million times less likely than it is on Theory 1. So Theory 1 gets a massive evidential boost from the observation. In fact, if we consider the set of all possible theories of how many marbles there are in the jar, the greatest update goes to the theory that says that there are exactly 7 marbles. Theories that say any fewer than 7 are made impossible by the observation, and theories that say more than 7 are progressively less likely as the number goes up.
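The marble update is easy to verify directly (a sketch; the helper name `likelihood` is mine):

```python
def likelihood(num_marbles, drawn=7):
    """Chance of drawing this particular numbered marble from the bowl."""
    return 1 / num_marbles if drawn <= num_marbles else 0.0

# Posterior odds = likelihood ratio x prior odds (the priors are equal here).
odds_theory1 = likelihood(10) / likelihood(10_000_000)
print(odds_theory1)  # one million: Theory 1 is now a million times more likely
```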

This is exactly analogous to our birth order in the history of humanity. The self-sampling assumption says that given that you are a human, you should treat yourself as if you are randomly sampled from the set of all humans there will ever be. If you are, say, the one trillionth human, then the most likely theory is that there are not many more than a trillion humans that will ever exist. And theories that say there will be fewer than a trillion humans are ruled out definitively by the observation. Comparing the theory that says there will be a trillion trillion humans throughout history to the theory that says there will be a trillion humans throughout history, the first is a trillion times less likely!

In other words, applying the self-sampling assumption to your birth order in the history of humanity, we update in favor of a shortly upcoming doomsday. To be clear, this is not the same as saying that doomsday soon is inevitable and that all other sources of evidence for doomsday or not-doomsday are irrelevant. This is just another piece of evidence to be added to the set of all evidence we have when drawing inferences about the future of humanity, albeit a very powerful one.

Okay, great! So far we’ve just waded into anthropic reasoning. The self-sampling assumption is just one of a few anthropic principles that Nick Bostrom discusses, and there are many other mind boggling implications of this style of reasoning. But hopefully I have whetted your appetite for more, as well as given you a sense that this style of reasoning is both nontrivial to refute and deeply significant to our reasoning about our circumstance.

Predicting changes in belief

One interesting fact about Bayesianism is that you should never be able to predict beforehand how a piece of evidence will change your credences.

For example, sometimes I think that the more I study economics, the more I will become impressed by the power of free markets, and that therefore I will become more of a capitalist. Another example is that I often think that over time I will become more and more moderate in many ways (this is partially just because of induction on the direction of my change – it seems like when I’ve held extreme beliefs in the past, these beliefs tend to have become more moderate over time).

Now, in these two cases I might be right as a matter of psychology. It might be the case that if I study economics, I’ll be more likely to end up a supporter of capitalism than if I don’t. It might be the case that as time passes, I’ll become increasingly moderate until I die out of boredom with my infinitely bland beliefs.

But even if I’m right about these things as a matter of psychology, I’m wrong about them as a matter of probability theory. It should not be the case that I can predict beforehand how my beliefs will change, for if it were, then why wait to change them? If I am really convinced that studying economics will make me a hardcore capitalist, then I should update in favor of capitalism in advance.

I think that this is a pretty common human mistake to make. Part of an explanation for this might be based off of the argumentative theory of reason – the idea that human rationality evolved as a method of winning arguments, not necessarily being well-calibrated in our pursuit of truth. If this is right, then it would make sense that we would only want to hold strong beliefs when we can confidently display strong evidence for them. It’s not that easy to convincingly argue for your position when part of our justification for it is something like “Well I don’t have the evidence yet, but I’ve got a pretty strong hunch that it’ll come out in this direction.”

Another factor might be the way that personality changes influence belief changes. It might be that when we say “I will become more moderate in beliefs as I age”, we’re really saying something like “My personality will become less contrarian as I age.” There’s still some type of mistake here, but it has to do with an overly strong influence of personality on beliefs, not the epistemic principle in question here.

Suppose that you have in front of you a coin with some unknown bias X. Your beliefs about the bias of the coin are captured in a probability distribution over possible values of X, P(X). Now you flip the coin and observe whether it lands heads or tails. If it lands H, your new state of belief is P(X | H). If it lands T, your new state of belief is P(X | T). So before you observe the coin flip, your new expected distribution over the possible biases is just the weighted sum of these two:

Posterior is P(X | H) with probability P(H)
Posterior is P(X | T) with probability P(T)

Thus, expected posterior is P(X | H) P(H) + P(X | T) P(T)
= P(X)

Our expected posterior distribution is exactly our prior distribution! In other words, you can’t anticipate any particular change in your prior distribution. This makes some intuitive sense… if you knew beforehand that your prior would change in a particular direction, then you should have already changed in that direction!
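This is easy to verify numerically. The following sketch discretizes an arbitrary prior over the bias and checks that the probability-weighted average of the two possible posteriors reproduces the prior exactly:

```python
# Discretized prior over the coin's bias X: any normalized prior works;
# this one is arbitrary and deliberately lopsided.
xs = [i / 10 for i in range(11)]
weights = [i + 1 for i in range(11)]
total = sum(weights)
prior = [w / total for w in weights]

p_heads = sum(x * p for x, p in zip(xs, prior))
p_tails = 1 - p_heads

# Posterior after each outcome: multiply by the likelihood, renormalize.
post_h = [x * p / p_heads for x, p in zip(xs, prior)]
post_t = [(1 - x) * p / p_tails for x, p in zip(xs, prior)]

# Probability-weighted average of the two possible posteriors.
expected_post = [p_heads * h + p_tails * t for h, t in zip(post_h, post_t)]
max_gap = max(abs(e - p) for e, p in zip(expected_post, prior))
print(max_gap)  # ~0: the expected posterior is exactly the prior
```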

In general: Take any hypothesis H and some piece of relevant evidence that will either turn out E or -E. Suppose you have some prior credence P(H) in this hypothesis before observing either E or -E. Your expected final credence is just:

P(H | E) P(E) + P(H | -E) P(-E)

Which of course is just another way of writing the prior credence P(H).

P(H) = P(H | E) P(E) + P(H | -E) P(-E)

Eliezer Yudkowsky has named this idea “conservation of expected evidence” – for any possible expectation of evidence you might receive, you should have an equal and opposite expectation of evidence in the other direction. It should never be the case that a hypothesis is confirmed no matter what evidence you receive. If E counts as evidence for H, then -E should count against it. And if you have a strong expectation of weak evidence E, then you should have a weak expectation of strong evidence -E. (If you strongly expect to see H, then it is weak evidence, and correspondingly you should have a weak expectation of seeing T, which would be strong evidence.)

This is a pretty powerful idea, and I find myself applying it as a thinking tool pretty regularly. (And Yudkowsky’s writings on LessWrong are chock-full of this type of probability-theoretic wisdom; I highly recommend them.)

Today I was thinking about how we could determine, if not the average change in beliefs, the average amount that beliefs change. That is, while you can’t say beforehand that you will become more confident that a hypothesis is true, you can still say something about how much you expect your confidence to change as an absolute value.

Let’s stick with our simple coin-tossing example for illustration. Our prior distribution over possible biases is P(X). This distribution has some characteristic mean µ and standard deviation σ. We can also describe the mean of each possible posterior distribution:

µ_H = E[X | H],   µ_T = E[X | T]

Now we can look at how far away each of these updated means is from the original mean:

∆µ_H = µ_H − µ,   ∆µ_T = µ_T − µ

We want to average these differences, weighted by how likely we think we are to observe them. But if we did this, we’d just get zero. Why? Conservation of expected evidence rearing its head again! The average of the differences in means is the average amount that you think your expectation of heads will move, and this must be zero.

What we want is a quantity that captures the absolute distance between the new means and the original mean. The standard way of doing this is the following:

∆µ = √( P(H) (µ_H − µ)² + P(T) (µ_T − µ)² )

This gives us:

∆µ = σ² / √( µ (1 − µ) )

This gives us a measure of how strong of a belief update we should expect to receive. I haven’t heard very much about this quantity (the square root of the weighted sum of the squares of the changes in means for all possible evidential updates), but it seems pretty important.

Notice also that this scales with the variance on our prior distribution. This makes a lot of sense, because a small variance on your prior implies a high degree of confidence, which entails a weak belief update. Similarly, a large variance on your prior implies a weakly held belief, and thus a strong belief update.

Let’s see what this measure gives for some simple distributions. First, the uniform distribution:

P(X) = 1, so µ = 1/2, σ² = 1/12, and ∆µ = (1/12) / √(1/4) = 1/6

In this case, the predicted change in mean is exactly right! If we get H, our new mean becomes 2/3 (a change of +1/6) and if we get T, our new mean becomes 1/3 (a change of -1/6). Either way, the mean value of our distribution shifts by 1/6, just as our calculation predicted!

Let’s imagine you start maximally uncertain about the bias (the uniform prior) and then observe a H. Now with this new distribution, what is your expected magnitude of belief change?

P(X | H) = 2X, so µ = 2/3, σ² = 1/18, and ∆µ = (1/18) / √(2/9) = 1/(6√2) ≈ 0.118

Notice that this magnitude of belief change is smaller than for the original distribution. Once again, this makes a lot of sense: after getting more information, your beliefs become more ossified and less mutable.

In general, we can calculate ∆µ for the distribution that results from observing n heads and m tails, starting with a flat prior:

∆µ = √((n+1)(m+1)) / [(n+m+2)(n+m+3)]

Asymptotically as n and m go to infinity (as you flip the coin arbitrarily many times), this relation becomes

∆µ ≈ √(nm) / (n+m)²

Which clearly goes to zero, and pretty quickly as well. This amounts to the statement that as you get lots and lots of information, each subsequent piece of information matters comparatively less.
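The quantities in this section can be checked numerically without any closed-form algebra. The sketch below discretizes the posterior after n heads and m tails (starting from a flat prior) and computes the expected magnitude of the shift in the mean; the function name and grid size are my own choices:

```python
def delta_mu(n_heads, n_tails, grid=10_001):
    """Expected (root-mean-square) shift in the mean of the belief
    distribution over a coin's bias after one more flip, starting from a
    flat prior updated on n_heads heads and n_tails tails."""
    xs = [i / (grid - 1) for i in range(grid)]
    w = [x**n_heads * (1 - x) ** n_tails for x in xs]
    total = sum(w)
    w = [v / total for v in w]

    mu = sum(x * p for x, p in zip(xs, w))
    p_h = mu  # chance the next flip lands heads
    mu_h = sum(x * x * p for x, p in zip(xs, w)) / p_h          # mean after H
    mu_t = sum(x * (1 - x) * p for x, p in zip(xs, w)) / (1 - p_h)  # after T
    return (p_h * (mu_h - mu) ** 2 + (1 - p_h) * (mu_t - mu) ** 2) ** 0.5

print(delta_mu(0, 0))    # ~1/6 for the flat prior
print(delta_mu(1, 0))    # ~0.118: smaller after one observation
print(delta_mu(50, 50))  # smaller still after a hundred flips
```

The numbers shrink as data accumulates, matching the asymptotic decay above.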

In favor of anthropic reasoning

Often the ideas in my recent posts regarding anthropic reasoning and the Dice Killer thought experiment are met with skepticism. The sense is that something about the reasoning process being employed is rotten, and that the simple intuitive answers are right after all.

While I understand the impulse to challenge these unusual ideas, I think they have more going for them than might be obvious. In this post, I’ll present a basic argument for why we should reason anthropically: because doing so allows you to¬†win!

We can see this in the basic Dice Killer scenario. (I won’t rehash the details of the thought experiment here, but you can find them at this link).

The non-anthropic reasoner saw a 50% chance of death if they tried escaping and only a 3% chance of death if they didn’t. The anthropic reasoner saw a 50% chance of dying if they tried escaping and a 90% chance of death if not. Naturally, the anthropic reasoner takes the escape route, and the non-anthropic reasoner does not. Now, how do these strategies compare?

Suppose that all of those that get kidnapped are non-anthropic reasoners. Then none of them try escaping, so about 90% of them end up dying in the last round. What if they are all anthropic reasoners? Then they all try escaping, so only 50% of them die.

This is clearly a HUGE win for anthropic reasoning. Anthropic reasoners cut their chance of dying by 40 percentage points! A simple explanation for this is that they’re simply taking advantage of all the information available to them, including indexical information about their state of being.

We can also construct variants of this thought experiment in which non-anthropic reasoners end up taking bets that lose them money on average, while anthropic reasoners always avoid such losing bets. These thought experiments run on the same basic principle as the Dice Killer scenario – sometimes you can construct deals that look net positive until you take the anthropic perspective, at which point they turn net negative.

In other words, if somebody refuses to use anthropic reasoning, you can turn them into a money pump, taking more and more of their money until they change their mind! This is a pragmatic argument for why even if you find this form of reasoning to be unusual and unintuitive, you should take it seriously.

Getting empirical evidence for different theories of consciousness

Previously, I described a thought experiment in which a madman kidnaps a person, then determines whether or not to kill them by rolling a pair of dice. If they both land 1 (snake eyes), then the madman kills the person. Otherwise, the madman lets them go and kidnaps ten new people. He rolls the dice again and if he gets snake eyes, kills all ten. Otherwise he lets them go and finds 100 new people. Et cetera until he eventually gets snake eyes, at which point he kills all the currently kidnapped people and stops his spree.

If you find that you have been kidnapped, then your chance of survival depends upon the dice landing snake eyes, which happens with probability 1/36. But we can also calculate the average fraction of people kidnapped that end up dying. We get the following:

E[fraction of the kidnapped who die] = Σ_{n=1}^∞ (35/36)^(n−1) · (1/36) · 10^(n−1) / (1 + 10 + … + 10^(n−1)) ≈ 90%

We already talked about how this is unusually high compared to the 1/36 chance of the dice landing snake eyes, and how to make sense of the difference here.
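This expected fraction can be checked numerically by summing over the round n at which the dice finally land snake eyes. A minimal Python sketch (truncating the series once the terms are negligible):

```python
# Probability the spree ends at round n: (35/36)^(n-1) * (1/36).
# Fraction of all kidnapped who are in the final round n:
#   10^(n-1) / (1 + 10 + ... + 10^(n-1)) = 9*10^(n-1) / (10^n - 1)
#   = 9 / (10 - 10^(1-n)).
expected_fraction = sum(
    (35 / 36) ** (n - 1) * (1 / 36) * 9 / (10 - 10.0 ** (1 - n))
    for n in range(1, 3000)  # terms beyond this are vanishingly small
)
print(round(expected_fraction, 3))  # 0.903
```

So the "roughly 90%" figure is, more precisely, about 90.3%: almost every kidnapped person belongs to the final, fatal round, because each round is ten times larger than the last.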

In this post, we’ll talk about a much stranger implication. To get there, we’ll start by considering a variant of the initial thought experiment. This will be a little weird, but there’s a nice payout at the end, so stick with it.

In our variant, our madman kidnaps not only people, but also rocks. (The kidnapper does not “rock”; he kidnaps actual pieces of stone.) He starts out by kidnapping a person, then rolls his dice. Just like before, if he gets snake eyes, he kills the person. And if not, he frees the person and kidnaps a new group. This new group consists of 1 person and 9 rocks. Now if the dice come up snake eyes, the person is killed and the 9 rocks pulverized. And if not, they are all released, and 1 new person and 99 rocks are gathered.

To be clear, the pattern is:

First Round: 1 person
Second Round: 1 person, 9 rocks
Third Round: 1 person, 99 rocks
Fourth Round: 1 person, 999 rocks
and so on…

Now, we can run the same sort of anthropic calculation as before:

E[fraction of the kidnapped who die] = Σ_{n=1}^∞ (35/36)^(n−1) · (1/36) · (1/n) = ln(36)/35 ≈ 10%

Evidently, this time you have roughly a 10% chance of dying if you find yourself kidnapped! (Notice that this is still worse than 1/36, though a lot better than 90%).
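The rock-variant sum is also easy to check numerically. Only one human is kidnapped per round, so if the spree ends at round n, 1 person dies out of the n humans kidnapped in total – a fraction of 1/n:

```python
import math

# P(spree ends at round n) * (fraction of humans who die, i.e. 1/n)
expected_fraction = sum(
    (35 / 36) ** (n - 1) * (1 / 36) * (1 / n) for n in range(1, 3000)
)
print(round(expected_fraction, 3))  # 0.102

# The series has a closed form: ln(36)/35 ≈ 0.1024.
assert abs(expected_fraction - math.log(36) / 35) < 1e-9
```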

Okay, so we have two scenarios, one in which 90% of those kidnapped die and the other in which 10% of those kidnapped die.

Now let’s make a new variant on our thought experiment, and set it in a fictional universe of my creation.

In this world there exist androids – robotic intelligences that behave, look, and feel like any ordinary human. They are so well integrated into society that most people don’t actually know if they are a biological person or an android. The primary distinction between the two groups is, of course, that one has a brain made of silicon transistors and the other has a brain made of carbon-based neurons.

There is a question of considerable philosophical and practical importance in this world, which is: Are androids conscious just like human beings? This question has historically been a source of great strife in this world. On the one hand, some biological humans argue that the substrate is essential to the existence of consciousness and that therefore non-carbon-based life forms can never be conscious, no matter how well they emulate conscious beings. This thesis is known as the substrate-dependence view.

On the other hand, many argue that we have no good reason to dismiss the androids’ potential consciousness. After all, they are completely indistinguishable from biological humans, and have the same capacity to introspect and report on their feelings and experiences. Some android philosophers even have heated debates about consciousness. Plus, the internal organization of androids is pretty much identical to that of biological humans, indicating that the same sort of computation is going on in both organisms. It is argued that clearly consciousness arises from the patterns of computation in a system, and that on that basis androids are definitely conscious. The people that support this position are called functionalists (and, no great surprise, all androids that are aware that they are androids are functionalists).

The fundamental difference between the two stances can be summarized easily: Substrate-dependence theorists think that to be conscious, you must be a carbon-based life form operating on cells. Functionalists think that to be conscious, you must be running a particular type of computation, regardless of what material that computation is running on.

In this world, the debate runs on endlessly. The two sides marshal philosophical arguments to support their positions and hurl them at each other with little to no effect. Androids insist vehemently that they are as conscious as anybody else, functionalists say “See?? Look at how obviously conscious they are,” and substrate-dependence theorists say “But this is exactly what you’d expect to hear from an unconscious replica of a human being! Just because you built a machine that can cleverly perform the actions of conscious beings does not mean that it really is conscious”.

It is soon argued by some that this debate can never be settled. This camp, known as the mysterians, says that there is something fundamentally special and intrinsically mysterious about the phenomenon that bars us from ever being able to answer these types of questions, or even gather evidence bearing on them. They point to the subjective nature of experience and the fact that you can only really know whether somebody is conscious by entering their head, which is impossible. The mysterians’ arguments are convincing to many, and their following grows stronger by the day as the debates between the other parties appear ever more futile.

With this heated debate in the backdrop, we can now introduce a new variant on the dice killer setup.

The killer starts like before by kidnapping a single human (not an android). If he rolls snake eyes, this person is killed. If not, he releases them and kidnaps one new human and nine androids. (Sounding familiar?)  If he rolls snake eyes, all ten are killed, and if not, one new person and 99 new androids are kidnapped. Etc. Thus we have:

First Round: 1 person
Second Round: 1 person, 9 androids
Third Round: 1 person, 99 androids
Fourth Round: 1 person, 999 androids
and so on…

You live in this society, and are one of its many citizens that doesn’t know if they are an android or a biological human. You find yourself kidnapped by the killer. How worried should you be about your survival?

If you are a substrate dependence theorist, you will see this case as similar to the variant with rocks. After all, you know that you are conscious. So you naturally conclude that you can’t be an android. This means that there is only one possible person that you could be in each round. So the calculation runs exactly as it did before with the rocks, ending with a 10% chance of death.

If you are a functionalist, you will see this case as similar to the case we started with. You think that androids are conscious, so you don’t rule out any of the possibilities for who you might be. Thus you calculate as we did initially, ending with a 90% chance of death.

Here we pause to notice something very important! Our two different theories of consciousness have made different empirically verifiable predictions about the world! And not only are they easily testable, but they are significantly different. The amount of evidence provided by the observation of snake eyes has to do with the likelihood ratio P(snake eyes | functionalism) / P(snake eyes | substrate dependence). This ratio is roughly 90% / 10% = 9, which means that observing snake eyes tilts the balance by a factor of 9 in favor of functionalism.

More precisely, we use the likelihood ratio to update our prior credences in functionalism and substrate dependence to our posterior credences. That is,

P(functionalism | snake eyes) / P(substrate dependence | snake eyes) = [P(snake eyes | functionalism) / P(snake eyes | substrate dependence)] · [P(functionalism) / P(substrate dependence)] ≈ 9 · (prior odds)

This is a significant update. It can be made even more significant by altering the details of the setup. But the most important point is that there is an update at all. If what I’ve argued is correct, then the mysterians are demonstrably wrong. We can construct setups that test theories of consciousness, and we know just how!
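Here is a minimal sketch of that update, assuming for illustration an agnostic 1:1 prior and the approximate likelihoods derived above (P(snake eyes | functionalism) ≈ 0.9, P(snake eyes | substrate dependence) ≈ 0.1):

```python
# Bayesian odds update: posterior odds = likelihood ratio * prior odds.
p_snake_given_functionalism = 0.9  # approximate, from the android calculation
p_snake_given_substrate = 0.1      # approximate, from the rock-style calculation
prior_odds = 1.0                   # completely agnostic: 1:1

likelihood_ratio = p_snake_given_functionalism / p_snake_given_substrate  # ~9
posterior_odds = likelihood_ratio * prior_odds                            # ~9
posterior_functionalism = posterior_odds / (1 + posterior_odds)

print(round(posterior_functionalism, 2))  # 0.9
```

So an agnostic who finds themselves kidnapped and then sees snake eyes ends up about 90% confident in functionalism.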

(There’s an interesting caveat here, which is that this is only evidence for the individual that found themselves to be kidnapped. If an experimenter was watching from the outside and saw the dice land snake eyes, they would get no evidence for functionalism over substrate dependence. This relates to the anthropic nature of the evidence; it is only evidence for the individuals for whom the indexical claims “I have been kidnapped” and “I am conscious” apply.)

So there we have it. We’ve constructed an experimental setup that allows us to test claims of consciousness that are typically agreed to be beyond empirical verification. Granted, this is a pretty destructive setup and would be monstrously unethical to actually enact. But the essential features of the setup can be preserved without the carnage. Rather than snake eyes resulting in the killer murdering everybody kept captive, it could just result in the experimenter saying “Huzzah!” and ending the experiment. Then the key empirical evidence for somebody that has been captured would be whether or not the experimenter says “Huzzah!” If so, then functionalism becomes nine times more likely than it was before relative to substrate dependence.

This would be a perfectly good experiment that we could easily run, if only we could start producing some androids indistinguishable from humans. So let’s get to it, AI researchers!