A closer look at anthropic tests for consciousness

(This post is the culmination of my last week of posts on anthropics and conservation of expected evidence.)

In a previous post, I described how anthropic reasoning can apparently give you a way to update on theories of consciousness. This is already weird enough, but I want to make things a little weirder. I want to present an argument that anthropic reasoning in fact implies that we should be functionalists about consciousness.

But first, a brief recap (for more details see the post linked above):

The setup, in brief: a dice killer kidnaps a batch of victims each round and rolls a pair of dice. On snake eyes, he kills the current batch and the experiment ends; otherwise he moves on to a new batch ten times as large. The first batch is 1 human; the second, 1 human and 9 androids; the third, 1 human and 99 androids; and so on. You wake up as one of the captives. If functionalism is true, the androids are conscious, and anthropic reasoning says to treat yourself as a random member of all the captives; if substrate dependence theory is true, only the humans are conscious. Working out the numbers:

P(snake eyes | F) ≈ 90%
P(snake eyes | S) ≈ 10%

Thus…

\frac{P(F \mid \text{snake eyes})}{P(S \mid \text{snake eyes})} = \frac{P(\text{snake eyes} \mid F)}{P(\text{snake eyes} \mid S)} \cdot \frac{P(F)}{P(S)} \approx 9 \cdot \frac{P(F)}{P(S)}

\frac{P(F \mid \text{no snake eyes})}{P(S \mid \text{no snake eyes})} = \frac{P(\text{no snake eyes} \mid F)}{P(\text{no snake eyes} \mid S)} \cdot \frac{P(F)}{P(S)} \approx \frac{1}{9} \cdot \frac{P(F)}{P(S)}

Whenever this experiment is run, roughly 90% of experimental subjects observe snake eyes, and roughly 10% do not. This means that 90% of the people update in favor of functionalism (by a factor of 9), and only 10% update in favor of substrate dependence theory (also by a factor of 9).
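The update itself is just Bayes' theorem in odds form. A minimal sketch (the 0.9 and 0.1 likelihoods come from the setup above; the function name is my own illustrative choice):

```python
def posterior_odds(prior_odds, likelihood_f, likelihood_s):
    """Odds-form Bayes: posterior odds = prior odds x likelihood ratio."""
    return prior_odds * (likelihood_f / likelihood_s)

# An agnostic captive (1:1 odds) who observes snake eyes:
print(posterior_odds(1.0, 0.9, 0.1))  # roughly 9: a 9x update toward F
# One who observes no snake eyes:
print(posterior_odds(1.0, 0.1, 0.9))  # roughly 1/9: a 9x update toward S
```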

Now suppose that we have a large population that starts out completely agnostic on the question of functionalism vs. substrate dependence. That is, the prior ratio for each individual is 1:

\frac{P(F)}{P(S)} = 1

Now imagine that we run arbitrarily many dice-killer experimental setups on the population. We would see an upwards drift in the average beliefs of the population towards functionalism. And in the limit of infinite experiments, we would see complete convergence towards functionalism as the correct theory of consciousness.
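The drift can be made quantitative: the expected change in an individual's log-odds per experiment is positive, which is what drives the convergence. A quick check, using the 90/10 split and factor-of-9 updates from above:

```python
import math

# Per experiment: 90% of subjects multiply their odds by 9,
# 10% divide their odds by 9.
expected_drift = 0.9 * math.log(9) + 0.1 * math.log(1 / 9)

# Net effect: log-odds climb by 0.8 * ln(9) per experiment on average,
# so repeated experiments push the population toward functionalism.
print(expected_drift)
```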

Now, the only remaining ingredient is what I’ve been going on about the past two days: if you can predict beforehand that a piece of evidence is going to make you on average more functionalist, then you should preemptively update in favor of functionalism.

What we end up with is the conclusion that considering the counterfactual infinity of experimental results we could receive, we should conclude with arbitrarily high confidence that functionalism is correct.

To be clear, the argument is the following:

  1. If we were to be members of a population that underwent arbitrarily many dice-killer trials, we would converge towards functionalism.
  2. Conservation of expected evidence: if you can predict beforehand which direction some observation would move you, then you should preemptively adjust your beliefs in that direction.
  3. Thus, we should preemptively converge towards functionalism.

Premise 1 follows from a basic application of anthropic reasoning. We could deny it, but doing so amounts to denying the self-sampling assumption and ensuring that you will lose in anthropic games.

Premise 2 follows from the axioms of probability theory. It is more or less the statement that you should update your beliefs with evidence, even if this evidence is counterfactual information about the possible results of future experiments.

(If this sounds unintuitive to you at all, consider the following thought experiment: We have two theories of cosmology, one in which 99% of people live in Region A and 1% in Region B, and the other in which 1% live in Region A and 99% in Region B. We now ask where we expect to find ourselves. If we expect to find ourselves in Region A, then we must have higher credence in the first theory than the second. And if we initially did not have this higher credence, then considering the counterfactual question “Where would I find myself if I were to look at which region I am in?” should cause us to update in favor of the first theory.)
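For contrast, here is what conservation of expected evidence looks like in the ordinary, non-anthropic setting: whatever your prior, the expected posterior equals the prior, so no predictable net update is possible. A sketch using the likelihoods from the dice-killer setup (the prior of 1/2 is an arbitrary choice for illustration):

```python
prior = 0.5                      # P(F); arbitrary starting credence
p_snake_f, p_snake_s = 0.9, 0.1  # P(snake eyes | F), P(snake eyes | S)

# Marginal probability of observing snake eyes:
p_snake = prior * p_snake_f + (1 - prior) * p_snake_s

# Posterior after each possible observation (Bayes' rule):
post_snake = prior * p_snake_f / p_snake
post_no_snake = prior * (1 - p_snake_f) / (1 - p_snake)

# Expected posterior equals the prior: the two possible updates cancel.
expected_posterior = p_snake * post_snake + (1 - p_snake) * post_no_snake
print(expected_posterior)  # equals the prior (0.5, up to float rounding)
```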

Altogether, this argument looks bulletproof to me. And yet its conclusion seems very wrong.

Can we really conclude with arbitrarily high certainty that functionalism is correct by just going through this sort of armchair reasoning from possible experimental results that we will never do? Should we now be hardcore functionalists?

I’m not quite sure yet what the right way to think about this is. But here is one objection I’ve thought of.

We have only considered one possible version of the dice killer thought experiment (in which the experimenter starts off with 1 human, then chooses 1 human and 9 androids, then 1 human and 99 androids, and so on). In this version, observing snake eyes was evidence for functionalism over substrate dependence theory, which is what causes the population-wide drift towards functionalism.

We can ask, however, if we can construct a variant of the dice killer thought experiment in which snake eyes counts as evidence for substrate dependence theory over functionalism. If so, then we could construct an experimental setup that we can predict beforehand will end up with us converging with arbitrary certainty to substrate dependence theory!

Let’s see how this might be done. We’ll imagine the set of all variants of the thought experiment (that is, the set of all choices the dice killer could make about how many humans and androids to kidnap in each round).

A variant is specified by two sequences: the number of humans and the number of androids kidnapped in each round:

(h_1, h_2, h_3, \ldots), \quad (a_1, a_2, a_3, \ldots)

For ease of notation, we’ll abbreviate functionalism and substrate dependence theory as F and S respectively.

Under F, the conscious individuals in round n are all h_n + a_n of the captives; under S, they are only the h_n humans.

And we’ll also introduce a convenient notation for the total number of humans and the total number of androids ever kidnapped by round N.

H_N = \sum_{n=1}^{N} h_n \qquad\qquad A_N = \sum_{n=1}^{N} a_n

Now, we want to calculate the probability of snake eyes given functionalism in this general setup, and compare it to the probability of snake eyes given substrate dependence theory. The first step will be to consider the probability of snake eyes if the experiment happens to end on the nth round, for some n. This is just the number of conscious individuals in the last round divided by the total number of conscious individuals kidnapped, where who counts as conscious depends on the theory.

P(\text{snake eyes} \mid F, \text{end at round } n) = \frac{h_n + a_n}{H_n + A_n} \qquad\qquad P(\text{snake eyes} \mid S, \text{end at round } n) = \frac{h_n}{H_n}

Now, we calculate the average probability of snake eyes (the average fraction of conscious individuals in the last round), weighting each possible final round n by the probability that the experiment ends there.

P(\text{snake eyes} \mid F) = \sum_{n=1}^{\infty} \frac{1}{36} \left( \frac{35}{36} \right)^{n-1} \frac{h_n + a_n}{H_n + A_n} \qquad\qquad P(\text{snake eyes} \mid S) = \sum_{n=1}^{\infty} \frac{1}{36} \left( \frac{35}{36} \right)^{n-1} \frac{h_n}{H_n}

The question is thus whether we can find a pair of sequences

(h_n)_{n=1}^{\infty}, \quad (a_n)_{n=1}^{\infty}

such that snake eyes counts as evidence for S over F, that is, such that

\sum_{n=1}^{\infty} \frac{1}{36} \left( \frac{35}{36} \right)^{n-1} \frac{h_n}{H_n} > \sum_{n=1}^{\infty} \frac{1}{36} \left( \frac{35}{36} \right)^{n-1} \frac{h_n + a_n}{H_n + A_n}

It seems hard to imagine that no pair of sequences satisfies this inequality, but so far I haven’t been able to find an example. For now, I’ll leave it as an exercise for the reader!

If there are no such pairs of sequences, then it is tempting to take this as extremely strong evidence for functionalism. But I am concerned about this whole line of reasoning. What if there are a few such pairs of sequences? What if there are far more in which functionalism is favored than those in which substrate dependence is favored? What if there are an infinity of each?

While I buy each step of the argument, it seems wrong to say that the right thing to do is to consider the infinite set of all possible anthropic experiments you could do, and then somehow average over the results of each to determine the direction in which we should update our theories of consciousness. Indeed, I suspect that any such averaging procedure would be vulnerable to arbitrariness in the way that the experiments are framed, such that different framings give different results.

At this point, I’m pretty convinced that I’m making some fundamental mistake here, but I’m not sure exactly where this mistake is. Any help from readers would be greatly appreciated. 🙂

2 thoughts on “A closer look at anthropic tests for consciousness”

  1. This is really interesting! I think I might see a problem in this reasoning that relates to slipping between talking about conservation of expected evidence for a conscious individual and for the whole population. While most of the population always ends up increasing their “credence” in functionalism, whether most of the conscious beings increase their credence in functionalism depends on whether functionalism or substrate dependence is true. So if you know you’re conscious, then it seems like you still have no reason to update beforehand. However, there definitely still is a layer of weirdness here, because even if the androids aren’t conscious they would still have the “belief” that they are (and as a result will always update towards functionalism regardless of whether it’s true). I think this shows the strangeness of substrate dependence, because it makes whether you have the belief that you are conscious independent of whether you are conscious. If you don’t think you’re justified in believing that you’re conscious because you would still have that “belief” either way (which is a pretty crazy situation to be in), then this seems like it would eliminate any difference between what you would expect to observe under functionalism and substrate dependence in this thought experiment.
    I’m not sure what to make of the whole “you’ll believe you’re conscious even if you’re not” thing that comes along with substrate dependence but it seems like regardless of whether you think you are justified in believing that you’re conscious you still would be keeping in line with conservation of expected evidence by not updating beforehand.

    1. Amazing, I think this is the solution!!

      So if substrate dependence is correct, only about 10% of all conscious captives see snake eyes, and they update by a factor of 9 in favor of functionalism; the other 90% of conscious captives update by a factor of 9 in favor of substrate dependence. On repeated trials, then, the population of conscious beings converges to substrate dependence.

      If functionalism is correct, then 90% of conscious captives update in favor of functionalism and 10% in favor of substrate dependence. So the set of conscious beings converges to functionalism.

      Also agreed that the “belief in your consciousness is independent of the fact of your consciousness” is super weird.
