Previously, I described a thought experiment in which a madman kidnaps a person, then determines whether or not to kill them by rolling a pair of dice. If they both land 1 (snake eyes), then the madman kills the person. Otherwise, the madman lets them go and kidnaps ten new people. He rolls the dice again and if he gets snake eyes, kills all ten. Otherwise he lets them go and finds 100 new people. Et cetera until he eventually gets snake eyes, at which point he kills all the currently kidnapped people and stops his spree.
If you find that you have been kidnapped, then whether you survive depends on whether the dice land snake eyes, which happens with probability 1/36. But we can also calculate the average fraction of kidnapped people that end up dying, and it comes out to roughly 90%.
We already talked about how this is unusually high compared to the 1/36 chance of the dice landing snake eyes, and how to make sense of the difference here.
In this post, we’ll talk about a much stranger implication. To get there, we’ll start by considering a variant of the initial thought experiment. This will be a little weird, but there’s a nice payoff at the end, so stick with it.
In our variant, our madman kidnaps not only people, but also rocks (yes, literal pieces of stone). He starts out by kidnapping a person, then rolls his dice. Just like before, if he gets snake eyes, he kills the person. And if not, he frees the person and kidnaps a new group. This new group consists of 1 person and 9 rocks. Now if the dice come up snake eyes, the person is killed and the 9 rocks pulverized. And if not, they are all released, and 1 new person and 99 rocks are gathered.
To be clear, the pattern is:
First Round: 1 person
Second Round: 1 person, 9 rocks
Third Round: 1 person, 99 rocks
Fourth Round: 1 person, 999 rocks
and so on…
Now we can run the same sort of anthropic calculation as before, this time counting only the people: since you know that you are not a rock, there is only one position you could occupy in each round.
Evidently, this time you have roughly a 10% chance of dying if you find yourself kidnapped! (Notice that this is still worse than 1/36, though a lot better than 90%).
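Here too we can check the number with a quick Python sketch, assuming one person per round and the same 1/36 stopping rule. If the spree ends at round n, one person out of the n total people kidnapped dies, and the sum even has a closed form:

```python
# Rocks variant: each round holds exactly one person (rocks don't count),
# so if the spree ends at round n, 1 of the n kidnapped people dies.
import math

P = 1 / 36  # probability of snake eyes on any given roll

def expected_death_fraction_rocks(rounds: int = 5000) -> float:
    """E[1/N] where N is the (geometric) round at which snake eyes occurs."""
    return sum((1 - P) ** (n - 1) * P / n for n in range(1, rounds + 1))

print(expected_death_fraction_rocks())  # roughly 0.102
print(math.log(36) / 35)                # closed form ln(36)/35, also ~0.102
```

So "roughly 10%" is in fact about 10.2%, comfortably between 1/36 ≈ 2.8% and 90%.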
Okay, so we have two scenarios, one in which 90% of those kidnapped die and the other in which 10% of those kidnapped die.
Now let’s make a new variant on our thought experiment, and set it in a fictional universe of my creation.
In this world there exist androids – robotic intelligences that behave, look, and feel like any ordinary human. They are so well integrated into society that most people don’t actually know whether they themselves are a biological human or an android. The primary distinction between the two groups is, of course, that one has a brain made of silicon transistors and the other has a brain made of carbon-based neurons.
There is a question of considerable philosophical and practical importance in this world, which is: Are androids conscious just like human beings? This question has historically been a source of great strife in this world. On the one hand, some biological humans argue that the substrate is essential to the existence of consciousness and that therefore non-carbon-based life forms can never be conscious, no matter how well they emulate conscious beings. This thesis is known as the substrate-dependence view.
On the other hand, many argue that we have no good reason to dismiss the androids’ potential consciousness. After all, they are completely indistinguishable from biological humans, and have the same capacity to introspect and report on their feelings and experiences. Some android philosophers even have heated debates about consciousness. Plus, the internal organization of androids is pretty much identical to that of biological humans, indicating that the same sort of computation is going on in both. On this view, consciousness clearly arises from the patterns of computation in a system, and on that basis androids are definitely conscious. The people who support this position are called functionalists (and, no great surprise, all androids that know they are androids are functionalists).
The fundamental difference between the two stances can be summarized easily: substrate-dependence theorists think that to be conscious, you must be a carbon-based life form running on biological cells. Functionalists think that to be conscious, you must be running a particular type of computation, regardless of the material that computation is implemented on.
In this world, the debate runs on endlessly. The two sides marshal philosophical arguments to support their positions and hurl them at each other with little to no effect. Androids insist vehemently that they are as conscious as anybody else, functionalists say “See?? Look at how obviously conscious they are,” and substrate-dependence theorists say “But this is exactly what you’d expect to hear from an unconscious replica of a human being! Just because you built a machine that can cleverly perform the actions of conscious beings does not mean that it really is conscious”.
Soon some argue that this debate can never be settled. This camp, known as the mysterians, says that there is something fundamentally special and intrinsically mysterious about the phenomenon of consciousness that bars us from ever answering these types of questions, or even gathering evidence about them. They point to the subjective nature of experience and the fact that you can only really know whether somebody is conscious by entering their head, which is impossible. The mysterians’ arguments are convincing to many, and their following grows stronger by the day as the debates between the other parties appear ever more futile.
With this heated debate as the backdrop, we can now introduce a new variant of the dice killer setup.
The killer starts like before by kidnapping a single human (not an android). If he rolls snake eyes, this person is killed. If not, he releases them and kidnaps one new human and nine androids. (Sounding familiar?) If he rolls snake eyes, all ten are killed, and if not, one new person and 99 new androids are kidnapped. Etc. Thus we have:
First Round: 1 person
Second Round: 1 person, 9 androids
Third Round: 1 person, 99 androids
Fourth Round: 1 person, 999 androids
and so on…
You live in this society, and are one of its many citizens who don’t know whether they are an android or a biological human. You find yourself kidnapped by the killer. How worried should you be about your survival?
If you are a substrate-dependence theorist, you will see this case as similar to the variant with rocks. After all, you know that you are conscious. So you naturally conclude that you can’t be an android. This means that there is only one possible person that you could be in each round. So the calculation runs exactly as it did before with the rocks, ending with a 10% chance of death.
If you are a functionalist, you will see this case as similar to the case we started with. You think that androids are conscious, so you don’t rule out any of the possibilities for who you might be. Thus you calculate as we did initially, ending with a 90% chance of death.
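Both predictions can also be checked by Monte Carlo simulation. The sketch below (assuming the round sizes 1, 10, 100, … with one human per round) averages the per-run death fraction two ways: counting only the humans, as a substrate-dependence theorist would, and counting everyone in the round, as a functionalist would:

```python
import random

random.seed(0)  # fixed seed for reproducibility
P = 1 / 36      # probability of snake eyes on any given roll

def run_length() -> int:
    """Number of rounds until the first snake eyes."""
    n = 1
    while random.random() >= P:
        n += 1
    return n

def average_death_fraction(trials: int, humans_only: bool) -> float:
    total = 0.0
    for _ in range(trials):
        n = run_length()
        if humans_only:
            # Substrate dependence: you are one of the n humans, 1 dies.
            total += 1 / n
        else:
            # Functionalism: you could be anyone; the final group dies.
            total += 10 ** (n - 1) / ((10 ** n - 1) // 9)
    return total / trials

print(average_death_fraction(100_000, humans_only=True))   # ~0.10
print(average_death_fraction(100_000, humans_only=False))  # ~0.90
```

The same kidnapping event yields two different survival estimates, depending purely on which theory of consciousness you hold.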
Here we pause to notice something very important! Our two different theories of consciousness have made different empirically verifiable predictions about the world! And not only are they easily testable, but they are significantly different. The amount of evidence provided by the observation of snake eyes has to do with the likelihood ratio P(snake eyes | functionalism) / P(snake eyes | substrate dependence). This ratio is roughly 90% / 10% = 9, which means that observing snake eyes tilts the balance by a factor of 9 in favor of functionalism.
More precisely, we use the likelihood ratio to update our prior credences in functionalism and substrate dependence to posterior credences via Bayes’ rule: the posterior odds are the prior odds multiplied by the likelihood ratio. That is, P(functionalism | snake eyes) / P(substrate dependence | snake eyes) ≈ 9 × P(functionalism) / P(substrate dependence).
This is a significant update. It can be made even more significant by altering the details of the setup. But the most important point is that there is an update at all. If what I’ve argued is correct, then the mysterians are demonstrably wrong. We can construct setups that test theories of consciousness, and we know just how!
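For concreteness, here is a minimal sketch of that update, assuming prior credences that sum to 1 and the likelihood ratio of roughly 9 derived above:

```python
def update(prior_f: float, prior_s: float, likelihood_ratio: float = 9.0):
    """Posterior credences in functionalism and substrate dependence
    after observing snake eyes, via odds-form Bayes' rule."""
    post_f = prior_f * likelihood_ratio  # functionalism predicted this 9x better
    post_s = prior_s                     # substrate dependence's weight unchanged
    norm = post_f + post_s
    return post_f / norm, post_s / norm

print(update(0.5, 0.5))  # (0.9, 0.1): even odds become 9:1
```

Starting from even credences, a single observation of snake eyes takes a kidnapped functionalist-agnostic from 50% to 90% confidence in functionalism.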
(There’s an interesting caveat here, which is that this is only evidence for the individual that found themselves to be kidnapped. If an experimenter was watching from the outside and saw the dice land snake eyes, they would get no evidence for functionalism over substrate dependence. This relates to the anthropic nature of the evidence; it is only evidence for the individuals for whom the indexical claims “I have been kidnapped” and “I am conscious” apply.)
So there we have it. We’ve constructed an experimental setup that allows us to test claims of consciousness that are typically agreed to be beyond empirical verification. Granted, this is a pretty destructive setup and would be monstrously unethical to actually enact. But the essential features of the setup can be preserved without the carnage. Rather than snake eyes resulting in the killer murdering everybody kept captive, it could just result in the experimenter saying “Huzzah!” and ending the experiment. Then the key empirical evidence for somebody who has been captured would be whether or not the experimenter says “Huzzah!” If so, then functionalism becomes nine times more likely, relative to substrate dependence, than it was before.
This would be a perfectly good experiment that we could easily run, if only we could start producing some androids indistinguishable from humans. So let’s get to it, AI researchers!
This is great! Nothing like combining the strangeness of anthropic reasoning and consciousness. This type of scenario is the only one I can think of where a human could get evidence for one of these theories, even though that evidence would get screened off if they ever found out that they were a human. Although maybe if someone believes in the self-indication assumption, they would take their own existence as evidence for functionalism?