I’ve been reading about the integrated information theory of consciousness lately, and wondering about the following question. In general, what are the sources of evidence we have for a theory of consciousness?
One way to think about this is to imagine yourself teleported hundreds of years into the future and talking to a scientist in this future world. This scientist tells you that in his time, consciousness is fully understood. What sort of experiments would you expect to be able to run to verify for yourself that the future’s theory of consciousness really is sufficient?
One thing you could do is point to a bunch of different physical systems, ask the scientist what his theory of consciousness says about them, and compare the answers to your intuitions. So, for instance, does the theory say that you are conscious? What about humans in general? What about people in deep sleep? How about dogs? Chickens? Frogs? Insects? Bacteria? Are Siri-style computer programs conscious? What about a rock? And so on.
The obvious problem with this is that it assumes the validity of your intuitions about consciousness. Sure, it seems obvious that a rock is not conscious, that humans generally are, and that dogs are conscious but less so than humans. But how do we know that these intuitions are trustworthy?
I think the validity of these intuitions is necessarily grounded in our phenomenology and our observations of how it correlates with our physical substance. So, for instance, I notice that when I fall asleep, my consciousness fades in and out. On the other hand, when I wiggle my big toe, this has an effect on the character of my conscious experience, but doesn’t shut it off entirely. This tells me that something about what happens to my body when I fall asleep is relevant to the maintenance of my consciousness, while the angle of my big toe is not.
In general, we make many observations like these and piece together a general theory of how consciousness relates to the physical world, not just in terms of the existence of consciousness, but also in terms of what specific conscious experiences we expect for a given change to our physical system. It tells us, for instance, that receiving a knock on the head or drinking too much alcohol is sometimes sufficient to temporarily suspend consciousness, while breaking a finger or cutting your hair is not.
Now, since we are able to intervene on our physical body at will and observe the results, our model is a causal model. An implication of this is that it should be able to handle counterfactuals. So, for instance, it can give us an answer to the question “Would I still be conscious if I cut my hair off, changed my skin color, shrank several inches in height, and got a smaller nose?” This answer is presumably yes, because our theory distinguishes between physical features that are relevant to the existence of consciousness and those that are not.
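To make the causal-model idea a bit more concrete, here’s a toy sketch in Python. The variables and the rule for consciousness are hypothetical stand-ins for illustration, not a real model of anything:

```python
# A toy "causal model" of consciousness: some physical variables are
# relevant to whether consciousness exists (brain activity), others are
# not (hair length, big-toe angle). All of this is a made-up illustration.

def conscious(brain_active: bool, hair_length_cm: float, toe_angle_deg: float) -> bool:
    # By hypothesis, only brain activity matters for the *existence* of
    # consciousness; the other variables only affect its character.
    return brain_active

# Observed state: awake, with long hair.
print(conscious(brain_active=True, hair_length_cm=30.0, toe_angle_deg=0.0))   # True

# Counterfactual: intervene on an irrelevant variable (cut the hair off).
print(conscious(brain_active=True, hair_length_cm=0.0, toe_angle_deg=45.0))   # still True

# Counterfactual: intervene on a relevant variable (e.g. deep anesthesia).
print(conscious(brain_active=False, hair_length_cm=30.0, toe_angle_deg=0.0))  # False
```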
Extending this further, we can ask if we would still be conscious if we gradually morphed into another human being, with a different brain and body. Again, the answer would appear to be yes, as long as nothing essential to the existence of consciousness is severed along the way. But now we are in a position to be able to make inferences about the existence of consciousness in bodies outside our own! For if I think that I would be conscious if I slowly morphed into my boyfriend, then I should also believe that my boyfriend is conscious himself. I could deny this by denying that the same physical states give rise to the same conscious states, but while this is logically possible, it seems quite implausible.
This gives rational grounds for our belief in the existence of consciousness in other humans, and grants us justified access to all of the work in neuroscience analyzing the connection between the brain and consciousness. It also lets us place a baseline level of trust in other people’s self-reports about their conscious experiences, given the observation that we are generally reliable reporters of our own conscious experience.
Bringing this back to our scientist from the future, I can think of some much more convincing tests I would run than the ‘tests of intuition’ we did at first. Namely, suppose that the scientist was able to take any description of an experience, translate it into a brain state, and then stimulate your brain in such a way as to produce that experience for you. So over and over you submit requests – “Give me a new color experience that I’ve never had before, but that feels vaguely pinkish and bluish, with a high-pitched whine in the background”, “Produce in me an emotional state of exaltation, along with the sensation of warm wind rushing through my hair and a feeling of motion”, etc. – and over and over the scientist matches your request superbly. (Also, wow, imagine how damn cool this would be if we could actually do this.)
You can also run the inverse test: you tell the scientist the details of an experience you are having while your brain is being scanned (in such a way that the scientist cannot see it). Then the scientist runs some calculations using their theory of consciousness and makes some predictions about what they’ll see on the brain scan. Now you check the brain scan to see if their predictions have come true.
To me, repeated success in experiments of this kind would be supremely convincing. If a scientist of the future was able to produce at will any experience I asked for (presuming my requests weren’t so far out as to be physically impossible), and was able to accurately translate facts about my consciousness into facts about my brain, and could demonstrate this over and over again, I would be convinced that this scientist really does have a working theory of consciousness.
And note that since this is all rooted in phenomenology, it’s entirely uncoupled from our intuitive convictions about consciousness! It could turn out that the exact framework the scientist uses to calculate the connections between my physical body and my consciousness ends up necessarily entailing that rocks are conscious and that dolphins are not. And if the framework’s predictive success had been demonstrated robustly enough beforehand, I would just have to accept this conclusion as unintuitive but true. (Of course, it would be really hard to imagine how any good theory of consciousness could end up at this conclusion, but that’s beside the point.)
So one powerful source of evidence we have for testing a theory of consciousness is the correlation between our physical substance and our phenomenology. Is that all, or are there other sources of evidence out there?
We can straightforwardly adopt some principles from the philosophy of science, such as the importance of simplicity and of avoiding overfitting in formulating our theories. So, for instance, one theory of consciousness might just be an exhaustive list of every physical state of the brain and the conscious experience it corresponds to. In other words, we could imagine a theory in which all of the basic phenomenological facts of consciousness are taken as individual independent axioms. While this theory would be fantastically accurate on everything already observed, it would be totally worthless to us: it makes no predictions about any state not already on the list, so we’d have no reason to trust its predictive validity.
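To see what such an exhaustive-list “theory” amounts to, here’s a toy sketch in Python. The brain-state labels and experiences are made up for illustration:

```python
# An "overfit" theory of consciousness: a bare lookup table from observed
# brain states to reported experiences. Every entry is an independent axiom.
lookup_theory = {
    "brain_state_001": "seeing red",
    "brain_state_002": "hearing a high-pitched whine",
    # ...one entry per observation ever made, with no shared structure.
}

# It reproduces every past observation perfectly...
print(lookup_theory["brain_state_001"])  # "seeing red"

# ...but it has nothing to say about any state it hasn't already recorded.
print(lookup_theory.get("brain_state_003", "no prediction"))  # "no prediction"
```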
So far, we really just have three criteria for evidence:
- Correlations between phenomenology and physics
- Simplicity
- Avoiding overfitting
As far as I’m concerned, this is all that I’m really comfortable with counting as valid evidence. But these are very much not the only sources of evidence that get referenced in the philosophical literature. There are a lot of arguments that get thrown around concerning the nature of consciousness that I find really hard to classify neatly, although often these arguments feel very intuitively appealing. For instance, one of my favorite arguments for functionalism is David Chalmers’ ‘Fading Qualia’ argument. It goes something like this:
Imagine that scientists of the future are able to produce silicon chips that are functionally identical to neurons and can replicate all of their relevant biological activity. Now suppose that you undergo an operation in which gradually, every single part of your nervous system is substituted out for silicon. If the biological substrate implementing the functional relationships is essential to consciousness, then by the end of this procedure you will no longer be conscious.
But now we ask: when did the consciousness fade out? Was it a sudden or a gradual process? Both seem deeply implausible. Firstly, we shouldn’t expect a sudden drop-out of consciousness from the replacement of a single neuron or cluster of neurons, as this would be a highly unusual level of discreteness. It would also imply the ability to switch the entirety of your consciousness on and off with seemingly insignificant changes to the biological structure of your nervous system.
And secondly, if it is a gradual process, then this implies the existence of “pseudo-conscious” states in the middle of the procedure, in which your experiences are markedly distinct from those of the original being, yet you are systematically wrong about your own experiences. Why? Well, the functional relationships have stayed the same! So your beliefs about your conscious states, the memories you form, and the emotional reactions you have will all be exactly as if there had been no change to your conscious states. This seems totally bizarre and, in Chalmers’ words, “we have little reason to believe that consciousness is such an ill-behaved phenomenon.”
Now, this is a fairly convincing argument to me. But I have a hard time understanding why it should be. Its force seems to rely on some very high-level abstract intuitions about the types of conscious experiences we imagine organisms could be having, and I can’t think of a great reason for trusting these intuitions. Maybe we could chalk it up to simplicity, and argue that the notion of consciousness entailed by substrate-dependence must be extremely unparsimonious. But even this connection is not totally clear to me.
A lot of the philosophical argumentation about consciousness feels this way to me: convincing and interesting, but hard to make sense of as genuine evidence.
One final style of argument that I’m deeply skeptical of is arguments from pure phenomenology. This is, for instance, how Giulio Tononi likes to argue for his integrated information theory of consciousness. He starts from five supposedly self-evident truths about the character of conscious experience, then attempts to infer facts about the structure of the physical systems that could produce such experiences.
I’m not a big fan of Tononi’s observations about the character of consciousness. They are so vaguely worded and hard to make sense of that I have no idea whether they’re true, let alone self-evident. But it is his second move that I’m deeply skeptical of. The track record of philosophers moving from “self-evident intuitive truths” to “objective facts about reality” is pretty bad. While we might be plenty good at detailing our conscious experiences, making the inferential leap to the nature of the connection between physics and consciousness is not something you can do just by looking at phenomenology.