Every now and then I go through a phase in which I find myself puzzling about consciousness. I typically leave these phases feeling like my thoughts are slightly more organized on the problem than when I started thinking about it, but still feeling overwhelmingly confused about the subject.
I’m currently in one of those phases!
It started when I was watching an episode of the recent Planet Earth II series (which I recommend to everybody – it’s beautiful). One scene contains a montage of grizzly bears that have just emerged from hibernation and are now passionately grinding their backs against trees to shed their excess fur.
Nobody with a soul would watch this video and not relate to the back-scratching bears through memories of the rush of pleasure and utter satisfaction of a great back scratching session.
The natural question this raises is: how do we know that the bears are actually feeling the same pleasure that we feel when we get our backs scratched? How could we know that they are feeling anything at all?
A modest answer is that it’s just intuition. Some things just look to us like they’re conscious, and we feel a strong intuitive conviction that they really are feeling what we think they are.
But this is unsatisfying. ‘Intuition’ is only a good answer to a question when we have a good reason to presume that our intuitions should be reliable in the context of the question. And why should we believe that our intuitions about a rock being unconscious and a bear being conscious have any connection to reality? How can we rationally justify such beliefs?
The only starting point we have for assessing any questions about consciousness is our own conscious experience – the only one that we have direct and undeniable introspective access to. If we’re to build up a theory of consciousness, we must start there.
So for instance, we notice that there are tight correlations between patterns of neural activation in our brains and our conscious experiences. We also notice that there are some physical details that seem irrelevant to the conscious experiences that we have.
This distinction between ‘the physical details that are relevant to what conscious experiences I have’ and ‘the physical details that are irrelevant to what conscious experiences I have’ allows us to make new inferences about conscious experiences that are not directly accessible to us.
We can say, for instance, that a perfect physical clone of mine that is in a different location than me probably has a similar range of conscious experiences. This is because the only difference between us is our location, which is largely irrelevant to the range of my conscious experiences (I experience colors and emotions and sounds the same way whether I’m on one side of the room or another).
And we can draw similar conclusions about a clone of mine if we also change their hairstyle or their height or their eye color. Each of these changes should only affect our view of their consciousness insofar as we notice changes in our consciousness upon changes in our height, hairstyle, or eye color.
This gives us rational grounds on which to draw conclusions like ‘Other human beings are conscious, and likely have similar types of conscious experiences to me.’ The differences between other human beings and me are not the types of things that seem able to make them have wildly different types of conscious experiences.
Once we notice that we tend to reliably produce accurate reports about our conscious experiences when there are no incentives for us to lie, we can start drawing conclusions about the nature of consciousness from the self-reports of other beings like us.
(Which is of course how we first get to the knowledge about the link between brain structure and conscious experience, and the similarity in structure between my brain and yours. We probably don’t actually personally notice this unless we have access to a personal MRI, but we can reasonably infer from the scientific literature.)
From this we can build up a theory of consciousness. A theory of consciousness examines a physical system and reports back on things like whether or not this system is conscious and what types of conscious experiences it is having.
Let me now make a conceptual separation between two types of theories of consciousness: epiphenomenal theories and causally active theories.
Epiphenomenal theories of consciousness are structured as follows: There are causal relationships leading from the physical world to conscious experiences, and no causal relationships leading back.
Causally active theories of consciousness have both causal arrows leading from the physical world to consciousness, and back from consciousness to the physical world. So physical stuff causes conscious experiences, and conscious experiences have observable behavioral consequences.
Let’s tackle the first class of theories first. How could a good Bayesian update on these theories? Well, the theories make predictions about what is being experienced, but make no predictions about any other empirically observable behaviors. So the only source of evidence for these theories is our personal experiences. If Theory X tells me that when I hit my finger with a hammer, I will feel nothing but a sense of mild boredom, then I can verify that Theory X is wrong only through introspection of my own experiences.
But even this runs into a deep problem.
The mental process by which I verify that Theory X is wrong is occurring in my brain, and on any epiphenomenal theory, such a process cannot be influenced by any actual conscious experiences that I’m having.
If suddenly all of my experiences of blue and red were inverted, then any reaction of mine, especially one which accurately reported what had happened, would have to be a wild coincidence. After all, the change in my conscious experience can’t have had any causal effects on my behavior.
In other words, there is no reason to expect on an epiphenomenal theory of consciousness that the beliefs I form or the self-reports I produce about my own experiences should align with my actual conscious experiences.
And yet they invariably do. Every time I notice that I have accurately reported a conscious experience, I have noticed something that is wildly unlikely to occur under any epiphenomenal theory of consciousness. And by Bayes’ rule, each time this happens, all epiphenomenal theories are drastically downgraded in credence.
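This repeated-update argument can be sketched numerically. The numbers below are purely illustrative assumptions, not measurements: suppose an epiphenomenal theory, on which an accurate self-report would be a wild coincidence, assigns probability 0.01 to each one, while a causally active theory assigns 0.99.

```python
# Illustrative only: made-up likelihoods standing in for "accurate
# self-report would be a coincidence" (epiphenomenal) versus
# "accurate self-report is expected" (causally active).

p_epi, p_causal = 0.5, 0.5        # start with even priors over the two classes
lik_epi, lik_causal = 0.01, 0.99  # assumed P(accurate self-report | theory)

for _ in range(10):               # ten accurate self-reports in a row
    joint_epi = p_epi * lik_epi
    joint_causal = p_causal * lik_causal
    total = joint_epi + joint_causal
    p_epi, p_causal = joint_epi / total, joint_causal / total

print(f"P(epiphenomenal | evidence) = {p_epi:.1e}")
```

Under these toy numbers, just ten accurate self-reports leave the epiphenomenal class with a posterior on the order of 10⁻²⁰ – the ‘drastic downgrading’ described above.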
So this entire class of theories is straightforwardly empirically wrong, and will quickly be eliminated from our model of reality through some introspection. The theories that are left involve causation going both from the physical world to consciousness and back from consciousness to the physical world.
In other words, they involve two mappings – one from a physical system to consciousness, and another from consciousness to the predicted future behavior of the physical system.
But now we have a puzzle. The second mapping involves third-party observable physical effects that are caused by conscious experiences. But in our best understanding of the world, physical effects are always found to be the results of physical causes. For any behavior that my theory tells me is caused by a conscious experience, I can trace a chain of physical causes that uniquely determined this behavior.
What does this mean about the causal role of consciousness? How can it be true that conscious experiences are causal determinants of our behavior, and also that our behaviors are fully causally determined by physical causes?
The only way to make sense of this is by concluding that conscious experiences must be themselves purely physical causes. So if my best theory of consciousness tells me that experience E will cause behavior B, and my best theory of physics tells me that the cause of B is some set of physical events P, then E is equal to P, or some subset of P.
This is how we are naturally led to what’s called identity physicalism – the claim that conscious experiences are literally the same thing as some type of physical pattern or substance.
Let me move on to another weird aspect of consciousness. Imagine that I encounter an alien being that looks like an exact clone of myself, but made purely of silicon. What does our theory of consciousness say about this being?
It seems that this depends on whether the theory makes reference only to the patterns exhibited by the physical structure, or to the physical structure itself. If my theory is about the types of conscious experiences that arise from complicated patterns of carbon, then it will tell me that this alien being is not conscious. But if it references only the complicated patterns, and doesn’t specify the lower-level physical substrate from which the pattern arises, then the alien being is conscious.
The problem is that it’s not clear to me which of these we should prefer. Both make the same third-party predictions, and first-party verifications could only be made through a process involving a transformation of our body from one substrate to the other. In the absence of such a process, both of the theories make the exact same predictions about what the world looks like, and thus will be boosted or shrunk in credence exactly the same way by any evidence we receive.
Perhaps the best we could do is note that the first theory contains all of the details of the second, plus the additional specification of a carbon substrate, and so should be penalized by the conjunction rule. So “carbon + pattern” will always be assigned a lower prior than “pattern” alone. But this difference in priors is all we will ever have: since the two theories assign identical likelihoods to every possible observation, the likelihood ratio between them is always 1, and no amount of evidence can ever shift the ratio away from the priors.
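The point about empirically equivalent theories can be made concrete with another toy calculation. The priors below are hypothetical: “pattern” gets 0.6, “carbon + pattern” is penalized by the conjunction rule and gets 0.4. Because both theories assign the same likelihood to every observation, Bayes’ rule leaves their ratio exactly where the priors put it, no matter how much evidence comes in.

```python
# Illustrative only: two theories that make identical predictions.
# Hypothetical priors; "carbon + pattern" carries a conjunction penalty.

p_pattern, p_carbon = 0.6, 0.4
prior_ratio = p_pattern / p_carbon   # 1.5

for _ in range(1000):                # 1000 observations, each assigned
    lik = 0.9                        # the same likelihood by both theories
    joint_p, joint_c = p_pattern * lik, p_carbon * lik
    total = joint_p + joint_c
    p_pattern, p_carbon = joint_p / total, joint_c / total

print(p_pattern / p_carbon)          # still ~1.5: the ratio never moves
```

Each update multiplies both joint probabilities by the same factor, so normalization cancels it out and the posterior ratio stays at the prior ratio (up to floating-point rounding).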
What this amounts to is an apparently un-leap-able inferential gap regarding the conscious experiences of beings that are qualitatively different from us.