Constructing the world

In this six-and-a-half-hour lecture series, David Chalmers describes the concept of a minimal set of statements from which all other truths are a priori “scrutable” (meaning, basically, in-principle knowable or derivable).

What are the types of statements in this minimal set required to construct the world? Chalmers offers up four categories, and abbreviates this theory PQIT.

P

P is the set of physical facts (for instance, everything that would be accessible to a Laplacean demon). It can be thought of as essentially the initial conditions of the universe and the laws governing their changes over time.

Q

Q is the set of facts about qualitative experience. We can see Chalmers’ rejection of physicalism here, as he doesn’t think that Q is eclipsed within P. Example of a type of statement that cannot be derived from P without Q: “There is a beige region in the bottom right of my visual field.”

I

Here’s a true statement: “I’m in the United States.” Could this be derivable from P and Q? Presumably not; we need another set of indexical truths that allows us to have “self-locating” beliefs and to engage in anthropic reasoning.

T

Suppose that P, Q, and I really are able to capture all the true statements there are to be captured. Well then, the statement “P, Q, and I really are able to capture all the true statements there are to be captured” is a true statement, and it is presumably not captured by P, Q, and I! In other words, we need some final negative statements that tell us that what we have is enough, and that there are no more truths out there. These “that’s all”-type statements are put into the set T.

⁂⁂⁂

So this is a basic sketch of Chalmers’ construction. I like that we can use these tags like PQIT or PT or QIT as a sort of philosophical zip-code indicating the core features of a person’s philosophical worldview. I also want to think about developing this further. What other types of statements are out there that may not be captured by PQIT? Here is a suggestion for a more complete taxonomy:

p    microphysics
P    macrophysics (by which I mean all of science besides fundamental physics)
Q    consciousness
R    normative rationality
E    normative ethics
C    counterfactuals
L    mathematical / logical truths
I    indexicals
T    “that’s-all” statements

I’ve split P into big-P (macrophysics) and little-p (microphysics) to account for the disagreements about emergence and reductionism. Normativity here is broad enough to include both normative epistemic statements (e.g. “You should increase your credence in the next coin toss landing H after observing it land H one hundred times in a row”) and ethical statements. The others are fairly self-explanatory.

The most ontologically extravagant philosophical worldview would then be characterized as pPQRECLIT.

My philosophical address is pRLIT (with the addendum that I think C comes from p, and am really confused about Q). What’s yours?

Moving Naturalism Forward: Eliminating the macroscopic

Sean Carroll, one of my favorite physicists and armchair philosophers, hosted a fantastic conference on philosophical naturalism and science, and did the world a great favor by recording the whole thing and posting it online. It was a three-day-long discussion on topics like the nature of reality, emergence, morality, free will, meaning, and consciousness. Here are the videos for the first two discussion sessions, and the rest can be found by following the YouTube links.

 

Having watched through the entire thing, I have updated a few of my beliefs, plan to rework some of my conceptual schema, and am puzzled about a few things.

A few of my reflections and take-aways:

  1. I am much more convinced than before that there is a good case to be made for compatibilism about free will.
  2. I think there is a set of interesting and challenging issues around the concept of representation and intentionality (about-ness) that I need to look into.
  3. I am more comfortable with intense reductionism claims, like “All facts about the macroscopic world are entailed by the fundamental laws of physics.”
  4. I am really interested in hearing Dan Dennett talk more about grounding morality, because what he said was starting to make a lot of sense to me.
  5. I am confused about the majority attitude in the room that there’s not any really serious reason to take an eliminativist stance about macroscopic objects.
  6. I want to find more details about the argument that Simon DeDeo was making for the undecidability of questions about the relationship between macroscopic theories and microscopic theories (!!!).
  7. There’s a good way to express the distinction between the type of design human architects engage in and the type of design that natural selection produces, which is about foresight and representations of reasons. I’m not going to say more about this, and will just refer you to the videos.
  8. There are reasons to suspect that animal intelligence and capacity to suffer are inversely correlated (that is, the more intelligent an animal, the less capacity to suffer it likely has). This really flips some of our moral judgements on their head. (You must deliver a painful electric shock to either a human or to a bird. Which one will you choose?)

Let me say a little more about number 5.

I think that questions about whether macroscopic objects like chairs or plants really REALLY exist, or whether there are really just fermions and bosons, are ultimately just questions about how we should use the word “exist.” In the language of our common sense intuitions, obviously chairs exist, and if you claim otherwise, you’re just playing complicated semantic games. I get this argument, and I don’t want to be that person who clings to bizarre philosophical theses that rest on a strange choice of definitions.

But at the same time, I see a deep problem with relying on our commonsense intuitions about the existence of the macro world. This is that as soon as we start optimizing for consistency, even a teeny tiny bit, these macroscopic concepts fall to pieces.

For example, here is a trilemma (three statements that can’t all be correct):

  1. The thing I am sitting on is a chair.
  2. If you subtract a single atom from a chair, it is still a chair.
  3. Empty space is not a chair.

These seem to me to be some of the most obvious things we could say about chairs. And yet they are subtly incoherent!

Number 1 is really shorthand for something like “there are chairs.” And the reason why the second premise is correct is that denying it requires that there be a chair such that if you remove a single atom, it is no longer a chair. I take it to be obvious that such things don’t exist. But accepting the first two requires us to admit that as we keep shedding atoms from a chair, it stays a chair, even down to the very last atom. (By the way, some philosophers do actually deny number 2. They take a stance called epistemicism, which says that concepts like “chair” and “heap” are actually precise and unambiguous, and there exists a precise point at which a chair becomes a non-chair. This is the type of thing that makes me giggle nervously when reflecting on the adequacy of philosophy as a field.)

As I’ve pointed out in the past, these kinds of arguments can be applied to basically everything in the macroscopic world. They wreak havoc on our common sense intuitions and, to my mind, demand rejection of the entire macroscopic world. And of course, they don’t apply to the microscopic world. “If X is an electron, and you change its electric charge a tiny bit, is it still an electron?” No! Electrons are physical substances with precise and well-defined properties, and if something doesn’t have these properties, it is not an electron! So the Standard Model is safe from this class of arguments.

Anyway, this is all just to make the case that upon close examination, our commonsense intuitions about the macroscopic world turn out to be subtly incoherent. What this means is that we can’t make true statements like “There are two cars in the garage”. Why? Just start removing atoms from the cars until you get to a completely empty garage. Since no single-atom change can make the relevant difference to “car-ness”, at each stage, you’ll still have two cars!

As soon as you start taking these macroscopic concepts seriously, you find yourself stuck in a ditch. This, to me, is an incredibly powerful argument for eliminativism, and I was surprised to find that arguments like these weren’t stressed at the conference. This makes me wonder if this argument is as powerful as I think.

What is integrated information?

Integrated information theory relates consciousness to degrees of integrated information within a physical system. I recently became interested in IIT and found it surprisingly hard to locate a good simple explanation of the actual mathematics of integrated information online.

Having eventually just read through all of the original papers introducing IIT, I discovered that integrated information is closely related to some of my favorite bits of mathematics, involving information theory and causal modeling.  This was exciting enough to me that I decided to write a guide to understanding integrated information. My goal in this post is to introduce a beginner to integrated information in a rigorous and (hopefully!) intuitive way.

I’ll describe it in increasing levels of complexity, so that even if you eventually get lost somewhere in the post, you’ll be able to walk away having learned something. If you get to the end of this post, you should be able to sit down with a pencil and paper and calculate the amount of integrated information in simple systems, and you’ll know how to calculate it in principle for any system.

Level 1

So first, integrated information is a measure of the degree to which the components of a system are working together to produce outputs.

A system composed of many individual parts that are not interacting with each other in any way is completely un-integrated – it has an integrated information ɸ = 0. On the other hand, a system composed entirely of parts that are tightly entangled with one another will have a high amount of integrated information, ɸ >> 0.

For example, consider a simple model of a camera sensor.

[Image: a camera sensor’s grid of pixels]

This sensor is composed of many independent parts functioning completely separately. Each pixel stores a unit of information about the outside world, regardless of what its neighboring pixels are doing. If we were to somehow sever the causal connections between the two halves of the sensor, each half would still capture and store information in exactly the same way.

Now compare this to a human brain.

[Image: neuron activity in a human brain]

The nervous system is a highly entangled mesh of neurons, each interacting with many many neighbors in functionally important ways. If we tried to cut the brain in half, severing all the causal connections between the two sides, we would get an enormous change in brain functioning.

Make sense? Okay, on to Level 2.

Level 2

So, integrated information has to do with the degree to which the components of a system are working together to produce outputs. Let’s delve a little deeper.

We just said that we can tell that the brain is integrating lots of information, because the functioning would be drastically disrupted if you cut it in half. A keen reader might have realized that the degree to which the functioning is disrupted will depend a lot on how you cut it in half.

For instance, cut off the front half of somebody’s brain, and you will end up with total dysfunction. But you can entirely remove somebody’s cerebellum (which contains the majority of the brain’s neurons), and end up with a person who has difficulty with coordination and is a slow learner, but is otherwise pretty ordinary.

[Image: human head, MRI and 3D CT scans]

What this is really telling us is that different parts of the brain are integrating information differently. So how do we quantify the total integration of information of the brain? Which cut do we choose when evaluating the decrease in functioning?

Simple: We look at every possible way of partitioning the brain into two parts. For each one, we see how much the brain’s functioning is affected. Then we locate the minimum information partition, that is, the partition that results in the smallest change in brain functioning. The change in functioning that results from this particular partition is the integrated information!

Okay. Now, what exactly do we mean by “changes to the system’s functioning”? How do we measure this?

Answer: The functionality of a system is defined by the way in which the current state of the system constrains the past and future states of the system.

To make full technical sense of this, we have to dive a little deeper.

Level 3

How many possible states are there of a Connect Four board?

(I promise this is relevant)

The board is 6 by 7, and each spot can be either a red piece, a black piece, or empty.

[Image: an empty Connect Four board]

So a simple upper bound on the number of total possible board states is 3^42 (of course, the actual number of possible states will be much lower than this, since some positions are impossible to get into).

Now, consider what you know about the possible past and future states of the board if the board state is currently…

[Image: a Connect Four board with a single piece played]

Clearly there’s only one possible past state:

[Image: the empty Connect Four board]

And there are seven possible future states, one for each column that the next piece could be dropped into.

What this tells us is that the information about the current state of the board constrains the possible past and future states, selecting exactly one possible board out of the 3^42 possibilities for the past, and seven out of the 3^42 possibilities for the future.

More generally, for any given system S we have a probability distribution over past and future states, given that the current state is X.

[Diagram: a system S in its current state X, with its possible past and future states]

P_future(X, S) = Pr( Future state of S | Present state of S is X )
P_past(X, S) = Pr( Past state of S | Present state of S is X )

For any partition of the system into two components, S1 and S2, we can consider the future and past distributions given that the states of the components are, respectively, X1 and X2, where X = (X1, X2).

[Diagram: the system S partitioned into two components S1 and S2]

P_future(X, S1, S2) = Pr( Future state of S1 | Present state of S1 is X1 )・Pr( Future state of S2 | Present state of S2 is X2 )
P_past(X, S1, S2) = Pr( Past state of S1 | Present state of S1 is X1 )・Pr( Past state of S2 | Present state of S2 is X2 )

Now, we just need to compare our distributions before the partition to our distributions after the partition. For this we need some type of distance function D that assesses how far apart two probability distributions are. Then we define the cause information and the effect information for the partition (S1, S2).

Cause information = D( P_past(X, S), P_past(X, S1, S2) )
Effect information = D( P_future(X, S), P_future(X, S1, S2) )

In short, the cause information is how much the distribution over past states changes when you partition your system into two separate pieces, and the effect information is how much the distribution over future states changes when you partition the system.

The cause-effect information CEI is then defined as the minimum of the cause information CI and effect information EI.

CEI = min{ CI, EI }

We’ve almost made it all the way to our full definition of ɸ! Our last step is to calculate the CEI for every possible partition of S into two pieces, and then select the partition that minimizes CEI (the minimum information partition MIP).

The integrated information is just the cause-effect information of the minimum information partition!

ɸ = CEI(MIP)

Level 4

We’ve now semi-rigorously defined ɸ. But to really get a sense of how to calculate ɸ, we need to delve into causal diagrams. At this point, I’m going to assume familiarity with causal modeling. The basics are covered in a series of posts I wrote starting here.

Here’s a simple example system:

[Diagram: a two-variable system, with A updated by an XOR gate and B updated by an AND gate]

This diagram tells us that the system is composed of two variables, A and B. Each of these variables can take on the values 0 and 1. The system evolves according to the following simple update rule:

A(t + 1) = A(t) XOR B(t)
B(t + 1) = A(t) AND B(t)

We can redraw this as a causal diagram from A and B at time 0 to A and B at time 1:

[Diagram: causal diagram from A and B at time 0 to A and B at time 1]

What this amounts to is the following system evolution rule:

AB_t → AB_{t+1}
00 → 00
01 → 10
10 → 10
11 → 01
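
If you want to check this table for yourself, a couple of lines of Python (just a throwaway sketch of mine) will regenerate it by applying the update rule to each of the four states:

# Apply the update rule A' = A XOR B, B' = A AND B to each of the four states.
for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(f"{a}{b} -> {a ^ b}{a & b}")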

Now, suppose that we know that the system is currently in the state AB = 00. What does this tell us about the future and past states of the system?

Well, since the system evolution is deterministic, we can say with certainty that the next state of the system will be 00. And since there’s only one way to end up in the state 00, we know that the past state of the system was also 00.

We can plot the probability distributions over the past and future states as follows:

[Chart: probability distributions over past and future states for the full system]

This is not too interesting a distribution… no information is lost or gained going into the past or future. Now we partition the system:

[Diagram: the XOR/AND system with the connection between A and B cut]

The causal diagram, when cut, looks like:

[Diagram: the causal diagram after the cut, with noise variables replacing the severed inputs]

Why do we have the two “noise” variables? Well, both A and B take two variables as inputs. Since one of these causal inputs has been cut off, we replace it with a random variable that’s equally likely to be a 0 or a 1. This procedure is called “noising” the causal connections across the partition.

According to this diagram, we now have two independent distributions over the two parts of the system, A and B. To get the distribution over the total future state of the system, we just multiply the distributions for the two parts:

P(A_1, B_1 | A_0, B_0) = P(A_1 | A_0) P(B_1 | B_0)

We can compute the two distributions P(A_1 | A_0) and P(B_1 | B_0) straightforwardly, by looking at how each variable evolves in our new causal diagram. For the future:

A_0 = 0  ⇒  A_1 = 0 or 1 (½ probability each)
B_0 = 0  ⇒  B_1 = 0

And for the past:

A_0 = 0  ⇒  A_{-1} = 0 or 1 (½ probability each)
B_0 = 0  ⇒  B_{-1} = 0 or 1 (probabilities ⅔ and ⅓)

This implies the following probability distribution for the partitioned system:

[Chart: probability distributions over past and future states for the partitioned system]

I recommend you go through and calculate this for yourself. Everything follows from the updating rules that define the system and the noise assumption.

Good! Now we have two distributions, one for the full system and one for the partitioned system. How do we measure the difference between these distributions?

There are a few possible measures we could use. My favorite of these is the Kullback-Leibler divergence D_KL. Technically, this measure is only used in IIT 2.0, not IIT 3.0 (which uses the earth-mover’s distance). I prefer D_KL, as it has a nice interpretation as the amount of information lost when the system is partitioned. I have a post describing D_KL here.

Here’s the definition of D_KL:

D_KL(P, Q) = ∑_i P_i log(P_i / Q_i)

We can use this quantity to calculate the cause information and the effect information:

Cause information = log2(3) ≈ 1.6 bits
Effect information = log2(2) = 1 bit

These values tell us that our partition destroys about 0.6 more bits of information about the past than about the future. For the purpose of integrated information, we only care about the smaller of these two (for reasons that I don’t find entirely convincing).

Cause-effect information = min{ 1, 1.6 } = 1

Now, we’ve calculated the cause-effect information for this particular partition of the system.

The integrated information is the cause-effect information of the minimum information partition. Since our system only has two components, the partition we’ve examined is the only possible partition, meaning that it must be the minimum information partition. And thus, we’ve calculated ɸ for our system!

ɸ = 1
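
If you’d like to double-check these numbers, here’s a quick sketch in Python (my own throwaway code, not anything official from the IIT literature) that just writes down the four distributions we computed above and takes the two KL divergences:

from math import log2

states = ['00', '01', '10', '11']                 # joint states AB

# Full system in present state 00: deterministic future, unique preimage in the past.
past_full   = {'00': 1.0, '01': 0.0, '10': 0.0, '11': 0.0}
future_full = {'00': 1.0, '01': 0.0, '10': 0.0, '11': 0.0}

# Partitioned system: past of A is ½/½, past of B is ⅔/⅓;
# future of A is ½/½, future of B is certainly 0. Multiply the parts together.
past_part   = {'00': 1/2 * 2/3, '01': 1/2 * 1/3, '10': 1/2 * 2/3, '11': 1/2 * 1/3}
future_part = {'00': 1/2 * 1.0, '01': 1/2 * 0.0, '10': 1/2 * 1.0, '11': 1/2 * 0.0}

def dkl(p, q):                                    # KL divergence, in bits
    return sum(p[s] * log2(p[s] / q[s]) for s in states if p[s] > 0)

cause_info  = dkl(past_full, past_part)           # log2(3) ≈ 1.585
effect_info = dkl(future_full, future_part)       # log2(2) = 1.0
print(cause_info, effect_info, min(cause_info, effect_info))   # the last value is ɸ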

Level 5

Let’s now define ɸ in full generality.

Our system S consists of a vector of N variables X = (X_1, X_2, X_3, …, X_N), each an element in some space 𝒳. Our system also has an updating rule, which is a function f: 𝒳^N → 𝒳^N. In our previous example, 𝒳 = {0, 1}, N = 2, and f(x, y) = (x XOR y, x AND y).

More generally, our updating rule f can map X to a probability distribution p over the states in 𝒳^N. We’ll denote the distribution over the possible future states, given the current state, as P(X_{t+1} | X_t). P is defined by our updating rule: P(X_{t+1} | X_t) = f(X_t). The distribution over possible past states will be denoted P(X_{t-1} | X_t). We’ll obtain this using Bayes’ rule: P(X_{t-1} | X_t) = P(X_t | X_{t-1}) P(X_{t-1}) / P(X_t) = f(X_{t-1}) P(X_{t-1}) / P(X_t).

A partition of the system is given by a subset of {1, 2, 3, …, N}, which we’ll label A. We define B = {1, 2, 3, …, N} \ A. Now we can define X_A = (X_a)_{a∈A} and X_B = (X_b)_{b∈B}. Loosely speaking, we can say that X = (X_A, X_B), i.e. that the total state is just the combination of the states of the two parts A and B.

We now define the distributions over future and past states in our partitioned system:

Q(X_{t+1} | X_t) = P(X_{A,t+1} | X_{A,t}) P(X_{B,t+1} | X_{B,t})
Q(X_{t-1} | X_t) = P(X_{A,t-1} | X_{A,t}) P(X_{B,t-1} | X_{B,t})

The effect information EI of the partition defined by A is the distance between P(X_{t+1} | X_t) and Q(X_{t+1} | X_t), and the cause information CI is defined similarly. The cause-effect information is defined as the minimum of these two.

CI(f, A, X_t) = D( P(X_{t-1} | X_t), Q(X_{t-1} | X_t) )
EI(f, A, X_t) = D( P(X_{t+1} | X_t), Q(X_{t+1} | X_t) )

CEI(f, A, X_t) = min{ CI(f, A, X_t), EI(f, A, X_t) }

And finally, we define the minimum information partition (MIP) and the integrated information:

MIP = argmin_A CEI(f, A, X_t)
ɸ(f, X_t) = min_A CEI(f, A, X_t) = CEI(f, MIP, X_t)

And we’re done!

Notice that our final result is a function of f (the updating function) as well as the current state of the system. What this means is that the integrated information of a system can change from moment to moment, even if the organization of the system remains the same.
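
To make the general recipe concrete, here’s a compact sketch for small deterministic systems on binary state vectors, assuming a uniform prior over past states and using D_KL (base 2) as the distance measure D. This is my own toy implementation of the formulas above, not the official IIT algorithm or software, and all of the names are just illustrative:

from itertools import product
from math import log2

def phi(f, x):
    # Integrated information of deterministic update rule f, evaluated at current
    # state x. f maps a tuple in {0,1}^n to a tuple in {0,1}^n; x must be reachable.
    n = len(x)
    states = list(product((0, 1), repeat=n))

    def restrict(s, idx):                  # project a joint state onto a subset of indices
        return tuple(s[i] for i in idx)

    def dkl(p, q):                         # Kullback-Leibler divergence, in bits
        return sum(p[s] * log2(p[s] / q[s]) for s in states if p[s] > 0)

    # Unpartitioned distributions over future and past joint states.
    future_full = {s: float(s == f(x)) for s in states}
    preimages = [s for s in states if f(s) == x]
    past_full = {s: (1 / len(preimages) if s in preimages else 0.0) for s in states}

    def component_conditional(idx, direction):
        # Transition distribution for one component, with the severed inputs from the
        # rest of the system "noised" (averaged over uniformly). For 'past', this is
        # the Bayesian inverse of the forward transition, with a uniform prior.
        counts = {}
        for s in states:
            before, after = restrict(s, idx), restrict(f(s), idx)
            key = (before, after) if direction == 'future' else (after, before)
            counts[key] = counts.get(key, 0) + 1
        cond = {}
        for (given, outcome), c in counts.items():
            cond.setdefault(given, {})[outcome] = c
        return {g: {o: c / sum(row.values()) for o, c in row.items()}
                for g, row in cond.items()}

    best = float('inf')
    for bits in range(1, 2 ** (n - 1)):    # every bipartition of {0, ..., n-1}, counted once
        A = [i for i in range(n) if (bits >> i) & 1]
        B = [i for i in range(n) if not (bits >> i) & 1]
        cei = float('inf')
        for direction, full in (('past', past_full), ('future', future_full)):
            condA = component_conditional(A, direction)
            condB = component_conditional(B, direction)
            q = {s: condA[restrict(x, A)].get(restrict(s, A), 0.0)
                    * condB[restrict(x, B)].get(restrict(s, B), 0.0)
                 for s in states}
            cei = min(cei, dkl(full, q))   # cause-effect information = min of CI and EI
        best = min(best, cei)              # track the minimum information partition
    return best

# Reproduces the Level 4 example: prints 1.0 for the XOR/AND system in state 00.
print(phi(lambda s: (s[0] ^ s[1], s[0] & s[1]), (0, 0)))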

By itself, this is not enough for the purposes of integrated information theory. Integrated information theory uses ɸ to define gradations of consciousness of systems, but the relationship between ɸ and consciousness isn’t exactly one-to-one (briefly, consciousness resides in non-overlapping local maxima of integrated information).

But this post is really meant to just be about integrated information, and the connections to the theory of consciousness are actually less interesting to me. So for now I’ll stop here! 🙂

Utter confusion about consciousness

I’m starting to get a sense of why people like David Chalmers and Daniel Dennett call consciousness the most mysterious thing known to humans. I’m currently just really confused, and think that pretty much every position available with respect to consciousness is deeply unsatisfactory. In this post, I’ll just walk through my recent thinking.

Against physicalism

In a previous post, I imagined a scientist from the future who told you they had a perfected theory of consciousness, and asked what evidence we could demand to confirm this. This theory of consciousness could presumably be thought of as a complete mapping from physical states to conscious states – a set of psychophysical laws. Questions about the nature of consciousness are then questions about the nature of these laws. Are they ultimately the same kind of laws as chemical laws (derivable in principle from the underlying physics)? Or are they logically distinct laws that must be separately listed in the catalogue of the fundamental facts about the universe?

I take physicalism to be the stance that answers ‘yes’ to the first question and ‘no’ to the second. Dualism and epiphenomenalism answer ‘no’ to the first and ‘yes’ to the second, and are distinguished by the character of the causal relationships between the physical and the conscious entailed by the psychophysical laws.

So, is physicalism right? Imagining that we had a perfect mapping from physical states to conscious states, would this mapping be in principle derivable from the Schrodinger equation? I think the answer to this has to be no; whatever the psychophysical laws are, they are not going to be in principle derivable from physics.

To see why, let’s examine what it looks like when we derive macroscopic laws from microscopic laws. Luckily, we have a few case studies of successful reduction. For instance, you can start with just the Schrodinger equation and derive the structure of the periodic table. In other words, the structure and functioning of atoms and molecules naturally pops out when you solve the equation for systems of many particles.

You can extrapolate this further to larger scale systems. When we solve the Schrodinger equation for large systems of biomolecules, we get things like enzymes and cell membranes and RNA, and all of the structure and functioning corresponding to our laws of biology. And extending this further, we should expect that all of our behavior and talk about consciousness will be ultimately fully accounted for in terms of purely physical facts about the structure of our brain.

The problem is that consciousness is something more than just the words we say when talking about consciousness. While it’s correlated in very particular ways with our behavior (the structure and functioning of our bodies), it is by its very nature logically distinct from these. You can tell me all about the structure and functioning of a physical system, but the question of whether or not it is conscious is a further fact that is not logically entailed. The phrase LOGICALLY entailed is very important here – it may be that as a matter of fact, it is a contingent truth of our universe that conscious facts always correspond to specific physical facts. But this is certainly not a relationship of logical entailment, in the sense that the periodic table is logically entailed by quantum mechanics.

In summary, it looks like we have a problem on our hands if we want to try to derive facts about consciousness from facts about fundamental physics. Namely, the types of things we can derive from something like the Schrodinger equation are facts about complex macroscopic structure and functioning. This is all well and good for deriving chemistry or solid-state physics from quantum mechanics, as these fields are just collections of facts about structure and functioning. But consciousness is an intrinsic property that is logically distinct from properties like macroscopic structure and functioning. You simply cannot expect to start with the Schrodinger equation and naturally arrive at statements like “X is experiencing red” or “Y is feeling sad”, since these are not purely behavioral statements.

Here’s a concise rephrasing of the argument I’ve made, in terms of a trilemma. You cannot consistently accept all three of the following postulates:

  1. There are facts about consciousness.
  2. Facts about consciousness are not logically entailed by the Schrodinger equation (substitute in whatever the fundamental laws of physics end up being).
  3. Facts about consciousness are fundamentally facts about physics.

Denying (1) makes you an eliminativist. Presumably this is out of the question; consciousness is the only thing in the universe that we can know with certainty exists, as it is the only thing that we have direct first-person access to. Indeed, all the rest of our knowledge comes to us by means of our conscious experience, making it in some sense the root of all of our knowledge. The only charitable interpretations I have of eliminativism involve semantic arguments subtly redefining what we mean by “consciousness” away from “that thing which we all know exists from first-hand experience” to something whose existence can actually be cast doubt on.

Denying (2) seems really implausible to me for the considerations given above.

So denying (3) looks like our only way out.

Okay, so let’s suppose physicalism is wrong. This is already super important. If we accept this argument, then we have a worldview in which consciousness is of fundamental importance to the nature of reality. The list of fundamental facts about the universe will be (1) the laws of physics and (2) the laws of consciousness. This is really surprising for anybody like me that professes a secular worldview that places human beings far from the center of importance in the universe.

But “what about naturalism?” is not the only objection to this position. There’s a much more powerful argument.

Against non-physicalism

Suppose we now think that the fundamental facts about the universe fall into two categories: P (the fundamental laws of physics, plus the initial conditions of the universe) and Q (the facts about consciousness). We’ve already denied that P = Q or that there is a logical entailment relationship from P to Q.

Now we can ask about the causal nature of the psychophysical laws. Does P cause Q? Does Q cause P? Does the causation go both ways?

First, conditional on the falsity of physicalism, we can quickly rule out theories that claim that Q causes P (i.e. dualist theories). This is the old Cartesian picture that is unsatisfactory exactly because of the strength of the physical laws we’ve discovered. In short, physics appears to be causally complete. If you fix the structure and functioning on the microscopic level, then you fix the structure and functioning on the macroscopic level. In the language of philosophy, macroscopic physical facts supervene upon microscopic physical facts.

But now we have a problem. If all of our behavior and functioning is fully causally accounted for by physical facts, then what is there for Q (consciousness) to play a causal role in? Precisely nothing!

We can phrase this in the following trilemma (again, all three of these cannot be simultaneously true):

  1. Physicalism is false.
  2. Physics is causally closed.
  3. Consciousness has a causal influence on the physical world.

Okay, so now we have ruled out any theories in which Q causes P. But now we reach a new and even more damning conclusion. Namely, if facts about consciousness have literally no causal influence on any aspect of the physical world, then they have no causal influence, in particular, on your thoughts and beliefs about your consciousness.

Stop to consider for a moment the implications of this. We take for granted that we are able to form accurate beliefs about our own conscious experiences. When we are experiencing red, we are able to reliably produce accurate beliefs of the form “I am experiencing red.” But if the causal relationship goes from P to Q, then this becomes extremely hard to account for.

What would we expect to happen if our self-reports of our consciousness fell out of line with our actual consciousness? Suppose that you suddenly noticed yourself verbalizing “I’m really having a great time!” when you actually felt like you were in deep pain and discomfort. Presumably the immediate response you would have would be confusion, dismay, and horror. But wait! All of these experiences must be encoded in your brain state! In other words, to experience horror at the misalignment of your reports about your consciousness and your actual consciousness, it would have to be the case that your physical brain state would change in a particular way. And a necessary component of the explanation for this change would be the actual state of your consciousness!

This really gets to the heart of the weirdness of epiphenomenalism (the view that P causes Q, but Q doesn’t causally influence P). If you’re an epiphenomenalist, then all of your beliefs and speculations about consciousness are formed exactly as they would be if your conscious state were totally different. The exact same physical state of you thinking “Hey, this coffee cake tastes delicious!” would arise even if the coffee cake actually tasted like absolute shit.

To be sure, you would still “know” on the inside, in the realm of your direct first-person experience that there was a horrible mismatch occurring between your beliefs about consciousness and your actual conscious experience. But you couldn’t know about it in any way that could be traced to any brain state of yours. So you couldn’t form beliefs about it, feel shocked or horrified about it, have any emotional reactions to it, etc. And if every part of your consciousness is traceable back to your brain state, then your conscious state must be in some sense “blind” to the difference between your conscious state and your beliefs about your conscious state.

This is completely absurd. On the epiphenomenalist view, any correlation between the beliefs you form about consciousness and the actual facts about your conscious state couldn’t possibly be explained by the actual facts about your consciousness. So they must be purely coincidental.

In other words, the following two statements cannot be simultaneously accepted:

  • Consciousness does not causally influence our behavior.
  • Our beliefs about our conscious states are more accurate than random guessing.

So where does that leave us?

It leaves us in a very uncomfortable place. First of all, we should deny physicalism. But the denial of physicalism leaves us with two choices: either Q causes P or it does not.

We should deny the first, because otherwise we are accepting the causal incompleteness of physics.

And we should deny the second, because it leads us to conclude that essentially all of our beliefs about our conscious experiences are almost certainly wrong, undermining all of our reasoning that led us here in the first place.

So here’s a summary of this entire post so far. It appears that the following four statements cannot all be simultaneously true. You must pick at least one to reject.

  1. There are facts about consciousness.
  2. Facts about consciousness are not logically entailed by the Schrodinger equation (substitute in whatever the fundamental laws of physics end up being).
  3. Physics is causally closed.
  4. Our beliefs about our conscious states are more accurate than random guessing.

Eliminativists deny (1).

Physicalists deny (2).

Dualists deny (3).

And epiphenomenalists must deny (4).

I find that the easiest to deny of these four is (2). This makes me a physicalist, but not because I think that physicalism is such a great philosophical position that everybody should hold. I’m a physicalist because it seems like the least horrible of all the horrible positions available to me.

Counters and counters to those counters

A response that I would have once given when confronted by these issues would be along the lines of: “Look, clearly consciousness is just a super confusing topic. Most likely, we’re just thinking wrong about the whole issue and shouldn’t be taking the notion of consciousness so seriously.”

Part of this is right. Namely, consciousness is a super confusing topic. But it’s important to clearly delineate between which parts of consciousness are confusing and which parts are not. I’m super confused about how to make sense of the existence of consciousness, how to fit consciousness into my model of reality, and how to formalize my intuitions about the nature of consciousness. But I’m definitively not confused about the existence of consciousness itself. Clearly consciousness, in the sense of direct first-person experience, exists, and is a property that I have. The confusion arises when we try to interpret this phenomenon.

In addition, “X is super confusing” might be a true statement and a useful acknowledgment, but it doesn’t necessarily push us in one direction over another when considering alternative viewpoints on X. So “X is super confusing” isn’t evidence for “We should be eliminativists about X” over “We should be realists about X.” All it does is suggest that something about our model of reality needs fixing, without pointing to which particular component it is that needs fixing.

One more type of argument that I’ve heard (and maybe made in the past, to my shame) is a “scientific optimism” style of argument. It goes:

“Look, science is always confronted with seemingly unsolvable mysteries. Brilliant scientists in each generation throw their hands up in bewilderment and proclaim the eternal unsolvability of the deep mystery of their time. But then a few generations later, scientists end up finding a solution, and putting to shame all those past scientists that doubted the power of their art.

Consciousness is just this generation’s ‘great mystery.’ Those that proclaim that science can never explain the conscious in terms of the physical are wrong, just as Lord Kelvin was wrong when he affirmed that the behavior of living organisms cannot be explained in terms of purely physical forces, and required a mysterious extra element (the ‘vital principle’ as he termed it).”

I think that as a general heuristic, “Science is super powerful and we should be cautious before proclaiming the existence of specific limits on the potential of scientific inquiry” is pretty damn good.

But at the same time, I think that there are genuinely good reasons, reasons that science skeptics in the past didn’t have, for affirming the uniqueness of consciousness in this regard.

Lord Kelvin was claiming that there were physical behaviors that could not be explained by appeal to purely physical forces. That is a very different claim from the claim about consciousness: that there are phenomena which are not even logically reducible to the structural properties of matter, and which therefore cannot be explained by purely physical forces. This difference, it seems to me, is extremely significant, and gets straight to the crux of the central mystery of consciousness.

Getting evidence for a theory of consciousness

I’ve been reading about the integrated information theory of consciousness lately, and wondering about the following question. In general, what are the sources of evidence we have for a theory of consciousness?

One way to think about this is to imagine yourself teleported hundreds of years into the future and talking to a scientist in this future world. This scientist tells you that in his time, consciousness is fully understood. What sort of experiments would you expect to be able to run to verify for yourself that the future’s theory of consciousness really is sufficient?

One thing you could do is point to a bunch of different physical systems, ask the scientist what his theory of consciousness says about them, and compare the answers to your intuitions. So, for instance, does the theory say that you are conscious? What about humans in general? What about people in deep sleep? How about dogs? Chickens? Frogs? Insects? Bacteria? Are Siri-style computer programs conscious? What about a rock? And so on.

The obvious problem with this is that it assumes the validity of your intuitions about consciousness. Sure, it seems obvious that a rock is not conscious, that humans generally are, and that dogs are conscious but less so than humans. But how do we know that these are trustworthy intuitions?

I think the validity of these intuitions is necessarily grounded in our phenomenology and our observations of how it correlates with our physical substance. So, for instance, I notice that when I fall asleep, my consciousness fades in and out. On the other hand, when I wiggle my big toe, this has an effect on the character of my conscious experience, but doesn’t shut it off entirely. This tells me that something about what happens to my body when I fall asleep is relevant to the maintenance of my consciousness, while the angle of my big toe is not.

In general, we make many observations like these and piece together a general theory of how consciousness relates to the physical world, not just in terms of the existence of consciousness, but also in terms of what specific conscious experiences we expect for a given change to our physical system. It tells us, for instance, that receiving a knock on the head or drinking too much alcohol is sometimes sufficient to temporarily suspend consciousness, while breaking a finger or cutting your hair is not.

Now, since we are able to intervene on our physical body at will and observe the results, our model is a causal model. An implication of this is that it should be able to handle counterfactuals. So, for instance, it can give us an answer to the question “Would I still be conscious if I cut my hair off, changed my skin color, shrunk several inches in height, and got a smaller nose?” This answer is presumably yes, because our theory distinguishes between physical features that are relevant to the existence of consciousness and those that are not.

Extending this further, we can ask if we would still be conscious if we gradually morphed into another human being, with a different brain and body. Again, the answer would appear to be yes, as long as nothing essential to the existence of consciousness is severed along the way. But now we are in a position to be able to make inferences about the existence of consciousness in bodies outside our own! For if I think that I would be conscious if I slowly morphed into my boyfriend, then I should also believe that my boyfriend is conscious himself. I could deny this by denying that the same physical states give rise to the same conscious states, but while this is logically possible, it seems quite implausible.

This gives rational grounds for our belief in the existence of consciousness in other humans, and allows us justified access to all of the work in neuroscience analyzing the connection between the brain and consciousness. It also allows us to have a baseline level of trust in the self-reports of other people about their conscious experiences, given the observation that we are generally reliable reporters of our conscious experience.

Bringing this back to our scientist from the future, I can think of some much more convincing tests I would do than the ‘tests of intuition’ that we did at first. Namely, suppose that the scientist was able to take any description of an experience, translate that into a brain state, and then stimulate your brain in such a way as to produce that experience for you. So over and over you submit requests – “Give me a new color experience that I’ve never had before, but that feels vaguely pinkish and bluish, with a high pitch whine in the background”, “Produce in me an emotional state of exaltation, along with the sensation of warm wind rushing through my hair and a feeling of motion”, etc – and over and over the scientist is able to excellently match your request. (Also, wow imagine how damn cool this would be if we could actually do this.)

You can also run the inverse test: you tell the scientist the details of an experience you are having while your brain is being scanned (in such a way that the scientist cannot see it). Then the scientist runs some calculations using their theory of consciousness and makes some predictions about what they’ll see on the brain scan. Now you check the brain scan to see if their predictions have come true.

To me, repeated success in experiments of this kind would be supremely convincing. If a scientist of the future was able to produce at will any experience I asked for (presuming my requests weren’t so far out as to be physically impossible), and was able to accurately translate facts about my consciousness into facts about my brain, and could demonstrate this over and over again, I would be convinced that this scientist really does have a working theory of consciousness.

And note that since this is all rooted in phenomenology, it’s entirely uncoupled from our intuitive convictions about consciousness! It could turn out that the exact framework the scientist is using to calculate the connections between my physical body and my consciousness end up necessarily entailing that rocks are conscious and that dolphins are not. And if the framework’s predictive success had been demonstrated with sufficient robustness before, I would just have to accept this conclusion as unintuitive but true. (Of course, it would be really hard to imagine how any good theory of consciousness could end up coming to this conclusion, but that’s beside the point.)

So one powerful source of evidence we have for testing a theory of consciousness is the correlations between our physical substance and our phenomenology. Is that all, or are there other sources of evidence out there?

We can straightforwardly adopt some principles from the philosophy of science, such as the importance of simplicity and avoiding overfitting in formulating our theories. So for instance, one theory of consciousness might just be an exhaustive list of every physical state of the brain and what conscious experience this corresponds to. In other words, we could imagine a theory in which all of the basic phenomenological facts of consciousness are taken as individual independent axioms. While this theory will be fantastically accurate, it will be totally worthless to us, and we’d have no reason to trust its predictive validity.

So far, we really just have three criteria for evidence:

  1. Correlations between phenomenology and physics
  2. Simplicity
  3. Avoiding overfitting

As far as I’m concerned, this is all that I’m really comfortable with counting as valid evidence. But these are very much not the only sources of evidence that get referenced in the philosophical literature. There are a lot of arguments that get thrown around concerning the nature of consciousness that I find really hard to classify neatly, although often these arguments feel very intuitively appealing. For instance, one of my favorite arguments for functionalism is David Chalmers’ ‘Fading Qualia’ argument. It goes something like this:

Imagine that scientists of the future are able to produce silicon chips that are functionally identical to neurons and can replicate all of their relevant biological activity. Now suppose that you undergo an operation in which gradually, every single part of your nervous system is substituted out for silicon. If the biological substrate implementing the functional relationships is essential to consciousness, then by the end of this procedure you will no longer be conscious.

But now we ask: when did the consciousness fade out? Was it a sudden or a gradual process? Both seem deeply implausible. Firstly, we shouldn’t expect a sudden drop-out of consciousness from the replacement of a single neuron or cluster of neurons, as this would be a highly unusual level of discreteness. It would also imply the ability to switch the entirety of your consciousness on and off with seemingly insignificant changes to the biological structure of your nervous system.

And secondly, if it is a gradual process, then this implies the existence of “pseudo-conscious” states in the middle of the procedure, where your experiences are markedly distinct from those of the original being but you are pretty much always wrong about your own experiences. Why? Well, the functional relationships have stayed the same! So your beliefs about your conscious states, the memories you form, the emotional reactions you have, will all be exactly as if there has been no change to your conscious states. This seems totally bizarre and, in Chalmers’ words, “we have little reason to believe that consciousness is such an ill-behaved phenomenon.”

Now, this is a fairly convincing argument to me. But I have a hard time understanding why it should be. The argument’s convincingness seems to rely on some very high-level abstract intuitions about the types of conscious experiences we imagine organisms could be having, and I can’t think of a great reason for trusting these intuitions. Maybe we could chalk it up to simplicity, and argue that the notion of consciousness entailed by substrate-dependence must be extremely unparsimonious. But even this connection is not totally clear to me.

A lot of the philosophical argumentation about consciousness feels this way to me; convincing and interesting, but hard to make sense of as genuine evidence.

One final style of argument that I’m deeply skeptical of is arguments from pure phenomenology. This is, for instance, how Giulio Tononi likes to argue for his integrated information theory of consciousness. He starts from five supposedly self-evident truths about the character of conscious experience, then attempts to infer facts about the structure of the physical systems that could produce such experiences.

I’m not a big fan of Tononi’s observations about the character of consciousness. They seem really vaguely worded and hard enough to make sense of that I have no idea if they’re true, let alone self-evident. But it is his second move that I’m deeply skeptical of. The history of philosophers trying to move from “self-evident intuitive truths” to “objective facts about reality” is pretty bad. While we might be plenty good at detailing our conscious experiences, trying to make the inferential leap to the nature of the connection between physics and consciousness is not something you can do just by looking at phenomenology.

Consciousness

Every now and then I go through a phase in which I find myself puzzling about consciousness. I typically leave these phases feeling like my thoughts are slightly more organized on the problem than when I started thinking about it, but still feeling overwhelmingly confused about the subject.

I’m currently in one of those phases!

It started when I was watching an episode of the recent Planet Earth II series (which I recommend to everybody – it’s beautiful). One scene contains a montage of grizzly bears that have just emerged from hibernation and are now passionately grinding their backs against trees to shed their excess fur.

Nobody with a soul would watch this video and not relate to the back-scratching bears through memories of the rush of pleasure and utter satisfaction of a great back scratching session.

The natural question this raises is: how do we know that the bears are actually feeling the same pleasure that we feel when we get our backs scratched? How could we know that they are feeling anything at all?

A modest answer is that it’s just intuition. Some things just look to us like they’re conscious, and we feel a strong intuitive conviction that they really are feeling what we think they are.

But this is unsatisfying. ‘Intuition’ is only a good answer to a question when we have a good reason to presume that our intuitions should be reliable in the context of the question. And why should we believe that our intuitions about a rock being unconscious and a bear being conscious have any connection to reality? How can we rationally justify such beliefs?

The only starting point we have for assessing any questions about consciousness is our own conscious experience – the only one that we have direct and undeniable introspective access to. If we’re to build up a theory of consciousness, we must start there.

So for instance, we notice that there are tight correlations between patterns of neural activation in our brains and our conscious experiences. We also notice that there are some physical details that seem irrelevant to the conscious experiences that we have.

This distinction between ‘the physical details that are relevant to what conscious experiences I have’ and ‘the physical details that are irrelevant to what conscious experiences I have’ allows us to make new inferences about conscious experiences that are not directly accessible to us.

We can say, for instance, that a perfect physical clone of mine that is in a different location than me probably has a similar range of conscious experiences. This is because the only difference between us is our location, which is largely irrelevant to the range of my conscious experiences (I experience colors and emotions and sounds the same way whether I’m on one side of the room or another).

And we can draw similar conclusions about a clone of mine if we also change their hairstyle or their height or their eye color. Each of these changes should only affect our view of their consciousness insofar as we notice changes in our consciousness upon changes in our height, hairstyle, or eye color.

This gives us rational grounds on which to draw conclusions like ‘Other human beings are conscious, and likely have similar types of conscious experiences to me.’ The differences between other human beings and me are not the types of things that seem able to make them have wildly different types of conscious experiences.

Once we notice that we tend to reliably produce accurate reports about our conscious experiences when there are no incentives for us to lie, we can start drawing conclusions about the nature of consciousness from the self-reports of other beings like us.

(Which is of course how we first get to the knowledge about the link between brain structure and conscious experience, and the similarity in structure between my brain and yours. We probably don’t actually personally notice this unless we have access to a personal MRI, but we can reasonably infer from the scientific literature.)

From this we can build up a theory of consciousness. A theory of consciousness examines a physical system and reports back on things like whether or not this system is conscious and what types of conscious experiences it is having.

***

Let me now make a conceptual separation between two types of theories of consciousness: epiphenomenal theories and causally active theories.

Epiphenomenal theories of consciousness are structured as follows: There are causal relationships leading from the physical world to conscious experiences, and no causal relationships leading back.

Causally active theories of consciousness have both causal arrows leading from the physical world to consciousness, and back from consciousness to the physical world. So physical stuff causes conscious experiences, and conscious experiences have observable behavioral consequences.

Let’s tackle the first class of theories first. How could a good Bayesian update on these theories? Well, the theories make predictions about what is being experienced, but make no predictions about any other empirically observable behaviors. So the only source of evidence for these theories is our personal experiences. If Theory X tells me that when I hit my finger with a hammer, I will feel nothing but a sense of mild boredom, then I can verify that Theory X is wrong only through introspection of my own experiences.

But even this is problematic.

The mental process by which I verify that Theory X is wrong is occurring in my brain, and on any epiphenomenal theory, such a process cannot be influenced by any actual conscious experiences that I’m having.

If suddenly all of my experiences of blue and red were inverted, then any reaction of mine, especially one which accurately reported what had happened, would have to be a wild coincidence. After all, the change in my conscious experience can’t have had any causal effects on my behavior.

In other words, there is no reason to expect on an epiphenomenal theory of consciousness that the beliefs I form or the self-reports I produce about my own experiences should align with my actual conscious experiences.

And yet they invariably do. Every time I notice that I have accurately reported a conscious experience, I have noticed something that is wildly unlikely to occur under any epiphenomenal theory of consciousness. And by Bayes’ rule, each time this happens, all epiphenomenal theories are drastically downgraded in credence.
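
To put a toy number on “drastically downgraded”: suppose, with figures made up purely for illustration, that an accurate self-report has probability 0.99 if conscious states causally drive our reports, but only 1-in-1000 if the match is sheer coincidence, as the epiphenomenalist seems forced to say. Then a sketch of the Bayesian update looks like this:

# Toy Bayesian odds update with made-up likelihoods, for illustration only.
p_accurate_given_causal = 0.99          # assumed: reports track experience
p_accurate_given_epiphenomenal = 0.001  # assumed: match is pure coincidence
odds = 1.0                              # prior odds, causal : epiphenomenal

for k in range(1, 6):
    odds *= p_accurate_given_causal / p_accurate_given_epiphenomenal   # Bayes factor per report
    print(f"after {k} accurate reports: {odds:,.0f} : 1 against epiphenomenalism")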

So this entire class of theories is straightforwardly empirically wrong, and will quickly be eliminated from our model of reality through some introspection. The theories that are left involve causation going both from the physical world to consciousness and back from consciousness to the physical world.

In other words, they involve two mappings – one from a physical system to consciousness, and another from consciousness to the predicted future behavior of the physical system.

But now we have a puzzle. The second mapping involves third-party observable physical effects that are caused by conscious experiences. But in our best understanding of the world, physical effects are always found to be the results of physical causes. For any behavior that my theory tells me is caused by a conscious experience, I can trace a chain of physical causes that uniquely determined this behavior.

What does this mean about the causal role of consciousness? How can it be true that conscious experiences are causal determinants of our behavior, and also that our behaviors are fully causally determined by physical causes?

The only way to make sense of this is by concluding that conscious experiences must be themselves purely physical causes. So if my best theory of consciousness tells me that experience E will cause behavior B, and my best theory of physics tells me that the cause of B is some set of physical events P, then E is equal to P, or some subset of P.

This is how we are naturally led to what’s called identity physicalism – the claim that conscious experiences are literally the same thing as some type of physical pattern or substance.

***

Let me move on to another weird aspect of consciousness. Imagine that I encounter an alien being that looks like an exact clone of myself, but made purely of silicon. What does our theory of consciousness say about this being?

It seems like this depends on if the theory makes reference only to the patterns exhibited by the physical structure, or to the physical structure itself. So if my theory is about the types of conscious experiences that arise from complicated patterns of carbon, then it will tell me that this alien being is not conscious. But if it just references the complicated patterns, and doesn’t specify the lower-level physical substrate from which the pattern arises, then the alien being is conscious.

The problem is that it’s not clear to me which of these we should prefer. Both make the same third-party predictions, and first-party verifications could only be made through a process involving a transformation of our body from one substrate to the other. In the absence of such a process, both of the theories make the exact same predictions about what the world looks like, and thus will be boosted or shrunk in credence exactly the same way by any evidence we receive.

Perhaps the best we could do is say that the first theory contains all of the complicated details of the second, but also has additional details, and so should be penalized by the conjunction rule? So “carbon + pattern” will always be less likely than “pattern” by some amount. But these differences in priors can’t give us that much, as they should in principle be dwarfed in the infinite-evidence limit.

What this amounts to is an apparently un-leap-able inferential gap regarding the conscious experiences of beings that are qualitatively different from us.