There’s a puzzle for the implementation of Bayesian reasoning in human beings: the starting point of the reasoning process in humans is *conscious experience*, and it’s not totally clear how we should update on conscious experiences.

Richard Jeffrey defined a *summary* of an experience E as a set of propositions B = {B_{1}, B_{2}, … B_{n}} such that for every other proposition A in your belief system, P(A | B_{i}) = P(A | B_{i}, E) for each B_{i} in B.

In other words, B is a set of propositions that *fully screens off* your experience from the rest of your beliefs.

This is a useful concept because summary propositions capture everything that is epistemically relevant about a conscious experience. If you have a summary B of an experience E, then you only need to know P(A | B_{i}) and P(B_{i} | E) for each i in order to calculate P(A | E) = P(A | B_{1}) P(B_{1} | E) + … + P(A | B_{n}) P(B_{n} | E).
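This update is just the law of total probability run through the summary set. A minimal sketch in Python (the function name and the dictionary representation are my own, purely for illustration):

```python
def update_on_experience(p_B_given_E, p_A_given_B):
    """Compute P(A | E) from a summary set that screens off E.

    p_B_given_E: dict mapping each summary proposition B_i to P(B_i | E).
    p_A_given_B: dict mapping each summary proposition B_i to P(A | B_i).

    Because the B_i screen off E, P(A | B_i, E) = P(A | B_i), so
    P(A | E) = sum_i P(A | B_i) * P(B_i | E).
    """
    # The summary propositions should exhaust the possibilities.
    assert abs(sum(p_B_given_E.values()) - 1.0) < 1e-9
    return sum(p_A_given_B[b] * p for b, p in p_B_given_E.items())
```

Note that the two dictionaries separate the two ingredients the post mentions: how the experience bears on the summary propositions, and how the summary propositions bear on everything else.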

Notice that the summary set is subjective; it is defined only in terms of properties of your personal belief network. The set of facts that screens off E for you might be different from the set of facts that screens it off for somebody else.

Quick example.

Consider a brief impression by candlelight of a cloth held some distance away from you. Call this experience E.

Suppose that all you could decipher from E is that the cloth was around 2 meters away from you, and that it was either blue (with probability 60%) or green (with probability 40%). Then the summary set for E might be {“The cloth is blue”, “The cloth is green”, “The cloth is 2 meters away from you”, “The cloth is 3 meters away from you”, etc.}.

If this is the right summary set, then the propositions “The cloth is blue”, “The cloth is green”, and “The cloth is x meters away from you” should screen off E from the rest of your beliefs: all you need from E are the probabilities it assigns to these propositions.
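With assumed numbers, the arithmetic looks like this. Suppose some downstream belief A, say “the cloth is my tablecloth” (a hypothetical proposition with made-up likelihoods), has P(A | blue) = 0.8 and P(A | green) = 0.1, and set the distance propositions aside for simplicity:

```python
# From the candlelight experience E: the cloth looked blue with
# probability 0.6, green with probability 0.4.
p_blue_given_E = 0.6
p_green_given_E = 0.4

# Assumed likelihoods for the hypothetical downstream belief A:
p_A_given_blue = 0.8
p_A_given_green = 0.1

# Because the color propositions screen off E from A,
# P(A | color, E) = P(A | color), so:
p_A_given_E = p_A_given_blue * p_blue_given_E + p_A_given_green * p_green_given_E
print(round(p_A_given_E, 4))  # prints 0.52
```

The experience itself never has to be consulted again: its entire epistemic contribution to A flows through the 0.6/0.4 split over the summary propositions.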

One trouble is that it’s not obvious how to convert a given experience into a set of summary propositions. We could always be leaving something out. For instance, one more thing we learned upon observing E was the proposition “I can see light.” This proposition is certainly not screened off by the others so far, so we need to add it to the summary set as well.

But how do we know that we’ve gotten everything now? If we think a little more, we realize that we have also learned something about the nature of the light given off by the candle flame: its spectrum contains the color of light that the cloth reflected back to us!

But now this additional consideration is related to how we interpret the color of the cloth. In other words, not only might we be missing something from our summary set, but that missing piece might be relevant to how we interpret the others.

I’d like to think more about this question: In general, how do you determine the set of propositions that screens off a given experience from the rest of your beliefs? Ultimately, to be able to coherently assess the impact of experiences on your web of beliefs, your model of reality must contain a model of yourself as an experiencer.

The nature of this model is pretty interesting from a philosophical perspective. Does it arise organically out of factual beliefs about the physical world? A physicalist would say so. To me, it seems quite plausible that modeling yourself as a conscious experiencer requires a *separate* set of rules relating physical happenings to conscious experiences. How to represent this set of rules as a priori hypotheses to be updated on seems very unclear to me.