There’s a puzzle for the implementation of probabilistic reasoning in human beings: the starting point of the reasoning process in humans is conscious experience, and it’s not totally clear how we should update on conscious experiences.
Jeffreys defined a summary of an experience E as a set B of propositions {B1, B2, … Bn} such that for every other proposition A in your belief system, P(A | Bi) = P(A | Bi, E) for each Bi in B.
In other words, B is a minimal set of propositions that fully screens off your experience.
This is a useful concept because summary sentences allow you to isolate everything that is epistemically relevant about a conscious experience. If you have a summary B of an experience E, then you only need to know P(A | Bi) and P(Bi | E) for each Bi in order to calculate P(A | E).
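To spell out that calculation (assuming, as in Jeffrey-style conditioning, that the Bi are mutually exclusive and jointly exhaustive), the update is just the law of total probability: P(A | E) = P(A | B1) P(B1 | E) + P(A | B2) P(B2 | E) + … + P(A | Bn) P(Bn | E). The experience enters only through the terms P(Bi | E); the terms P(A | Bi) are already part of your belief network.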
Notice that the summary set is subjective; it is defined only in terms of properties of your personal belief network. The set of facts that screens off E for you might be different from the set of facts that screens it off for somebody else.
Quick example.
Consider a brief impression by candlelight of a cloth held some distance away from you. Call this experience E.
Suppose that all you could decipher from E is that the cloth was around 2 meters away from you, and that it was either blue (with probability 60%) or green (with probability 40%). Then the summary set for E might be {“The cloth is blue”, “The cloth is green”, “The cloth is 2 meters away from you”, “The cloth is 3 meters away from you”, etc.}.
If this is the right summary set, then knowing the probabilities P(“The cloth is blue” | E), P(“The cloth is green” | E) and P(“The cloth is x meters away from you” | E) should screen off E from the rest of your beliefs.
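To see how the numbers would flow through, here’s a minimal sketch in Python. It treats the color partition in isolation (ignoring the distance propositions), and the downstream proposition A and its conditional probabilities are invented purely for illustration; nothing here is meant as the unique way to carve up the experience.

```python
# Minimal sketch: updating a downstream belief A on experience E via a
# summary set, using the color partition only (a simplification).

# What the experience tells us directly: P(Bi | E)
p_B_given_E = {"blue": 0.6, "green": 0.4}

# Hypothetical downstream proposition A, e.g. "the cloth belongs to my
# friend, who only buys blue cloth" (numbers invented for illustration)
p_A_given_B = {"blue": 0.9, "green": 0.2}

# Law of total probability: P(A | E) = sum_i P(A | Bi) * P(Bi | E)
p_A_given_E = sum(p_A_given_B[b] * p_B_given_E[b] for b in p_B_given_E)

print(p_A_given_E)  # 0.9 * 0.6 + 0.2 * 0.4 = 0.62
```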
One trouble is that it’s not exactly obvious how to go about converting a given experience into a set of summary propositions. We could always be leaving something out. For instance, one more thing we learned upon observing E was the proposition “I can see light.” This is certainly not screened off by the other propositions so far, so we need to add it in as well.
But how do we know that we’ve gotten everything now? If we think a little more, we realize that we have also learned something about the nature of the light given off by the candle flame. We learn that it gives off light capable of producing the color we saw!
But now this additional consideration is related to how we interpret the color of the cloth. In other words, not only might we be missing something from our summary set, but that missing piece might be relevant to how we interpret the others.
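To make the worry concrete, here is a toy joint distribution (all numbers invented for illustration, not taken from the post) in which “The cloth is blue” fails to screen off E from a proposition about the candle: even once the cloth’s true color is fixed, the blue-looking appearance still shifts our belief about whether the candle renders colors faithfully.

```python
# Toy joint distribution showing how the color propositions alone can fail
# to screen off E. All numbers are invented for illustration.
# C = true cloth color, F = "the candle renders colors faithfully",
# and "looked blue" stands in for the experience E.

p_C = {"blue": 0.5, "green": 0.5}           # prior over the true color
p_F = {True: 0.7, False: 0.3}               # prior over candle faithfulness
p_looks_blue = {                            # P(looked blue | C, F)
    ("blue", True): 0.95, ("blue", False): 0.5,
    ("green", True): 0.05, ("green", False): 0.5,
}

def p_F_given_blue(condition_on_E):
    """P(F = True | C = blue), optionally also conditioning on 'looked blue'."""
    num = den = 0.0
    for f in (True, False):
        w = p_C["blue"] * p_F[f]
        if condition_on_E:
            w *= p_looks_blue[("blue", f)]
        den += w
        if f:
            num += w
    return num / den

print(p_F_given_blue(condition_on_E=False))  # 0.7 -- just the prior
print(p_F_given_blue(condition_on_E=True))   # ~0.82 -- E still moves F, so
                                             # "The cloth is blue" by itself
                                             # does not screen off E for F
```

In a case like this, the summary set would have to be enlarged, say to a partition of joint propositions about both the cloth and the candle’s light, before it could screen off E.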
I’d like to think more about this question: In general, how do you determine the set of propositions that screens off a given experience from the rest of your beliefs? Ultimately, to be able to coherently assess the impact of experiences on your web of beliefs, your model of reality must contain a model of yourself as an experiencer.
The nature of this model is pretty interesting from a philosophical perspective. Does it arise organically out of factual beliefs about the physical world? Well, this is what a physicalist would say. To me, it seems quite plausible that modeling yourself as a conscious experiencer would require a separate set of rules relating physical happenings to conscious experiences. How we should model this set of rules as a set of a priori hypotheses to be updated on seems very unclear to me.
Conscious experiences are physical happenings which are structured in such a way that they infer themselves to be exceptions to the natural order, or aphysical! (This is known as the genuine problem rather than the hard problem of consciousness, and it is quite possible to put forth naturalistic explanations for the appearance of dualism.)
Look up variational free energy minimization and the Bayesian brain. Its strongest advocate is Karl Friston, so you can just look up his work. We need not restrict ourselves to propositional states.
And importantly, you cannot stop these considerations at “the physical world”, i.e. at the contents of exteroceptive sensory experiences. Actually interacting with the world (which is itself crucial to successively building up an integrated world-concept in the first place) depends on interoceptive inference at every turn.
The fascinating thing is that it is out of self-organizing patterns of sensory states (now broadly conceived to include both “5 senses” exteroception and “gut feelings” / “visceral sensations” concerning the body’s own state) that models become parsed into self and non-self. (As Fichte rightly observes, in the primordial absolute one does not begin with self; the self is produced out of the self-organizing action.)