Confirmation bias, or different priors?

Consider two people.

Person A is a person of color who grew up in an upper middle class household in a wealthy and crime-free neighborhood. This person has gone through their life encountering mostly others similar to themselves – well-educated liberals who believe strongly in socially progressive ideals. They would deny ever having personally experienced or witnessed racism, despite their skin color. In addition, when their friends discuss the problem of racism in America, they feel a baseline level of skepticism about the actual extent of the problem, and suspect that the whole thing has been overblown by sensationalism in the media. Certainly there was racism in the past, they reason, but the problem seems largely solved in the present day. This suspicion of the mainstream narrative seems confirmed by the emphasis placed on shootings of young black men like Michael Brown, which upon closer inspection were not clear cases of racial discrimination at all.

Person B grew up in a lower middle class family, and developed friendships across a wide range of socioeconomic backgrounds. They began witnessing racist behavior towards black friends at a young age. As they got older, this racism became more pernicious, and several close friends described their frustration at experiences of racial profiling. Many had run-ins with the law, and some ended up in jail. Studying history, Person B could see that racism is not a new phenomenon, but descends from a long and brutal history of segregation, Jim Crow, and discriminatory housing practices. To Person B, it is extremely obvious that racism is a deeply pervasive force in society today, and that it results in many injustices. These are the injustices that sparked the Black Lives Matter movement, which they enthusiastically support. They are aware that BLM has made some mistakes in the past, but they see a torrent of evidence in favor of the movement’s primary message: that policing is racially biased.

Now both A and B are presented with the infamous Roland Fryer study, which found that when you carefully control for confounding factors, black people are no more likely to be shot by police officers than white people, and are in fact slightly less likely.

Person A is not super surprised by these results, and feels vindicated in their skepticism of the mainstream narrative. To them, this is clear-cut evidence supporting their preconception that a large part of the Black Lives Matter movement rests on an exaggeration of the seriousness of racial issues in the country.

On the other hand, Person B immediately dismisses the results of this study. They know from a lifetime of experience that the results must be flawed, and their primary takeaway is the fallibility of statistics in analyzing complex social issues.

They examine the study closely, trying to find the flaw. They come up with a few hypotheses: (1) The data is subject to one or more selection biases, having been provided by police departments that have an interest in seeming colorblind, and having come from the post-Ferguson period in which police officers became temporarily more careful not to shoot black suspects (dubbed the Ferguson effect). (2) The study looked at encounters between officers and black or white civilians, but didn’t take into account the effects of differential encounter rates. If police are more likely to pull over or arrest black people than white people for minor violations, then many additional low-risk encounters enter the denominator, which would artificially lower the per-encounter rate of officer shootings.
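Hypothesis (2) is essentially a denominator effect. A minimal sketch with entirely made-up numbers (nothing here comes from the actual study) shows how padding one group’s encounter pool with low-risk stops drags down its per-encounter shooting rate:

```python
# Toy illustration of hypothesis (2), with made-up numbers: extra low-risk
# stops dilute a group's per-encounter shooting rate, even when high-risk
# encounters and shootings are identical across groups.

high_risk_encounters = 1_000
shootings = 10                          # identical for both groups by assumption

extra_low_risk_stops = {"white": 2_000, "black": 8_000}  # hypothetical counts

for group, stops in extra_low_risk_stops.items():
    per_encounter_rate = shootings / (high_risk_encounters + stops)
    print(group, round(per_encounter_rate, 5))

# white 0.00333
# black 0.00111  <- appears "less likely to be shot" per encounter
```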

A and B now discuss the findings. To A, B’s response to the data looks highly irrational: a textbook example of confirmation bias. B immediately dismissed data that contradicted their beliefs, and rationalized the dismissal by conjuring complicated explanations for why the results of the study were wrong. Sure, thinks Person A, these explanations were clever and suggested the need for more nuance in the statistical analysis, but surely the straightforward interpretation of the data was more likely than these complicated alternatives.

B finds A’s quick acceptance of the results troubling and suggestive of a lack of nuance. To B, it appears that A already had their mind made up, and eagerly jumped onto this study without being sufficiently cautious.

Now the two are presented with a series of in-depth case studies of shootings of young black men that show clear evidence of racial profiling.

These stories fit perfectly within B’s worldview, and they find themselves deeply moved by the injustices that these young men experienced. Their dedication to the cause of fighting police brutality is reinvigorated.

But now A is the skeptical one. After all, the plural of anecdote is not data, and the existence of some racist cops by no means indicts all of society as racist. And what about all the similar stories with white victims that don’t get reported? They also recall the pernicious effects of cognitive biases that could make a young black man fed narratives of police racism more likely to see racism where there is none.

All of this gives B the impression that A is doing cartwheels to avoid acknowledging the simple fact that racism exists.

In the first case, was Person B falling prey to confirmation bias? Was A? Were they both?

How about in the second case… Is A thinking irrationally, as B believes?

✯✯✯

I think that in both cases, the right answer is most likely no. In each case we had two people who were rationally responding to the evidence they received, just starting from very different presuppositions.

Said differently, A and B had vastly different priors upon encountering the same data, and this difference is sufficient to explain their differing reactions. Given these priors, it is perfectly rational for B to quickly dismiss the Fryer report and search for alternative explanations, and for A to regard stories of racial profiling as overblown. It makes sense for the same reason that it makes sense for a scientist who encounters an eerily accurate psychic to write the performance off immediately as some complicated psychological illusion: strong priors are not easily budged by evidence, and there are almost always alternative explanations that are more likely.

This is all perfectly Bayesian, by the way. If two interpretations of a data set equally well predict or make sense of the data (i.e. P(data | interpretation 1) = P(data | interpretation 2)), then their posterior odds ratio P(interpretation 1 | data) / P(interpretation 2 | data) should be no different from their prior ratio P(interpretation 1) / P(interpretation 2). In other words, strong priors dictate strong posteriors when the evidence is weakly discriminatory between the hypotheses.
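A minimal sketch of this identity in code, with made-up priors and likelihoods (none of the numbers mean anything; they just make the invariance visible):

```python
# Made-up numbers: when two interpretations assign the data the same
# likelihood, Bayes' rule leaves their odds ratio exactly where the
# priors put it.

prior_1, prior_2 = 0.9, 0.1      # P(interpretation 1), P(interpretation 2)
likelihood = 0.3                 # P(data | I1) = P(data | I2)

unnorm_1 = prior_1 * likelihood
unnorm_2 = prior_2 * likelihood
total = unnorm_1 + unnorm_2
post_1, post_2 = unnorm_1 / total, unnorm_2 / total

print(round(prior_1 / prior_2, 6), round(post_1 / post_2, 6))  # 9.0 9.0
```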

When the evidence does rule out some interpretations, probability mass is preferentially shifted towards the interpretations that started with stronger priors. For instance, suppose you have three theories with credences (P(T1), P(T2), P(T3)) = (5%, 5%, 90%), and some evidence E is received that rules out T1, but is equally likely under T2 and T3. Then your posterior probabilities will be (P(T1 | E), P(T2 | E), P(T3 | E)) = (0%, 5.3%, 94.7%).

T2 gains only about 0.3 percentage points of credence, while T3 gains about 4.7. In other words, while the posterior odds between T2 and T3 stay the same, the probability mass freed up by ruling out T1 flows mostly towards the theory favored in the prior.
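A quick arithmetic check of that example:

```python
# Renormalize after E rules out T1. Likelihoods are relative P(E | T_i).

priors = [0.05, 0.05, 0.90]         # P(T1), P(T2), P(T3)
likelihoods = [0.0, 1.0, 1.0]       # T1 ruled out; E equally likely under T2, T3

unnorm = [p * l for p, l in zip(priors, likelihoods)]
posteriors = [u / sum(unnorm) for u in unnorm]

print([round(p, 4) for p in posteriors])   # [0.0, 0.0526, 0.9474]
```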

The moral of this story is to be careful when accusing others, internally or externally, of confirmation bias. What looks like mulish unwillingness to take seriously alternative hypotheses can actually be good rational behavior and the effect of different priors. Being able to tell the difference between confirmation bias and strong priors is a hard task – one that most people probably won’t undertake, opting instead to assume the worst of their ideological opponents.

Another moral is that privilege is an important epistemic concept. Privilege means, in part, having lived a life sufficiently insulated from injustice that it makes sense to wonder if it is there at all. Privilege is a set of priors tilted in favor of colorblindness and absolute moral progress. “Recognizing privilege” corresponds to doing anthropic reasoning to correct for selection biases in the things you have personally experienced, and adjusting your priors accordingly.
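To make that epistemic point concrete, here is a toy Bayesian version with entirely made-up numbers: the same observation (“I’ve never personally witnessed racism”) carries very different evidential weight depending on how insulated a life is.

```python
# Toy illustration (hypothetical numbers): "never witnessed racism" is weak
# evidence for someone whose environment makes witnessing it unlikely anyway.

prior_pervasive = 0.5                 # P(racism is pervasive)

p_no_obs_insulated = 0.9              # P(never witness it | pervasive, insulated life)
p_no_obs_exposed = 0.2                # P(never witness it | pervasive, exposed life)
p_no_obs_if_rare = 0.95               # P(never witness it | racism rare)

def update(p_no_obs_given_pervasive):
    """Posterior P(pervasive | never witnessed racism) via Bayes' rule."""
    num = p_no_obs_given_pervasive * prior_pervasive
    denom = num + p_no_obs_if_rare * (1 - prior_pervasive)
    return num / denom

print(round(update(p_no_obs_insulated), 3))  # 0.486 -- prior barely moves
print(round(update(p_no_obs_exposed), 3))    # 0.174 -- a substantial update
```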

6 thoughts on “Confirmation bias, or different priors?”

  1. Still thinking about this today. 😛 Part of why I love it is that it’s a morally neutral way of distilling the core of the common definition (“unearned advantage”).

  2. Is there actually a difference between confirmation bias and having different priors? Perhaps what you have described IS confirmation bias, using Bayesian lingo. The definition of confirmation bias is “the tendency to interpret new evidence as confirmation of one’s existing beliefs or theories” – isn’t that exactly what one does when taking priors into account?

    1. Thanks for commenting! 🙂 🙂

      Trying to make a clear distinction between confirmation bias and having different priors is exactly what made me write this post. I think that the distinction is hard to make and that a lot of what we call confirmation bias might be better summarized as the effect of different priors, but I do think that there is a useful distinction there.

      Looking at your definition, we could interpret it in at least two ways: (1) that new evidence disproportionately favors theories that you have a prior preference for, and (2) that new evidence is misinterpreted to support theories that you prefer, when it actually doesn’t support them. The first of these makes sense from a Bayesian framework, while the second does not.

      I’d maybe describe three general categories of behavior that people might describe as confirmation bias:

      (1) High priors are evidentially favored: Beliefs that start with higher priors are typically given stronger updates from evidential support than beliefs with lower priors, even if the likelihood of the evidence is the same for each belief.

      (2) Excluding evidence: Evidence that challenges a high-prior hypothesis is ignored or forgotten, while evidence that supports it is retained.

      (3) Misinterpreting evidence: Evidence against a high-prior hypothesis is taken to be supporting it. Or, evidence for a high-prior hypothesis is updated on *more strongly* than it should be. And vice versa.

      Given that the term confirmation bias is meant to imply irrationality, I’d want to make sure it only applies to (2) and (3), and not to (1).

      1. Hmm, just thought of another category that could nicely fit into a conception of confirmation bias that does not make it rationally justifiable:

        (4) Biased evidence collection: The way that one gathers information is selected in order to avoid potential sources of conflicting evidence. (e.g. a conservative avoiding any news sources besides Fox)

        From wiki: “Confirmation bias is the tendency to search for, interpret, favor, and recall information in a way that confirms one’s preexisting beliefs or hypotheses.”

        “Search for, favor, and recall information” nicely fit into (2) and (4). “Interpret” is the tricky one… if somebody’s interpretation of information is just taken to be the way the information affects their network of beliefs, then the tendency to interpret information in a way that disproportionately confirms preexisting beliefs could be perfectly rational. I’d want to split it into two categories analogous to (1) and (3) – proper updating and improper updating.
