In a previous post, I developed an analogy between patterns of reasoning and sampling procedures. I want to go a little further with two expansions on this idea.
Scientific laws and domains of validity
First, different sampling procedures can focus on sampling from different regions of the urn. This is analogous to how scientific theories have specific domains of validity that they were built to explain, and in general their conclusions do not extend beyond this domain.
Classical Newtonian mechanics is a great theory to explain slowly swinging pendulums and large gravitating bodies, but if you apply it to particles that are too small, or moving too fast, or too massive, then you’ll get bad results. In general, any scientific law will be known to work within a certain range of energies or sizes or speeds.
By analogy, the Super Persuader was not a good source of evidence, because its sampling procedure was to scour the urn for any black balls it could find, and ignore all white balls. Ideally, we want our truth-seeking enterprises to function like random sampling of balls from an urn. But of course, the way that scientists seek out evidence is not analogous to randomly sampling from the entire urn consisting of all pieces of evidence as to the structure of reality. Instead, a psychologist will focus on one region of the urn, a biologist another, and a physicist another.
In this way, a psychologist can say that the evidence they receive is representative of the general state of evidence in a certain region of the urn. The region of the urn being sampled by the scientist represents the domain of validity of the laws they develop.
Developing this line further, we might imagine that there is a general positioning of pieces of evidence or good arguments in terms of accessibility to humans. Some arguments or ideas or pieces of evidence about reality will lie near the top of the urn, and will be low-hanging fruit for any investigators. (Mixing metaphors!) Others will lie deeper down, requiring more serious thought and dedicated investigation to come across.
Advances in tech can allow scientists to dig deeper into Nature’s urn, expanding the domains of validity of their theories and becoming better acquainted with the structure of reality.
Cognitive biases and generalized distortions of reasoning
Second, a taxonomy of different ways in which reasoning can go wrong naturally arises from the metaphor. Some of these correspond nicely to well-known cognitive biases.
For instance, the sampling procedure used by the Super Persuader involved selectively choosing evidence to support a certain hypothesis. In general, this corresponds to selection biases. A special case of this is motivated reasoning. When we strongly desire a hypothesis to be true, we are more likely to seek out, remember, and accept evidence in its favor than evidence against it. Selection biases are in general just non-random sampling procedures.
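To make the contrast concrete, here is a toy simulation of the two sampling procedures. All the numbers (a 30% black urn, samples of 100) are made up for illustration; the point is only that the biased procedure's estimate tells you nothing about the urn.

```python
import random

random.seed(0)

# Hypothetical urn: 30% black balls (evidence for the hypothesis),
# 70% white balls (evidence against it).
urn = ["black"] * 300 + ["white"] * 700
random.shuffle(urn)

def estimate_black_fraction(sample):
    """Fraction of the sample supporting the hypothesis."""
    return sum(1 for ball in sample if ball == "black") / len(sample)

# Unbiased procedure: uniform random draws from the whole urn.
fair_sample = random.sample(urn, 100)

# Selection-biased procedure: a Super Persuader that scours the urn
# for black balls and discards every white ball it encounters.
biased_sample = [ball for ball in urn if ball == "black"][:100]

print(estimate_black_fraction(fair_sample))    # close to the true 0.3
print(estimate_black_fraction(biased_sample))  # exactly 1.0, regardless of the urn
```

The biased estimate is always 1.0 no matter what the urn actually contains, which is exactly why such a source carries no evidential weight.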
Another class of error is misjudgment, where we draw a black ball, but see it as a white ball. This would correspond to things like the backfire effect (LINK), where evidence against a proposition we favor serves to strengthen our belief in it, or just failure to understand an argument or a piece of evidence.
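Misjudgment can be sketched the same way: the sampling is random, but each unwelcome ball has some chance of being perceived as the opposite color. The flip probability and urn composition below are illustrative assumptions, not estimates of any real bias.

```python
import random

random.seed(1)

true_fraction_black = 0.3  # assumed true share of supporting evidence
p_flip = 0.4               # assumed chance of seeing a black ball as white
n = 10_000

observed_black = 0
for _ in range(n):
    ball_is_black = random.random() < true_fraction_black
    # Misjudgment: a black ball is sometimes registered as white
    # (e.g. failing to understand an argument, or the backfire effect).
    seen_as_black = ball_is_black and random.random() >= p_flip
    observed_black += seen_as_black

print(observed_black / n)  # ≈ 0.3 * (1 - 0.4) = 0.18, below the true 0.3
```

Even with perfectly random draws, systematic misperception pulls the estimated ratio away from the truth.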
A third class of error is bad extrapolation, where we are sampling randomly from one region of the urn, but then act as if we are sampling from some other region. This would include hasty generalizations and all forms of irrational stereotyping.
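Bad extrapolation can be simulated by giving the urn two regions with different compositions (the 80%/20% split below is invented for illustration): sampling honestly from one region still misleads if the estimate is applied to the other region, or to the urn as a whole.

```python
import random

random.seed(2)

# Hypothetical urn with two regions holding different kinds of evidence.
region_a = ["black"] * 80 + ["white"] * 20  # 80% black
region_b = ["black"] * 20 + ["white"] * 80  # 20% black

# A perfectly random sample, but drawn only from region A.
sample = random.sample(region_a, 30)
frac = sum(ball == "black" for ball in sample) / len(sample)

# Bad extrapolation: treating this as an estimate for region B,
# or for the whole urn (whose true overall fraction is 50%).
print(frac)  # near 0.8, far from 0.5
```

The sampling procedure itself is unbiased; the error lies entirely in the claimed domain of the conclusion.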
Generalizing argument strength
Finally, a weakness of the urn analogy is that it treats all arguments as equally strong. We can fix this by imagining that some balls come clustered together as a single, stronger argument. Additionally, we could imagine argument strength as ball density, and suppose that we actually want to estimate the ratio of mass of black balls to mass of white balls. In this way, denser balls affect our judgment of the ratio more strongly than less dense ones.
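The density version of the estimate can be sketched in a few lines. The particular balls and masses are invented: one strong argument for, against several weak arguments against.

```python
# Each ball carries a mass representing the strength of the argument.
balls = [
    ("black", 5.0),  # one strong argument for the hypothesis
    ("black", 1.0),
    ("white", 1.0),  # three weak arguments against it
    ("white", 1.0),
    ("white", 1.0),
]

black_mass = sum(mass for color, mass in balls if color == "black")
white_mass = sum(mass for color, mass in balls if color == "white")

# Counting balls gives 2:3 against; weighting by mass gives 6:3 = 2:1 for.
print(black_mass / white_mass)  # 2.0
```

The count-based and mass-based ratios point in opposite directions here, which is the whole motivation for the refinement: a single strong argument can outweigh several weak ones.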