What are we trying to do in epistemology?
Here’s a candidate for an answer: The goal of epistemology is to formalize rational reasoning.
This is pretty good. But I don’t think it’s quite enough. I want to distinguish between three possible end goals of epistemology.
- The goal of epistemology is to formalize how an ideal agent with infinite computational power should reason.
- The goal of epistemology is to formalize how an agent with limited computational power should reason.
- The goal of epistemology is to formalize how a rational human being should reason.
We can understand the second task as asking something like “How should I design a general artificial intelligence to most efficiently and accurately model the world?” Since any general AI is going to be implemented in a particular bit of hardware, the answer to this question will depend on details like the memory and processing power of the hardware.
For the first task, we don’t need to worry about these details. Imagine that you’re a software engineer with access to an oracle that instantly computes any function you hand it. You want to build a program that takes in input from its environment and, with the help of this oracle, computes a model of that environment. Hardware constraints are irrelevant; you are just interested in squeezing as much epistemic juice out of your sensory inputs as is logically possible.
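To make this framing concrete, here is a minimal sketch (the names and interface are my own invention, not anything from a real system): the oracle stands in for unlimited computation, and the open question for task (1) is what the model-building function handed to it should be.

```python
from typing import Any, Callable

def oracle(f: Callable[..., Any], *args: Any) -> Any:
    """Pretend this returns f(*args) instantly, however expensive f is."""
    return f(*args)

class IdealAgent:
    def __init__(self, build_model: Callable[[list[Any]], Any]):
        self.build_model = build_model   # the epistemology lives in this function
        self.observations: list[Any] = []

    def observe(self, datum: Any) -> None:
        self.observations.append(datum)

    def current_model(self) -> Any:
        # Hardware is assumed away; only the choice of build_model matters.
        return oracle(self.build_model, self.observations)
```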
The third task is probably the hardest. It is the most constrained of the three; to accomplish it we first of all need a descriptively accurate model of the types of epistemic states that human beings have (e.g. belief and disbelief, comparative confidence, credences). Then we want to place norms on these states that accommodate our cognitive quirks (norms that don’t, for example, count memory loss or the inability to instantly see all the logical consequences of a set of axioms as irrational).
But both of these desiderata are matters of degree. We aren’t interested in fully describing our epistemic states, because then there’s no room left for placing non-trivial norms on them. And we aren’t interested in fully accommodating our cognitive quirks, because some of these quirks are irrational! It seems really hard to come up with precise and non-arbitrary answers to how descriptive we want to be and how many quirks we want to accommodate.
Now, in my experience, this third task is the one that most philosophers are working on. The second seems to be favored by statisticians and machine learning researchers. The first is favored by LessWrong rationalist-types.
For instance, rationalists tend to like Solomonoff induction as a gold standard for rational reasoning. But Solomonoff induction is literally uncomputable, immediately disqualifying it as a solution to tasks (2) and (3). The only sense in which Solomonoff induction is a candidate for the perfect theory of rationality is the sense of task (1). While it’s certainly not the case that Solomonoff induction is the perfect theory of rationality for a human or a general AI, it might be the right algorithm for an ideal agent with infinite computational power.
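To give a flavor of the definition, here is a toy, finite caricature (my own illustration; genuine Solomonoff induction sums over all programs for a universal Turing machine, which is exactly what makes it uncomputable): weight each hypothesis by 2^(-description length), discard the ones inconsistent with the data so far, and predict with the surviving weighted mixture.

```python
from typing import Callable

# Each entry is (hypothesis, made-up description length in bits); a hypothesis
# maps a time index to a predicted bit. The lengths are illustrative only.
HYPOTHESES: list[tuple[Callable[[int], int], int]] = [
    (lambda n: 0,           3),   # "always 0"
    (lambda n: 1,           3),   # "always 1"
    (lambda n: n % 2,       5),   # "0, 1, 0, 1, ..."
    (lambda n: (n + 1) % 2, 5),   # "1, 0, 1, 0, ..."
]

def predict_next(observed: list[int]) -> float:
    """Probability that the next bit is 1, under the 2^(-length) weighting."""
    consistent = [
        (h, length) for h, length in HYPOTHESES
        if all(h(i) == bit for i, bit in enumerate(observed))
    ]
    total = sum(2.0 ** -length for _, length in consistent)
    if total == 0:
        return 0.5  # no surviving hypothesis; fall back to ignorance
    ones = sum(2.0 ** -length for h, length in consistent if h(len(observed)) == 1)
    return ones / total

print(predict_next([1, 0, 1, 0]))  # 1.0: only the "1, 0, 1, 0, ..." hypothesis survives
```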
I think that disambiguating these three different potential goals of epistemology allows us to sidestep confusion resulting from evaluating a solution to one goal according to the standards of another. Let’s see this by purposefully glossing over the differences between the end goals.
We start with pure Bayesianism, which I’ll take to be the claim that rationality is about having credences that align with the probability calculus and updating them by conditionalization. (Let’s ignore the problem of priors for the moment.)
In favor of this theory: it works really well, in principle! Bayesianism has a lot of really nice properties, like convergence to the truth and the fact that conditionalization stays as close to your prior as possible in the relative-entropy (Kullback-Leibler) sense while respecting your evidence (which is sort of like squeezing all the information out of your evidence and nothing more).
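As a concrete illustration of the update rule and the convergence property (a toy example of my own, with an arbitrary choice of hypotheses and prior):

```python
import random

random.seed(0)

# Credences over three hypotheses about a coin's bias, starting from a flat prior.
hypotheses = {0.25: 1/3, 0.5: 1/3, 0.75: 1/3}
true_bias = 0.75

for _ in range(200):
    heads = random.random() < true_bias
    # Conditionalization: P(bias | flip) is proportional to P(flip | bias) * P(bias)
    unnormalized = {b: (b if heads else 1 - b) * p for b, p in hypotheses.items()}
    total = sum(unnormalized.values())
    hypotheses = {b: p / total for b, p in unnormalized.items()}

print(hypotheses)  # nearly all of the credence ends up on bias = 0.75
```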
In opposition: the problem of logical omniscience. A Bayesian expects all of the logical consequences of a set of axioms to be immediately obvious to a rational agent, and therefore expects all credences of the form P(logical consequence of axioms | axioms) to be 100%. But now I ask you: is 19,973 a prime number? Presumably you understand the natural numbers, including how to multiply and divide them and what prime numbers are. But it seems wrong to declare that the inability to instantly conclude from this basic level of knowledge that 19,973 is prime is irrational.
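The answer is fully determined by arithmetic you already accept, but extracting it takes genuine computational work, for instance a brute-force trial division:

```python
def is_prime(n: int) -> bool:
    """Check primality by trial division up to sqrt(n)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

print(is_prime(19_973))  # True, but you weren't irrational for not seeing that instantly
```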
This is an appeal to task (2). We want to say that there’s a difference between rationality and computational power. An agent with infinite computational power can be irrational if it is running poor software. And an agent with finite computational power can be perfectly rational, in that it makes effective use of its limited computational resources.
What this suggests is that we want a theory of rationality that is indexed by the computational capacities of the agent in question. What’s rational for one agent might not be rational for another. Bayesianism by itself isn’t nuanced enough for this; it says that two agents with the same evidence (and the same priors) should always end up with the same final credences, no matter how different their computational resources are. What we want is a framework in which two agents with the same evidence, priors, and computational capacity have the same beliefs.
It might be helpful to turn to computational complexity theory for insights. For instance, maybe we want a principle that says that a polynomially bounded agent is not rationally expected to solve NP-hard problems. But the exact details of how such a theory would turn out are not obvious to me. Nor is it obvious that there even is a single non-arbitrary choice.
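One toy way of picturing what a capacity-indexed norm might look like (just a gesture at the idea, not a proposal): give the reasoner an explicit step budget and let it suspend judgment when the budget runs out, rather than demanding that it settle every logical question.

```python
from typing import Optional

def bounded_is_prime(n: int, budget: int) -> Optional[bool]:
    """Trial division that gives up after `budget` division attempts."""
    if n < 2:
        return False
    d, steps = 2, 0
    while d * d <= n:
        if steps >= budget:
            return None  # out of compute: suspend judgment rather than guess
        if n % d == 0:
            return False
        d += 1
        steps += 1
    return True

print(bounded_is_prime(19_973, budget=10))    # None: a small agent may rationally stay uncertain
print(bounded_is_prime(19_973, budget=1000))  # True: a larger budget settles the question
```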
Regardless, let’s imagine for the moment that we have in hand the perfect theory of rationality for task (2). This theory should reduce to a solution to (1) as a special case when the agent in question has infinite computational power. And if we treat human beings very abstractly as having some well-defined quantity of memory and processing power, then the theory also places norms on human reasoning. But in doing this, we open up a new set of possible objections. Might this theory condemn as irrational some cognitive features of humans that we want to label as arational (neither rational nor irrational)?
For instance, let’s suppose that this theory involves something like updating by conditionalization. Notice that in this process, your credence in the evidence being conditioned on goes to 100%. Perhaps we want to say that the only things we should be fully 100% confident in are our conscious experiences at the present moment. Your beliefs about past conscious experiences could certainly be mistaken (indeed, many regularly are). Even your beliefs about your conscious experiences from a moment ago are suspect!
What this implies is that the set of evidence you are conditioning on at any given moment is just the set of all your current conscious experiences. But this is far too small a set to do anything useful with. What’s worse, it’s constantly changing. The sound of a car engine I’m updating on right now will no longer be around to be updated on a moment from now. But this can’t be right; if at time T we set our credence in the proposition "I heard a car engine at time T" to 100%, then at time T+1 our credence should still be 100%.
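(To spell out why the 100% should stick around: this is forced by the probability axioms. If P(A) = 1, then P(not-A) = 0, so for any later evidence E with P(E) > 0 we get P(A | E) = P(A and E) / P(E) = P(E) / P(E) = 100%. Once something has been conditioned on, no amount of further conditionalization can bring it back down.)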
One possibility here is to deny that 100% credences always stay 100%, and allow for updating backwards in time. Another is to treat not just your current experiences but also all your past experiences as 100% certain. Both of these are pretty unsatisfactory to me. A more plausible approach is to think about the things you’re updating on as not just your present experiences, but the set of presently accessible memories. Of course, this raises the question of what we mean by accessibility, but let’s set that aside for a moment and rest on an intuitive notion that at a given moment there is some set of memories that you could call up at will.
If we allow for updating on this set of presently accessible memories as well as present experiences, then we solve the problem of the evidence set being too small. But we don’t solve the problem of past certainties becoming uncertain. Humans don’t have perfect memory, and we forget things over time. If we don’t want to call this memory loss irrational, then we have to abandon the idea that what counts as evidence at one moment will always count as evidence in the future.
The point I’m making here is that the perfect theory of rationality for task (2) might not be the perfect theory of rationality for task (3). Humans have cognitive quirks that might not be well-captured by treating our brain as a combination of a hard drive and processor. (Another example of this is the fact that our confidence levels are not continuous like real numbers. Trying to accurately model the set of qualitatively distinct confidence levels seems super hard.)
Notice that as we move from (1) to (2) to (3), things get increasingly difficult and messy. This makes sense if we think of the progression as adding more constraints to the problem (and increasingly vague constraints at that).
While I am hopeful that we can find an optimal algorithm for inference with infinite computing power, I am less hopeful that there is a unique best solution to (2), and still less hopeful for (3). This is not merely a matter of difficulty; the problems themselves become increasingly underspecified as we add constraints like “these rational norms should apply to humans.”