A general pattern I’ve noticed in meta-level thinking is a spectrum of systematizing. I’ll explain what this means with a personal example.
When I was first exposed to the idea of ethics as a serious discipline, I found it fairly silly. I mean, clearly our ethical beliefs are not the types of things that we should expect objectivity from. They form from a highly subjective and complex mix of factors involving the peer group we surround ourselves with, the type of parents we had, our religious background, our inbuilt deep moral intuitions, our life experiences, and so on. What’s the point in thinking hard about your ethical beliefs – they just are what they are, right?
What I found funny was the idea that people thought it made sense to spend serious time and effort analyzing their ethical intuitions and constructing general frameworks that capture as many of these intuitions as possible. I would say that I, for whatever reason, started out with a highly non-systematizing attitude towards ethics.
In college, I fell in with a crowd that liked spending long hours debating abstract ethical principles, and eventually grew fond of it myself. It became intuitive to me that of course it is desirable to have a simple, precisely formalized, and vastly generalizable ethical framework to guide your beliefs and actions. This remained the case even though I never lost the intuitive sense of the obviousness of moral non-objectivity.
Frameworks like utilitarianism appealed to me as incredibly simple general “laws of morality” that were able to capture most of my ethical intuitions. When the framework contradicted strong ethical intuitions, I felt okay with overriding those intuitions for the sake of the more valuable synthesis that was the framework as a whole.
These types of cognitive patterns – taking complex disparate phenomena, analyzing patterns in them, looking for precise and simple descriptions of these patterns and trying to generalize them as far as possible – are what I mean by systematizing. Some people are very strong systematizers when it comes to their aesthetic tastes – they will spend hours arguing about what beauty is and analyzing their basic aesthetic reactions in order to form simple general Theories of Everything Beautiful. Others think that this is stupid and a waste of time and cognitive resources.
Philosophers tend to be systematizers about literally everything – I’d say systematization comes close to a general definition of philosophy as an intellectual field. Scientists tend to be systematizers about the field that they work in, where they work obsessively to cleanly and neatly describe vast realms of natural phenomena. In our daily lives, systematizing tendencies come out in arguments about the quality of a certain movie or the tastiness of a meal or the attractiveness of a celebrity. Some people will want to dive into these debates with the aim of forming general principles of what makes a quality movie, or a tasty meal, or an attractive person, while others will dismiss the general principles, arguing instead from their gut-level reactions to the movie. Which is to say, some people will feel a desire to systematize their thoughts/opinions/desires/tastes, and others will not.
Those who do not are perfectly content with a complicated and messy reality. They feel no inner urge pulling them towards de-cluttering their view of the world. From this perspective, it can be perplexing to see people working very hard to systematize their intuitions. Such efforts can seem fairly pointless, and downright absurd when the final product ends up contradicting some of the intuitions from which it was built.
About a lot of things, I am an extreme systematizer, relentlessly searching for concise, elegant, and powerful models to piece everything together. But there are plenty of other areas where I feel totally fine with messiness and complexity and am turned off by efforts to reduce or remove them. Aesthetics is one such area – I appreciate art on a gut level, and am weirded out by the prospect of trying to formulate a simple general theory of aesthetics.
One of the areas where I have the most extreme systematizing tendencies (as might be obvious from my writings on this blog) is formal epistemology. A single neat equation that summarizes the process of rational belief formation is just obviously desirable to me. This desire is not born of practical considerations. It is perhaps at its root a deeply aesthetic feeling about different structures of reasoning. I want to know not just what is practically useful for day-to-day reasoning, but also what is ultimately the best and most fundamental framework with which to describe my epistemological intuitions.
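For concreteness, the kind of “single neat equation” at stake here is presumably Bayes’ rule, which prescribes how credence in a hypothesis $H$ should change upon learning evidence $E$:

$$ P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)} $$

One compact formula, yet it purports to summarize the entire process of rational belief revision – exactly the sort of simple, vastly generalizable structure a systematizer finds irresistible.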
I choose the phrase ‘epistemological intuitions’ carefully and intentionally. We do not have any direct line to objective epistemic truth; we are not provided by Nature with a golden shining book in which the true nature of normative rational reasoning is laid out for us. What we do have, ultimately, is a set of deep intuitions about the way that good reasoning works. These intuitions are messy and complicated.
I say all this to make the point that strong enough systematizing intuitions can make the non-objective look objective, and I think it’s important to try to avoid that mistake. Maybe we think that if we extend our framework of reasoning enough, we can eventually find evolutionary justifications for why our patterns of reasoning should in general align with the truth. But this is simply an appeal to the value of reflective equilibrium – the criterion that multiple alternative perspectives on the same framework end up cohering with and bolstering one another.
If we try to say something like “We can find out what framework works best by just seeing how they do at predicting future events,” then we are relying on the intuition that empiricism is an epistemic virtue. Similarly, if we appeal to Occam’s razor, we are relying on intuitions about simplicity. If we think that better frameworks take little for granted and are cautious about jumping to strong conclusions, then we are relying on intuitions about epistemic humility. Etc.
The best we can do, it seems to me, is to compile different arguments starting from our deepest intuitions and ending at a particular epistemic framework. Bayesianism has arguments like Cox’s theorem and Dutch Book arguments. The empirical case for Bayesianism can be made by convergence and consistency theorems, as well as case studies in which Bayesian methods result in great predictive power.
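To make the Dutch Book idea concrete, here is a minimal sketch (my own toy example, not from the arguments cited above), assuming the simplest possible case of a proposition A and its negation: an agent whose credences in A and not-A sum to more than 1 will regard a certain pair of bets as fair, yet that pair guarantees a loss however A turns out.

```python
def settle_bets(credence_a, credence_not_a, a_is_true, stake=1.0):
    """Agent buys two bets, each priced at credence * stake (a 'fair' price
    by the agent's own lights): one pays `stake` if A is true, the other
    pays `stake` if A is false. Returns the agent's net gain."""
    cost = (credence_a + credence_not_a) * stake
    payout_on_a = stake if a_is_true else 0.0
    payout_on_not_a = stake if not a_is_true else 0.0
    # Exactly one of the two bets pays out, whichever way A goes.
    return (payout_on_a + payout_on_not_a) - cost

# Incoherent credences, P(A) + P(not-A) = 1.2: a sure loss in both outcomes.
assert settle_bets(0.6, 0.6, a_is_true=True) < 0
assert settle_bets(0.6, 0.6, a_is_true=False) < 0

# Coherent credences, P(A) + P(not-A) = 1.0: no guaranteed loss.
assert abs(settle_bets(0.6, 0.4, a_is_true=True)) < 1e-9
assert abs(settle_bets(0.6, 0.4, a_is_true=False)) < 1e-9
```

The asymmetry is the point of the argument: only credences violating the probability axioms expose the agent to a guaranteed loss – though notice that treating “vulnerability to sure loss” as an epistemic defect is itself one of those deep-seated intuitions.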
But I think that it’s important to keep in mind that these are not absolute proofs of the objective superiority of Bayesianism. Ultimately, arguments for any epistemic framework rest on some set of deep-seated epistemic intuitions, and are ineradicably tied to these intuitions.