The Rival-Expert Heuristic

I like to try to surround myself with people who are very intelligent and know a lot about subjects I know very little about. As a result, I sometimes find myself in the position Scott Alexander calls epistemic learned helplessness. The basic idea bears some resemblance to ideas I explored in a previous post about reasoning in the presence of Super Persuaders.

When you’re talking to somebody who is much more knowledgeable than you about a particular subject and who is presenting very compelling arguments, it becomes unclear how strongly you should update on them. In particular, if the person you’re talking to is plausibly presenting a biased sample of the relevant arguments, then you should be very hesitant to update on those arguments as fully as you otherwise would.

One way of dealing with this is simply to avoid people who know more than you and have strong opinions on matters that are disputed among experts. But that’s no fun.

A useful heuristic here is to do your best to imagine what it would be like if there were a rival expert in the room with you and your conversation partner. Creatively, I call this the Rival-Expert Heuristic.

For example, imagine that you’re in conversation with an expert sociologist who is making some very compelling-sounding arguments for why socialist economic systems are overall better than capitalist systems. It might be that you can’t personally see any reason why the arguments they’re making would fail, and you are unable to think of any original arguments for capitalism or against socialism.

In such a situation, it might be genuinely helpful to imagine that Milton Friedman is sitting in the room beside you, holding forth against the scholar. Even if you don’t personally know any counterarguments, you might have some sense that it is likely that such counterarguments exist and that Milton Friedman would know them.

If they say, “Capitalism is a system that exploits workers and causes wealth to concentrate at the top!”, and you don’t know of any good responses, you should consider the chance that Milton Friedman has heard this line of argument before and has a crushingly good response to it. If you can’t think of arguments of your own to present, you should try to take into account the “empty space” in the conversation where these opposing arguments would be if Milton Friedman were in the room.

This can potentially help you judge how strong the arguments you’re receiving actually are. The primary difficulty is obvious: it’s hard to accurately imagine a rival expert for exactly the reason that you don’t personally know what arguments they would make.

At the same time, it is probably much easier to simply consider the question: “How likely is it that a rival expert would have a compelling response to this?” than it is to try to construct such a response yourself. I also think that it can be more reliable in many cases. Imagine that somebody comes up to you with plans for a perpetual motion device, and begins to describe them in much greater detail than you are able to understand. Perhaps this person understands the underlying physics much better than you, and whenever you raise an objection to their design, they are able to easily respond with apparently logical arguments. This is a case where you can be extremely confident that there exist good reasons why they are wrong, even though you have no idea what those reasons might be.

More realistically, suppose that somebody presents you with an argument for why X is true, and you vividly remember hearing a fantastic argument just last week for the falsity of X from a very reputable expert on X-like matters. The trouble is, you can’t remember any of the details of that argument, just that it was a much stronger argument from a more reputable source than the one you are receiving now. This is a situation we are often in, but it is not typically addressed in standard philosophical discussions of epistemology.

Are we justified in believing that what they’re saying is probably wrong, even though we can’t remember the details of the argument? Of course! Our confidence in the falsity of X is moved by an argument’s strength; its content matters only indirectly, as the thing that established that strength. If your memory of the argument’s strength is retained and reliable, then there is no reason to backtrack on the earlier credence bump.
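To put a toy Bayesian sketch on this (the numbers here are invented purely for illustration): suppose your prior odds on X were even, 1:1. If last week’s argument against X struck you as worth a Bayes factor of about 5:1, it would have moved you to 1:5 odds in favor of X. That 5:1 factor depends only on how strong the argument was, so if your memory of its strength is reliable, it still applies after the details have faded. If today’s argument for X seems worth about 2:1, your odds become 1 × 1/5 × 2 = 2:5, and you should still lean toward X being false.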

But abstract confidence that what you’re hearing is wrong is often not very salient, especially if the person saying it is charismatic and persuasive. You’ll eventually be tempted to relent in your dogged agnosticism after repeatedly failing to see any flaws in their arguments.

This, I think, is the main strength of the rival-expert heuristic. Dogged adherence to uncertainty in the face of compelling evidence feels much more okay if you can vividly imagine a more balanced social dynamic, one in which compelling evidence is being presented on both sides of the issue.

A more general form of this heuristic is to avoid forming strong opinions or taking sides on issues that are controversial among those who know the most about them, unless you yourself are one of the top experts. I think a world in which this practice were more common would be hugely improved. As it is, people generally hold far too many beliefs, far too strongly, on matters that are disputed among experts. Part of the problem is that beliefs are sticky: it’s easier to acquire them than to abandon them once they have become part of your identity.

If you think that raising the minimum wage is obviously a fantastic idea, but also know that there is a great deal of complicated debate among professional economists on the matter, then you are implicitly assuming that you know better than all the economists who disagree with you.

More viscerally, you must come to terms with the fact that if you were faced with the boatloads of experts who disagree with you, your arguments would probably fall flat, and you would likely hear a number of compelling arguments for why you are wrong. If this is true, then you are essentially just hanging on to your beliefs because you have, by chance, happened to avoid these experts!

Ultimately, the Rival-Expert Heuristic is about updating on evidence that you don’t have, but which you have good reason to believe exists. Perhaps this feels weird, but to sum up, there are three basic motivations for doing so.

First, we are easily convinced by compelling-sounding arguments from biased sources.

Second, abstractly knowing that there exist experts who disagree with compelling-sounding arguments is less likely to properly influence your epistemic habits than actually imagining those experts engaging with the arguments.

And third, beliefs are “sticky” and easier to take on than to back out of.
