Nature’s Urn and Ignoring the Super Persuader

This post is about one of my favorite little-known thought experiments.

Here’s the setup:

You are in conversation with a Super Persuader – an artificial intelligence that has access to an enormously larger pool of information than you, and that is excellent at synthesizing information to form powerful arguments. The Super Persuader is so super at persuading, in fact, that given any proposition, it will be able to construct the most powerful argument possible for that proposition, consisting of the strongest evidence it has access to.

The Super Persuader is going to try to persuade you either that a certain proposition A is true or that it is false. In doing so, you know that it cannot lie, but it can pick and choose the information that it presents to you, giving an incomplete picture.

Finally, you know that the Super Persuader is going to decide which side of the issue to argue based on a random coin toss: a 50% chance that it will argue that A is true, and a 50% chance that it will argue that A is false.

Once the coin is tossed and the Persuader begins to present the evidence, how should you rationally respond? Should you be swayed by the arguments, ignore them, or something else?

Here’s a basic presentation of one response to this thought experiment:

Of course you should be swayed by their arguments! If not, then you end up receiving boatloads of crazily persuasive argumentation and pretending like you’ve heard none of it. This is the very definition of irrationality – closing your eyes to the evidence you have sitting right in front of you! There’s no reason to disregard all of the useful information that you’re getting, just because it’s coming from a source that is trying to persuade you. Regardless of the motives of the Super Persuader, it can only persuade you by giving you honest and genuinely convincing evidence. And a rational agent has no choice but to update their credences on this evidence.

I think that this is a bad argument. Here’s an analogy to help explain why.

Imagine the set of all possible pieces of evidence you could receive for a given proposition as a massive urn filled with marbles. Each marble is a single argument that could be made for the proposition. If the argument is in support of the proposition, then the marble representing it will be black. And if the argument is against the proposition, then the marble representing it will be white.

Now, the question of whether the proposition is more likely to be true or false is roughly the same as the question of whether there are more black or white marbles in the urn. It is exactly the same question if all of the arguments in question are equally strong, and we have no reason to start out favoring one side over the other.

But now we can think about the actions of the Super Persuader as follows: the Super Persuader has direct access to the urn, and can select any marble it wants. If it wants to persuade you that the proposition is true, then it will just fish through the urn and present you with as many black marbles as it desires, ignoring all the white marbles.

Clearly this process gives you no information as to the true proportion of the marbles that are white versus the proportion that are black. The data you are receiving is contaminated by a ridiculously powerful selection bias. The evidence you see is no longer linked in any way to the truth of the proposition, because regardless of whether or not it is true, you still expect to receive large amounts of evidence for it.

In the end, all of the pieces of evidence you receive are useless, in the same way that a stacked deck is not a reliable source of information about the average card deck.
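
To make this concrete, here’s a tiny Bayesian sketch (the particular urn compositions and the uniform prior are just illustrative assumptions, not part of the thought experiment). Because the Persuader can find a black marble in any urn that contains one, the likelihood of being handed a black marble is essentially the same under every composition, so the posterior comes out identical to the prior:

```python
# A minimal sketch with assumed numbers: a uniform prior over a few possible
# urn compositions (fraction of black marbles), and the likelihood of the
# Persuader handing you a black marble under each composition.
compositions = [0.1, 0.3, 0.5, 0.7, 0.9]
prior = {f: 1 / len(compositions) for f in compositions}

# The Persuader selects black marbles on purpose, so the likelihood is ~1
# no matter what the urn actually looks like.
likelihood = {f: 1.0 for f in compositions}

unnorm = {f: prior[f] * likelihood[f] for f in compositions}
z = sum(unnorm.values())
posterior = {f: p / z for f, p in unnorm.items()}
print(posterior)   # identical to the prior: every composition still at 0.2
```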

This has some really weird consequences. For one thing, after your conversation you still have all of that information hanging around in your head (as long as you have a good enough memory). So if anybody asks you what you think about the issue, you will be able to spout off incredibly powerful arguments for exactly one side of the issue. But you’ll also have to concede that you don’t actually strongly believe the conclusion of these arguments. And if you’re asked to present any evidence for not accepting the conclusion, you’ll likely draw a blank, or only be able to produce very unsatisfactory answers. You will certainly not come off as a very rational person!

In addition, the person who hears your arguments has also received contaminated evidence. After all, the evidence was the product of a selection bias when you received it, so it is still the product of a selection bias when you pass it on to others.

Which is to say that one consequence of the conclusion we’ve apparently come to is that if you were to broadcast these powerful arguments to the world, and if nobody could think of similarly powerful arguments on the other side, then everybody would still be rationally required to hang onto their old beliefs, disregarding the evidence that they all know.

This seems quite unusual. You might say that the state of your beliefs before talking to the Super Persuader was just the result of the easiest arguments you could have thought of – the low-hanging fruit among the evidence for the proposition.

Now you have access to much more advanced and powerful arguments, but you can’t update on them. So it seems like you’re weirdly privileging the low-hanging easy arguments over the new arguments you’ve received. But again, the new powerful arguments you’ve received simply can’t be trusted – while they appear to discriminate between the truth and falsity of the proposition, they actually fail to.

Here’s another weird consequence. Say that somebody comes up to you and tells you an argument that they’ve just thought of for the conclusion. If this argument is one that you already heard from the Super Persuader, then you are now justified in updating on it! Why? Because even though you had the evidence before, it was from a contaminated source. And now you’ve gotten the evidence from an uncontaminated source, so you can take it into account!

This behavior is absurd on its surface – you already had all of the information that they told you, and you learned nothing new. But now you are suddenly shifting your beliefs in response? How can it be rational for your beliefs to shift when your information has not?

One way out of this particular dilemma is to say that you have gained information. Not the information contained in the argument, but information about the relative placement of this evidence in the space of all possible pieces of evidence! You’ve learned that this particular argument is in the space of pieces of evidence that are being sampled by your fellow humans. Said another way, you’ve learned that this argument was relatively accessible, given that one of your fellow humans thought of it.

By analogy, when the Super Persuader presents you with a certain black marble, you don’t learn anything about the fraction of marbles in the urn that are black. But when a friend of yours who is reaching in and blindly selecting marbles pulls out that exact same marble, you do learn something about the fraction of marbles that are black. You learn that this marble was relatively accessible – the chance that this marble was selected can’t have been too enormously small.
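
Picking up the earlier sketch (same assumed compositions and uniform prior), a blind draw corresponds to a different likelihood function, and this one does shift the posterior toward black-heavy urns:

```python
# Same assumed setup as before, but now a friend draws a marble blindly.
compositions = [0.1, 0.3, 0.5, 0.7, 0.9]
prior = {f: 1 / len(compositions) for f in compositions}

# The chance that a blind draw comes up black equals the fraction of black
# marbles, so the likelihood now depends on the composition.
likelihood = {f: f for f in compositions}

unnorm = {f: prior[f] * likelihood[f] for f in compositions}
z = sum(unnorm.values())
posterior = {f: round(p / z, 2) for f, p in unnorm.items()}
print(posterior)   # {0.1: 0.04, 0.3: 0.12, 0.5: 0.2, 0.7: 0.28, 0.9: 0.36}
```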

I’ll present one further weird consequence:

Say that one of the pieces of evidence given by the Super Persuader was the result of a wide-ranging study that it conducted itself, one that no scientist had yet thought to do. Maybe when you broadcast the arguments of the Super Persuader to the world, some curious scientists decide to try to replicate this experiment.

But now, regardless of what they find, this experiment can’t rationally count as evidence! It is as contaminated as the Super Persuader’s initial argument – they only did the experiment because of the random coin flip that determined which arguments would be made by the Super Persuader.

This seems like it can get arbitrarily weird. What if the scientists find a result different from the Super Persuader’s? Well then, assuming that they have really good reason to believe that the Super Persuader can’t have been wrong, they should expect that their results were confounded in some way, and redo the experiment.

But even this is not allowed! Why? Because if they decide to redo the experiment on the basis of the Super Persuader’s results, then they are taking the Super Persuader’s argument into account in their beliefs about the world. That is, they are admitting that they expect the experiment to come out a certain way, because of what the Super Persuader said about how the experiment should come out. Updating on this evidence seems to have the exact same problem as updating on the truth of the starting proposition!

***

But this isn’t actually the whole story. Above I wrote: “The evidence you see is no longer linked in any way to the truth of the proposition, because regardless of whether or not it is true, you still expect to receive large amounts of evidence for it.”

But there are special cases in which this is not true. There is something that you learn when the Super Persuader draws out a black marble – you learn that there is at least one black marble in the urn. If the Super Persuader presents you with 100 black marbles, then you now know that there are at least 100 black marbles in the urn.

If you initially did not think that there could be that many arguments for the proposition, then you have learned something new and surprising, and your beliefs should be updated accordingly. In addition, if you have good reason to believe that there are far fewer than 100 white marbles in the urn, then you have gained very powerful evidence.
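
Here’s a little sketch of that 100-marble case (the urn size and the uniform prior over compositions are made-up for illustration). Compositions that couldn’t have produced 100 distinct black marbles are ruled out entirely, and the surviving compositions keep their relative prior weights:

```python
# A sketch with assumed numbers: an urn of 150 marbles, a uniform prior over
# how many are black, and the Persuader showing you 100 distinct black marbles.
TOTAL = 150
SHOWN = 100
prior = {b: 1 / (TOTAL + 1) for b in range(TOTAL + 1)}

# Any composition with fewer than 100 black marbles could not have produced
# the observation, so it gets likelihood 0; every other composition gets ~1.
posterior = {b: (p if b >= SHOWN else 0.0) for b, p in prior.items()}
z = sum(posterior.values())
posterior = {b: p / z for b, p in posterior.items()}

# The probability that black marbles outnumber white jumps from ~0.5 to 1.
print(sum(p for b, p in posterior.items() if b > TOTAL - b))
```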

In other words, data that is completely contaminated by selection biases is not completely useless! It is still informative insofar as you initially thought that the data could not exist.

Let me give a basic example of this. Suppose the proposition in question is “2 = 4”. It seems reasonable to assume that there are not really many arguments in favor of this proposition. So if the Super Persuader presents you with a few convincing arguments for why it is false that you were not previously aware of, and you were initially on the fence about the proposition’s truth, then you are rationally justified in being persuaded by its evidence.

On the other hand, if the Super Persuader is able to present you with a handful of convincing arguments for “2 = 4” that you couldn’t have initially conceived of, then this should be a major update to your beliefs.

This doesn’t only apply in extreme cases like logical truths. Imagine that the proposition is about whether the prevalence of a certain disease in a population is greater than 10%, and you know that the Super Persuader is able to simply count up the total number of people in the population with the disease and the total number without it. If it tells you the calculated prevalence of the disease, then you should strongly update on this information. Why? Because it is wildly implausible that there could be any other evidence of comparable strength. The evidence in this case is literally just the answer to the question “Is the prevalence of this disease greater than 10%?”

This leads to another way that you can update your beliefs indirectly by observing the behavior of the Super Persuader, rather than updating directly on the content of the arguments it presents. Suppose that you know that if a given proposition is true, there will be a particular piece of extremely strong evidence for its truth. In addition, suppose that you know that the Super Persuader would have access to this evidence if it existed. Then you can update on the observation of whether or not the Super Persuader presents this piece of evidence! This form of evidence by observation can be very powerful.

To take an example, suppose the proposition is “Barack Obama is currently sitting in a beach chair soaking up the sun in Maui.” And in addition, suppose that you know that the Super Persuader has access to advanced cameras that are always tracking the former President’s movements, and could present the footage to you with proof that it is live.

Well, now whether or not the Super Persuader presents you with the footage is very strong evidence as to the truth of the proposition. This is true regardless of whether it is trying to convince you of the truth or the falsity of the proposition. If it wants to convince you that Barack Obama is on a beach chair in Maui, and he is in fact on a beach chair in Maui, then you should be extremely confident that it will show you the footage. Which means that if it doesn’t show you the footage, then Barack Obama is almost certainly not on a beach chair in Maui.

And if the Super Persuader wants to convince you that Obama is not on a beach chair in Maui, but it doesn’t show you any footage, the best explanation is that Obama is on a beach chair in Maui.

This is one of the cases in which the common aphorism “Absence of evidence is not evidence of absence” proves to be incorrect. If the Super Persuader doesn’t show you the footage, then there is a very conspicuous lack of evidence. This conspicuous lack can be extremely good evidence for the absence of the phenomenon – that is, the absence of an Obama on a beach chair in Maui.
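
Here’s a rough Bayes calculation of that conspicuous-absence effect (the prior and the 0.99 are made-up numbers, chosen just to illustrate the shape of the update). Suppose the Persuader is arguing that Obama is in Maui:

```python
# A rough sketch with assumed numbers: the Persuader is arguing that Obama IS
# in Maui. If that were true, we assume it would show the live footage with
# probability 0.99; since it cannot lie, no such footage exists if it's false.
prior_in_maui = 0.5
p_footage_if_true = 0.99
p_footage_if_false = 0.0

p_no_footage = (prior_in_maui * (1 - p_footage_if_true)
                + (1 - prior_in_maui) * (1 - p_footage_if_false))
posterior = prior_in_maui * (1 - p_footage_if_true) / p_no_footage
print(posterior)   # ~0.0099: seeing no footage makes "in Maui" very unlikely
```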

And of course, if it does show you the footage, then you have very strong evidence that you should update on. This relates to the earlier point about having good reason to suspect that there could be no comparably good evidence on the other side.

So we have two ways in which you can get powerful indirect evidence from the behavior of the Super Persuader. First, you can get evidence based on the number and quality of the arguments presented, insofar as you initially held beliefs about the possible number and quality of arguments available. And second, you can get evidence from a conspicuous absence of evidence, based on the expectation that if the evidence had favored the Persuader’s case, it would have presented it.

Interestingly, both of these forms of evidence get stronger and stronger as the Super Persuader gets more knowledgeable and better at forming arguments. A perfectly omniscient oracle that is able to just credibly tell you whether or not any proposition is true is completely powerless to persuade you of any falsehood.

Why? Because it will never be able to simply state “A is true” unless A is actually true. So the observation of whether or not the agent states “A is true” is by itself evidence just as strong as the statement itself.

This can of course be avoided if the agent is a little clever – for instance, if the Super Persuader can pre-commit to presenting no evidence whenever, counterfactually, it could not have presented comparably powerful evidence for the opposite proposition. This would allow it to handicap itself in some situations, withholding powerful evidence unless there was equally powerful evidence on the other side, in order to get the benefits of not being “see-through” like the Super Persuader we’ve been imagining.

This would open up a whole new set of questions about how we could update on the statements of this new agent – we would be able to indirectly assess the amount of evidence on both sides of an argument just by hearing the strongest arguments on one side. But that starts to move in the direction of more sophisticated game theory and I’ll steer clear for now.

***

I think that this thought experiment is very revealing about the nature of rationality. It says that we should try as best as we can to look at evidence as if we are sampling randomly from Nature’s Urn. Out of all the pieces of evidence that we could receive, we want to close our eyes and fish them out randomly, without rigging the system in favor of the conclusion we prefer. Ideally, we would find a way to sample Nature’s Urn so that the chance that we end up finding evidence for a proposition is directly proportional to the amount of evidence there is for that proposition.

Unfortunately, there are plenty of known bugs in human reasoning that make this extremely difficult. To name just one example, confirmation bias is the tendency to notice and remember evidence that supports one’s existing beliefs while ignoring or forgetting evidence that contradicts them.

This brings up an interesting question – to what degree should we treat our own beliefs as actually reliable? If the way we have formed our beliefs is mostly akin to fishing for evidence that supports our pre-existing intuitions, then we should drastically decrease our level of confidence in our overall worldview.

In addition, we can wonder to what degree we are in the presence of agents analogous to Super Persuaders in real life. If I am talking to somebody who has an enormous amount of knowledge in a field I am very unfamiliar with, and who is presenting mostly one-sided arguments, then I should not allow myself to be fully persuaded by them.

This has interesting applications for how somebody with all of this information should best present it to others in order to minimize selection effects or apparent selection effects (for example, perhaps a comprehensive historical approach, in which the major developments relevant to a field are listed in order, regardless of how they influence the likelihood of any given proposition).

This has interesting applications for the concept of the “devil’s advocate” as a teaching style. If you know that a given professor is going to present arguments in a way that departs significantly from how those arguments might have been drawn from Nature’s Urn, then you should rationally update on the content of the arguments less than you would if they had been drawn at random.

This makes some intuitive sense – if somebody has come into lecture one day determined to give the best arguments to convince their students of one side of an issue, then you should expect that there are probably good counterarguments and objections that are not being presented. At the same time, these arguments don’t originate in a contaminated source the way the Super Persuader’s arguments do. They were thought of by fellow humans who were doing something resembling sampling Nature’s Urn. This counterbalances the selection effect a little bit – the professor presenting the biased arguments was choosing from a pool of arguments that was itself produced by processes that can at least roughly be approximated as pseudorandom.

But again, how accurate is it to call this process pseudorandom? I’ve already pointed skeptically to the existence of cognitive biases that make each of us kind of like Super Persuaders to ourselves, though perhaps less “super” than the one in this thought experiment. How much can we be trusted at all to accumulate evidence?

One way to resolve this worry is to realize that even if each individual is gathering evidence in a biased manner, the group as a whole might still be fairly unbiased. We can think about this by comparing it to a courtroom in which a prosecuting attorney and a defense attorney each present their best case to the jury. If the people who are already convinced of the guilt of the defendant only enter the courtroom to listen to the prosecutor, and those who are already convinced of the innocence of the defendant only listen to the defense attorney, then neither group on its own is a reliable source of evidence.

But if you afterwards sit down with both groups and ask them to summarize all of the arguments that they heard, then you can get a fairer assessment – a closer approach to truly randomly sampling Nature’s Urn.

This now brings us to the question of what determines the beliefs that people end up with. If it’s mostly factors like the opinions of their parents and the ordinary societal beliefs that they are surrounded by, then it looks like even aggregating information from many individuals may be insufficient to defeat the contaminating effects of selection biases.

This doesn’t seem too far-fetched – twin studies suggest that our political beliefs have a substantial heritable component, and kids tend to share the beliefs of their parents to a fairly dramatic extent.

Of course, all of this depends on the domain of knowledge you’re asking about. It’s much easier to believe that sociologists are not forming beliefs in a manner that is closely linked to the truths of those beliefs than it is to believe the same thing for physicists. Although then again, perhaps that’s just my self-serving biases blinding me to the subtle selection effects undermining my worldview!
