Today we’ll take a little break from the more intense abstract math stuff I’ve been doing, and do a quick dive into a fun probabilistic puzzle I found on the internet.

Background for the puzzle: In Ancient Israel, there was a court of 23 wise men, known as the Sanhedrin, that tried important cases. If you were being tried for a crime by the Sanhedrin and a majority of the judges found you guilty, you were convicted. But there was an interesting twist! According to the Talmud (Tractate Sanhedrin, Folio 17a), if the Sanhedrin unanimously found you guilty, you were to be acquitted.

> If the Sanhedrin unanimously find [the accused] guilty, he is acquitted. Why? — Because we have learned by tradition that sentence must be postponed till the morrow in hope of finding new points in favour of the defence. But this cannot be anticipated in this case.

Putting aside the dubious logic of this rule, it gives rise to an interesting probability puzzle with a counterintuitive answer. Imagine that an accused murderer has been brought before the Sanhedrin, and that the evidence is strong enough that no judge has any doubt about his guilt. Each judge obviously wants the murderer to be convicted, and would ordinarily vote to convict. But under this Talmudic rule, they have to worry about the prospect of all of them voting guilty and thereby letting him off scot-free!

So: If each judge independently votes to convict with probability *p*, and to acquit with probability 1 – *p*, which value of *p* gives them the highest probability of ultimately convicting the guilty man?

Furthermore, imagine that the number of judges is not 23, but some arbitrarily high number. As the number of judges goes to infinity, what does *p* converge to?

I want you to think about this for a minute and test your intuitions before moving on.

(…)

(…)

(…)

So, it turns out that the optimal *p* for 23 judges is actually ≈ 77.3%. And as the number of judges goes to infinity? The optimal value of *p* converges to…

80%!
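For concreteness, here is a quick brute-force check of the optimum (my own sketch, not from the original post), assuming each judge votes guilty independently with probability *p* and that conviction means a majority, but not all, vote guilty:

```python
from math import comb

def conviction_prob(p, n=23):
    """Probability that at least a majority, but not all n, judges vote guilty."""
    m = n // 2 + 1  # smallest majority: 12 of 23
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(m, n))

# Brute-force search over a fine grid of p values.
grid = [i / 10000 for i in range(1, 10000)]
best23 = max(grid, key=conviction_prob)
best201 = max(grid, key=lambda p: conviction_prob(p, n=201))
print(best23, best201)  # the optimum creeps upward toward 0.8 as n grows
```

With 201 judges the optimum already lands near 0.795, consistent with the 80% limit.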

This was a big shock to me. I think the natural first thought is that when you have thousands and thousands of judges, you only need a minuscule chance for any one judge to vote ‘acquit’ in order to ensure a majority and prevent him from getting off free. So I initially guessed that *p* would be something like 99%, and would converge to 100% in the limit of infinite judges.

But this is wrong! And most of the small sample of mathematically gifted friends I posed this question to guessed the same as I did.

There’s clearly a balance being struck between two risks: falling short of a majority, and reaching a unanimous vote to convict. For small *p*, the first risk is ~1 and the second ~0; for *p* near 1, the first is ~0 and the second ~1. It seems that we naturally underemphasize the danger of falling short of a majority, and overemphasize the danger of unanimity.
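To make the balance explicit (in notation of my own choosing): write *K* for the number of guilty votes, so that *K* is binomially distributed, and the judges are choosing *p* to maximize

$$\Pr(\text{conviction}) = \Pr(12 \le K \le 22) = \sum_{k=12}^{22} \binom{23}{k} p^k (1-p)^{23-k} = \Pr(K \ge 12) - p^{23},$$

where the subtracted $p^{23}$ is exactly the unanimity risk.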

Here are some plots of the various relevant values, for different numbers of judges:
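(The original plots aren’t reproduced here; the following sketch regenerates the relevant curves — probability of a majority, of unanimity, and of conviction — and assumes matplotlib is available if you want the actual figure.)

```python
from math import comb

def curves(n, ps):
    """For each p, compute (P(majority guilty), P(unanimous guilty), P(conviction))."""
    m = n // 2 + 1  # smallest majority
    out = []
    for p in ps:
        majority = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(m, n + 1))
        unanimous = p**n
        out.append((majority, unanimous, majority - unanimous))
    return out

ps = [i / 100 for i in range(101)]
data = {n: curves(n, ps) for n in (23, 101, 501)}

# Plotting is optional; only the figure itself needs matplotlib.
try:
    import matplotlib.pyplot as plt
    for n, rows in data.items():
        plt.plot(ps, [r[2] for r in rows], label=f"{n} judges")
    plt.xlabel("p")
    plt.ylabel("Pr(conviction)")
    plt.legend()
    plt.savefig("conviction_curves.png")
except ImportError:
    pass
```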

One thing to notice is that as the number of judges gets larger, the graph’s peak becomes more and more of a plateau. And in the limit of infinite judges, you can show that the graph is actually just a simple step function: Pr(conviction) = 0 if *p* < .5, and 1 if .5 < *p* < 1. This means that while yes, *technically*, 80% is the optimal value, you can do pretty much equally well by choosing *any* value of *p* strictly between 50% and 100%.
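A quick sanity check of the step-function claim (again my own sketch, under the same independence assumption): with 1001 judges, even *p* = 0.55 already convicts almost surely, while *p* = 0.45 almost never does.

```python
from math import comb

def conviction_prob(p, n):
    """Probability that at least a majority, but not all n, judges vote guilty."""
    m = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(m, n))

# The emerging step function: just above 1/2 is nearly perfect,
# just below 1/2 almost surely fails.
print(conviction_prob(0.55, 1001))  # close to 1
print(conviction_prob(0.45, 1001))  # close to 0
```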

My challenge to you is to come up with some justification for the value 80%. Good luck!

I thought that the best percentage would be 75% or at least a range of values that averaged to 75%. My reasoning was that as you drift away from 75% in either direction you get closer to 50% or 100% and those are equally bad results. In retrospect there is some reason to expect the percentage to jump up from there because some percentages will make a majority likely and also leave the complete majority very very unlikely. But I think similar reasoning can get us to expect the percentage to jump down because some percentages will make the negation of a complete majority likely and also the nonexistence of a majority very very unlikely. So I expected those types of considerations to be evenly matched and leave the optimal percentage at 75% but instead those considerations appear to be ever-so-slightly unmatched in a way that suggests it might be too subtle to predict (like the result of 77.3). That said, I still find it very difficult to motivate 80% as opposed to 75%.

maybe the sages’ logic isn’t as dubious as one would think: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4841483/