Extending Cauchy’s Theorem

Here I present an extension of Cauchy’s theorem that I haven’t seen anywhere else.

In our earlier proof of Cauchy’s theorem, we saw that r, the number of p-tuples (g, g, …, g) of elements of G whose product is e, had to be a multiple of p. Since (e, e, …, e) was one such p-tuple, we knew that r was greater than zero, and therefore concluded that there must be at least one other element g in G such that g^p = e. And that was how we got Cauchy’s theorem. But in our final step, we weakened our state of knowledge quite a bit, from “r = pn (for some positive n)” to “r > 1”. We can get a slightly stronger result than Cauchy’s theorem by just sticking to our original statement and not weakening it.

So, we know that r = pn for some n. Does this mean that there are pn elements of order p? Not quite. One of these p-tuples is just (e,e,…,e), and e is order 1, not p. So there are really (pn – 1) elements of order p.

Furthermore, each of these elements generates a cyclic subgroup of size p, and every non-identity element of such a subgroup also has order p. Since two distinct subgroups of prime order can only intersect in the identity, the elements of order p are partitioned among these subgroups, so their number must be k(p – 1), where k is the number of subgroups of order p.

Putting these together, we see that (pn – 1) = k(p – 1). Crucially, this equation can’t be satisfied for all n and p! In particular, for k to be an integer, the value of n must be such that pn – 1 is divisible by p – 1. Let’s look at some examples.

p = 2

2n – 1 must be divisible by 1. This is true for all n.
So k, the number of subgroups of order 2, is 2n – 1 for any positive n.
k = 1, 3, 5, 7, …

p = 3

3n – 1 must be divisible by 2. This is only true for odd n.
So k, the number of subgroups of order 3, is (3n – 1)/2 for any odd n.
k = 1, 4, 7, 10, …

p = 5

5n – 1 must be divisible by 4. This is only true for n = 1, 5, 9, 13, …
So k = 1, 6, 11, 16, …

See the pattern? In general, the number of subgroups of order p can only be of the form 1 + mp for some m ≥ 0. And since each such subgroup contributes p – 1 elements of order p, the number of elements of order p is (1 + mp)(p – 1) = mp² – (m – 1)p – 1.

Needless to say, this is a much stronger result than what Cauchy’s theorem tells us!
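
As a quick sanity check (a sketch of my own, not part of the argument above), we can brute-force a small non-cyclic group. Here we count the elements of order p in the symmetric group S4 and confirm that the number k of subgroups of order p is indeed 1 mod p:

```python
from itertools import permutations

# Represent S4 as tuples: permutation p sends i to p[i].
# compose(p, q) applies q first, then p.
def compose(p, q):
    return tuple(p[q[i]] for i in range(len(q)))

def order(p):
    identity = tuple(range(len(p)))
    result, n = p, 1
    while result != identity:
        result = compose(result, p)
        n += 1
    return n

S4 = list(permutations(range(4)))
for p in (2, 3):
    count = sum(1 for g in S4 if order(g) == p)
    k = count // (p - 1)  # each subgroup of order p contributes p - 1 such elements
    print(f"p={p}: {count} elements of order p, k={k}, k mod p = {k % p}")
```

For p = 2 this finds 9 elements of order 2 (so k = 9, which is odd), and for p = 3 it finds 8 elements of order 3 (so k = 4, which is 1 mod 3), exactly as predicted.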

An Application

Say we have a group G such that |G| = 15 = 3⋅5. By Cauchy, we know that there’s at least one subgroup of size 3 and one of size 5. But now we can do better than that! In particular we know that:

Number of subgroups of size 3 = 1, 4, 7, 10, …
Number of subgroups of size 5 = 1, 6, 11, 16, …

For each subgroup of size 3, we have 2 unique elements of order 3. And for each subgroup of size 5, we have 4 unique elements of order 5.

Number of elements of order 3 = 2, 8, 14, …
Number of elements of order 5 = 4, 24, 44, 64, …

But keep in mind, we only have 15 elements to work with! This immediately rules out a bunch of the possibilities:

Number of subgroups of size 3 = 1, 4, or 7
Number of subgroups of size 5 = 1

So we know that there is exactly one subgroup of size 5, which means that 4 of our 15 elements have order 5. This leaves only 10 non-identity elements, ruling out 7 as a possible number of subgroups of size 3. So finally, we get:

Number of subgroups of size 3 = 1 or 4
Number of subgroups of size 5 = 1

This is as far as we can go using only our extended Cauchy theorem. However, we can actually go a little further using Sylow’s Third Theorem. This allows us to rule out there being four subgroups of size 3 (since 4 doesn’t divide 5). So the “subgroup profile” of G is totally clear: G has one subgroup of size 3 and one of size 5. You can use this fact to show that there is exactly one group of size 15, and it is just ℤ₁₅.

ℤ₁₅ = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14}
Subgroup of size 3 = <5> = {0, 5, 10}
Subgroup of size 5 = <3> = {0, 3, 6, 9, 12}
All other elements generate the whole group.
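
This subgroup profile is easy to verify by brute force. A quick Python sketch over ℤ₁₅ (written additively, so the subgroup generated by g is just the set of multiples of g mod 15):

```python
# The cyclic subgroup generated by g in Z_15 under addition mod 15.
def subgroup(g, n=15):
    return sorted({(g * k) % n for k in range(n)})

print([g for g in range(15) if len(subgroup(g)) == 3])   # elements of order 3
print([g for g in range(15) if len(subgroup(g)) == 5])   # elements of order 5
print([g for g in range(15) if len(subgroup(g)) == 15])  # generators of the whole group
```

This prints [5, 10], then [3, 6, 9, 12], then the eight remaining non-identity elements, matching the lists above.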

All About Adultery, BDSM, and Divorce

I recently came across a whole bunch of crazy historical trivia, involving the laws around adultery, BDSM, and divorce. Here are some of the quotes that made me gasp (mostly from Wikipedia):

On Adultery

As of 2019, adultery remains a criminal offense in 19 states, but prosecutions are rare. Although adultery laws are mostly found in the conservative states (especially Southern states), there are some notable exceptions such as New York. Idaho, Oklahoma, Michigan, and Wisconsin consider adultery a felony, while in the other states it is a misdemeanor.

Penalties vary from a $10 fine (Maryland) to four years in prison (Michigan). In South Carolina, the fine for adultery is up to $500 and/or imprisonment for no more than one year (South Carolina code 16-15-60), and South Carolina divorce laws deny alimony to the adulterous spouse.

In Florida adultery (“Living in open adultery”, Art 798.01) is illegal, while cohabitation of unmarried couples was decriminalized in 2016.

Under South Carolina law adultery involves either “the living together and carnal intercourse with each other” or, if those involved do not live together “habitual carnal intercourse with each other” which is more difficult to prove.

In Alabama “A person commits adultery when he engages in sexual intercourse with another person who is not his spouse and lives in cohabitation with that other person when he or that other person is married.”

In some Native American cultures, severe penalties could be imposed on an adulterous wife by her husband. In many instances she was made to endure a bodily mutilation which would, in the mind of the aggrieved husband, prevent her from ever being a temptation to other men again. Among the Aztecs, wives caught in adultery were occasionally impaled, although the more usual punishment was to be stoned to death.

The Code of Hammurabi, a well-preserved Babylonian law code of ancient Mesopotamia, dating back to about 1772 BC, provided drowning as punishment for adultery.

Amputation of the nose – rhinotomy – was a punishment for adultery among many civilizations, including ancient India, ancient Egypt, among Greeks and Romans, and in Byzantium and among the Arabs.

In England and its successor states, it has been high treason to engage in adultery with the King’s wife, his eldest son’s wife and his eldest unmarried daughter. The jurist Sir William Blackstone writes that “the plain intention of this law is to guard the Blood Royal from any suspicion of bastardy, whereby the succession to the Crown might be rendered dubious.”

Adultery was a serious issue when it came to succession to the crown. Philip IV of France had all three of his daughters-in-law imprisoned, two (Margaret of Burgundy and Blanche of Burgundy) on the grounds of adultery and the third (Joan of Burgundy) for being aware of their adulterous behaviour. The two brothers accused of being lovers of the king’s daughters-in-law were executed immediately after being arrested.

Until 2018, in Indian law, adultery was defined as sex between a man and a woman without the consent of the woman’s husband. The man was prosecutable and could be sentenced for up to five years (even if he himself was unmarried), whereas the married woman could not be jailed.

In Southwest Asia, adultery has attracted severe sanctions, including death penalty. In some places, such as Saudi Arabia, the method of punishment for adultery is stoning to death. Proving adultery under Muslim law can be a very difficult task as it requires the accuser to produce four eyewitnesses to the act of sexual intercourse, each of whom should have a good reputation for truthfulness and honesty. The criminal standards do not apply in the application of social and family consequences of adultery, where the standards of proof are not as exacting.

Adultery is no longer a crime in any European country. Among the last Western European countries to repeal their laws were Italy (1969), Malta (1973), Luxembourg (1974), France (1975), Spain (1978), Portugal (1982), Greece (1983), Belgium (1987), Switzerland (1989), and Austria (1997).

In most Communist countries adultery was not a crime. Romania was an exception, where adultery was a crime until 2006, though the crime of adultery had a narrow definition, excluding situations where the other spouse encouraged the act or when the act happened at a time the couple was living separate and apart; and in practice prosecutions were extremely rare.

English common law defined the crime of seduction as a felony committed “when a male person induced an unmarried female of previously chaste character to engage in an act of sexual intercourse on a promise of marriage.” A father had the right to maintain an action for the seduction of his daughter (or the enticement of a son who left home), since this deprived him of services or earnings.

In more modern times, Frank Sinatra was charged in New Jersey in 1938 with seduction, having enticed a woman “of good repute to engage in sexual intercourse with him upon his promise of marriage.” The charges were dropped when it was discovered that the woman was already married.

Buddhist Pali texts narrate legends where the Buddha explains the karmic consequences of adultery. For example, states Robert Goldman, one such story is of Thera Soreyya. Buddha states in the Soreyya story that men who commit adultery “suffer hell for hundreds of thousands of years after rebirth, then are reborn a hundred successive times as women on earth, must earn merit by ‘utter devotion to their husbands’ in these lives, before they can be reborn again as men to pursue a monastic life and liberation from samsara.”

According to Muhammad, an unmarried person who commits adultery or fornication is punished by flogging 100 times; a married person will then be stoned to death. A survey conducted by the Pew Research Center found support for stoning as a punishment for adultery mostly in Arab countries; it was supported in Egypt (82% of respondents in favor of the punishment) and Jordan (70% in favor), as well as Pakistan (82% favor), whereas in Nigeria (56% in favor) and in Indonesia (42% in favor) opinion is more divided, perhaps due to diverging traditions and differing interpretations of Sharia.

The Roman Lex Julia, Lex Iulia de Adulteriis Coercendis (17 BC), punished adultery with banishment. The two guilty parties were sent to different islands (“dummodo in diversas insulas relegentur”), and part of their property was confiscated. Fathers were permitted to kill daughters and their partners in adultery. Husbands could kill the partners under certain circumstances and were required to divorce adulterous wives.

Durex’s Global Sex Survey found that worldwide 22% of people surveyed admitted to have had extramarital sex. In the United States Alfred Kinsey found in his studies that 50% of males and 26% of females had extramarital sex at least once during their lifetime. Depending on studies, it was estimated that 26–50% of men and 21–38% of women, or 22.7% of men and 11.6% of women, had extramarital sex. Other authors say that between 20% and 25% of Americans had sex with someone other than their spouse. Three 1990s studies in the United States, using nationally representative samples, have found that about 10–15% of women and 20–25% of men admitted to having engaged in extramarital sex.

The Standard Cross-Cultural Sample described the occurrence of extramarital sex by gender in over 50 pre-industrial cultures. The occurrence of extramarital sex by men is described as “universal” in 6 cultures, “moderate” in 29 cultures, “occasional” in 6 cultures, and “uncommon” in 10 cultures. The occurrence of extramarital sex by women is described as “universal” in 6 cultures, “moderate” in 23 cultures, “occasional” in 9 cultures, and “uncommon” in 15 cultures.

Traditionally, many cultures, particularly Latin American ones, had strong double standards regarding male and female adultery, with the latter being seen as a much more serious violation.

Adultery involving a married woman and a man other than her husband was considered a very serious crime. In 1707, English Lord Chief Justice John Holt stated that a man having sexual relations with another man’s wife was “the highest invasion of property” and claimed, in regard to the aggrieved husband, that “a man cannot receive a higher provocation” (in a case of murder or manslaughter).

The Encyclopedia of Diderot & d’Alembert, Vol. 1 (1751), also equated adultery to theft writing that, “adultery is, after homicide, the most punishable of all crimes, because it is the most cruel of all thefts, and an outrage capable of inciting murders and the most deplorable excesses.”

On BDSM


The United States Federal law does not list a specific criminal determination for consensual BDSM acts. Some states specifically address the idea of “consent to BDSM acts” within their assault laws, such as the state of New Jersey, which defines “simple assault” to be “a disorderly persons offense unless committed in a fight or scuffle entered into by mutual consent, in which case it is a petty disorderly persons offense”.

Mutual combat, a term commonly used in United States courts, occurs when two individuals intentionally and consensually engage in a fair fight, while not hurting bystanders or damaging property. There is not an official law that forbids mutual combat in the United States. There have been numerous cases where this concept was successfully used in defense of the accused. In some cases, mutual combat may nevertheless result in killings.

Oregon Ballot Measure 9 was a ballot measure in the U.S. state of Oregon in 1992, concerning sadism, masochism, gay rights, pedophilia, and public education, that drew widespread national attention. It would have added the following text to the Oregon Constitution:

All governments in Oregon may not use their monies or properties to promote, encourage or facilitate homosexuality, pedophilia, sadism or masochism. All levels of government, including public education systems, must assist in setting a standard for Oregon’s youth which recognizes that these behaviors are abnormal, wrong, unnatural and perverse and they are to be discouraged and avoided.

Dildos or any object used for “the stimulation of human genital organs” cannot be made or sold in Alabama. The Anti-Obscenity Enforcement Act says that anyone caught with such tools could face a fine up to $20,000, a one-year jail sentence, or 12 months of hard labor.

Florida bans “lewd and lascivious behavior,” which is defined as a situation where “any man and woman, not being married to each other, lewdly and lasciviously associate and cohabit together.” The misdemeanor is punishable by a fine of up to $500. In Mississippi, an unmarried couple caught living together “whether in adultery or fornication” can face up to six months in jail and/or a $500 fine.

In 2003, the U.S. Supreme Court deemed a Texas state law that banned the practice of anal and oral sex between same-sex couples as unconstitutional. Despite the ruling, a sizable list of states, including Texas, still have anti-sodomy laws on the books.

Louisiana’s “crime against nature” statute prohibits “the unnatural carnal copulation by a human being with another of the same sex or opposite sex or with an animal.” The state legislature in April failed to pass a bill that would have repealed the law except for human-on-animal relations.

Other states that have some form of anti-sodomy laws include Kansas, Oklahoma, Alabama, Florida, Idaho, Louisiana, Mississippi, North Carolina, South Carolina, and Utah, according to the Human Rights Campaign. Virginia repealed its ban in March.

On Divorce

Today, every state plus the District of Columbia permits no-fault divorce, though requirements for obtaining a no-fault divorce vary. California was the first U.S. state to pass a no-fault divorce law. Its law was signed by Governor Ronald Reagan, a divorced and remarried former movie actor, and came into effect in 1970. New York was the last state to pass a no-fault divorce law; that law was passed in 2010.

Prior to the advent of no-fault divorce, a divorce was processed through the adversarial system as a civil action, meaning that a divorce could be obtained only through a showing of fault of one (and only one) of the parties in a marriage. This required that one spouse plead that the other had committed adultery, abandonment, felony, or other similarly culpable acts. However, the other spouse could plead a variety of defenses, like recrimination (essentially an accusation of “so did you”). A judge could find that the respondent had not committed the alleged act or the judge could accept the defense of recrimination and find both spouses at fault for the dysfunctional nature of their marriage. Either of these two findings was sufficient to defeat an action for divorce, which meant that the parties remained married.

Before no-fault divorce was available, spouses seeking divorce would often allege false grounds for divorce. Removing the incentive to perjure was one motivation for the no-fault movement.

In the States of Wisconsin, Oregon, Washington, Nevada, Nebraska, Montana, Missouri, Minnesota, Michigan, Kentucky, Kansas, Iowa, Indiana, Hawaii, Florida, Colorado and California, a person seeking a divorce is not permitted to allege a fault-based ground (e.g. adultery, abandonment or cruelty).

In some states, requirements were even more stringent. For instance, under its original (1819) constitution, Alabama required not only the consent of a court of chancery for a divorce (and only “in cases provided for by law”), but equally that of two-thirds of both houses of the state legislature. This requirement was dropped in 1861, when the state adopted a new constitution at the outset of the American Civil War. The required vote in this case was even stricter than that required to overturn the governor’s veto in Alabama, which required only a simple majority of both houses of the General Assembly.

These requirements could be problematic if both spouses were at fault or if neither spouse had committed a legally culpable act but both spouses desired a divorce by mutual consent. Lawyers began to advise their clients on how to create legal fictions to bypass the statutory requirements. One method popular in New York was referred to as “collusive adultery”, in which both sides deliberately agreed that the wife would come home at a certain time and discover her husband committing adultery with a “mistress” obtained for the occasion. The wife would then falsely swear to a carefully tailored version of these facts in court (thereby committing perjury). The husband would admit a similar version of those facts. The judge would convict the husband of adultery, and the couple could be divorced. Researchers studying the later shift to no-fault divorce report that “states that adopted no-fault divorce experienced a decrease of 8 to 16 percent in wives’ suicide rates and a 30 percent decline in domestic violence.”

The Code of Hammurabi (1754 BC) declares that a man must provide sustenance to a woman who has borne him children, so that she can raise them:

If a man wish to separate from a woman who has borne him children, or from his wife who has borne him children: then he shall give that wife her dowry, and a part of the usufruct of field, garden, and property, so that she can rear her children. When she has brought up her children, a portion of all that is given to the children, equal as that of one son, shall be given to her. She may then marry the man of her heart.

In the 1970s, the United States Supreme Court ruled against gender bias in alimony awards and, according to the U.S. Census Bureau, the percentage of alimony recipients who are male rose from 2.4% in 2001 to 3.6% in 2006. In states like Massachusetts and Louisiana, the salaries of new spouses may be used in determining the alimony paid to the previous partners.

Some of the possible factors that bear on the amount and duration of the support are:



Length of the marriage or civil union

Generally, alimony lasts for a term or period. However, it will last longer if the marriage or civil union lasted longer. A marriage or civil union of over 10 years is often a candidate for permanent alimony.

Time separated while still married

In some U.S. states, separation is a triggering event, recognized as the end of the term of the marriage. Other U.S. states do not recognize separation or legal separation. In a state not recognizing separation, a 2-year marriage followed by an 8-year separation will generally be treated like a 10-year marriage.

Age of the parties at the time of the divorce

Generally, more youthful spouses are considered to be more able to ‘get on’ with their lives, and therefore thought to require shorter periods of support.

Relative income of the parties

In U.S. states that recognize a right of the spouses to live ‘according to the means to which they have become accustomed’, alimony attempts to adjust the incomes of the spouses so that they are able to approximate, as best possible, their prior lifestyle.

Future financial prospects of the parties

A spouse who is going to realize significant income in the future is likely to have to pay higher alimony than one who is not.

Health of the parties

Poor health goes towards need, and potentially an inability to support oneself. The courts are disinclined to leave one party indigent.

Fault in marital breakdown

In U.S. states where fault is recognized, fault can significantly affect alimony, increasing, reducing or even nullifying it. Many U.S. states are ‘no-fault’ states, where one does not have to show fault to get divorced. No-fault divorce spares the spouses the acrimony of the ‘fault’ processes, and closes the eyes of the court to any and all improper spousal behavior. In Georgia, however, a person who has an affair that causes the divorce is not entitled to alimony.

Record-breaking Collatz chains!

One of my favorite bits of mathematics is the Collatz conjecture. It’s so simple that you can explain it to any grade schooler, and yet so mysterious that Paul Erdős declared “Mathematics may not be ready for such problems.”

Here’s the conjecture:

Take any natural number N. If this number is even, divide it by two. If on the other hand N is odd, we multiply it by three and add 1. So N becomes N/2 if N is even, and 3N+1 if N is odd. Now, repeat this process for the resulting number! If even, divide by two. If odd, multiply by three and add one. And then repeat with the new number, and keep on chugging away.

The conjecture is that you will always eventually get to 1 (regardless of the initial value of N!)

That’s it! It’s that simple. Here are some examples:

Say we start with N=2. Since 2 is even, we divide by 2 and get 1. And that’s that, we’re done!

What if we start with N=3? Now, since 3 is odd, we must multiply by three and add 1. So 3 becomes 10. 10 is even, so we divide by 2 to get 5. And 5 is odd, so we multiply by three and add one to get 16. And lo and behold, 16 goes to 8, which goes to 4, which goes to 2, and then back down to 1!
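
These examples are easy to reproduce with a few lines of Python (a minimal sketch of the process just described):

```python
# Apply the Collatz step until we reach 1, collecting every value on the way.
def collatz_chain(n):
    chain = [n]
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        chain.append(n)
    return chain

print(collatz_chain(2))  # [2, 1]
print(collatz_chain(3))  # [3, 10, 5, 16, 8, 4, 2, 1]
```

Looping this over a range of starting values regenerates the table of sequences below.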

Here’s a table of the first few “Collatz sequences”:

[image: table of the first few Collatz sequences]

A few comments:

First, if you look closely, you’ll notice that there’s a lot of repetition in that list. For instance, look at the chain that starts at 9. It ends up at 7 after just three steps. And then the whole rest of the chain is identical to the one two above it! And starting at 10, one step takes us to 5, the chain of which we had already calculated.

This hints that any method for calculating the Collatz sequences of numbers could likely benefit from dynamic programming: using the work you’ve previously done to make your future tasks quicker. Think about it, by calculating the chains of just the numbers 3 through 10, we’ve also managed in the process to figure out the exact chains for 11, 13, 14, 16, 17, 20, and more numbers up to 52!
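
That memoization idea can be sketched in a few lines of Python: cache the step count for every value ever visited, so each new chain stops as soon as it reaches known territory.

```python
# cache maps a starting value to its number of Collatz steps down to 1.
cache = {1: 0}

def chain_length(n):
    path = []
    while n not in cache:            # walk down until we hit a known value
        path.append(n)
        n = n // 2 if n % 2 == 0 else 3 * n + 1
    steps = cache[n]
    for m in reversed(path):         # fill in the cache on the way back up
        steps += 1
        cache[m] = steps
    return steps

print(chain_length(9))   # 19
print(chain_length(27))  # 111
```

After computing the chain for 27, for example, the lengths for 82, 41, 124, and every other value along its trajectory come for free.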

Second comment: Notice that these chains look really weird. They jump up and down unpredictably, and their lengths are all over the place. Here’s what the “9” chain looks like as it jumps up and down:

[image: the chain starting at 9, jumping up and down]

And, to take an even more dramatic example, here’s what the chain starting with 27 looks like:


That takes 111 steps to stop! And in the process, it rises as high as 9,232!

Based on this example, you might be thinking that these chains are in general going to increase faster than their starting value. But don’t be deceived: I cherry-picked 27 because I knew that it would generate an interesting long chain. In fact, the lengths of the chains are in general remarkably small relative to their starting value. Here’s a plot of the first 100 chain lengths:


Though the values are growing, and some of them are up in the 100s, most of them are in the sub-40 range.

If you’re like me, though, you might notice some very curious patterns in this graph. What’s going on with those clusters of three or two that keep showing up right beside each other? And why do we seem to have these two distinct “regions” of chain lengths? Let’s look to see if these patterns persist as we plot the chain lengths for higher values, this time up to 1000:


Interesting! So it appears like the “two regions” kind of blend together, but this pattern of strings of nearby numbers with similar chain lengths stays the same! This is extremely mysterious to me. Much of the time the specific paths the numbers take on their way to 1 are quite different, so why should we see them having the same length?

Well, part of the answer might be that in fact, the paths taken by nearby numbers are sometimes actually not too different at all. For instance, 500 and 501 both take 110 steps to get to 1, and their chains become identical after just three steps. Let’s look closely at these steps:

500 becomes 250 becomes 125 becomes 376
501 becomes 1504 becomes 752 becomes 376

Let’s look in general at what happens to N and N+1 if they follow this exact sequence of moves:

N becomes N/2 becomes N/4 becomes 3N/4 + 1
N+1 becomes 3(N+1) + 1 becomes (3(N+1) + 1)/2 becomes (3(N+1) + 1)/4

But hold on: (3(N+1) + 1)/4 just is 3N/4 + 1! So in fact, we should expect that whenever we do these sequences of moves, N and N+1 will have the same chain length. And when does this happen? Well, N had to be divisible by 4 but not by 8 (so that N/4 is odd), and 3N + 4 had to be divisible by 4, which is automatic once 4 divides N. So the condition is just that N leaves remainder 4 when divided by 8, which is true for exactly 1/8 of all numbers. This already tells us that at least about 1/8 of neighboring numbers have the same chain length!
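
A short Python sketch can confirm this empirically: whenever N is divisible by 4 but not 8 (skipping N = 4 itself, whose chain hits 1 within the first three steps), N and N+1 coincide after three steps and so have equal chain lengths.

```python
def step(n):
    return n // 2 if n % 2 == 0 else 3 * n + 1

def chain_length(n):
    steps = 0
    while n != 1:
        n = step(n)
        steps += 1
    return steps

for n in range(12, 2000, 8):  # n = 12, 20, 28, ...: divisible by 4 but not 8
    a, b = n, n + 1
    for _ in range(3):
        a, b = step(a), step(b)
    assert a == b == 3 * n // 4 + 1       # both chains merge at 3N/4 + 1
    assert chain_length(n) == chain_length(n + 1)
print("verified")
```

The example from the text fits this pattern too: 500 leaves remainder 4 when divided by 8, and indeed 500 and 501 merge at 376 = 3·500/4 + 1.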

And this was only looking at chains that became identical after three steps! We could go further and look at other possible sequences of moves that would make our chains become identical quickly, and in the process get a better understanding of the structure of the graph.

By the way, want to see how this graph behaves for even larger starting values? Here ya go! N up to 10,000:


Our N has 10-tupled, but our chain lengths have barely increased at all! Same for N up to 100,000:


Same story! Increasing our N by a factor of 10 seems to really only increase our chain lengths by adding about 100! This indicates that the relationship between starting value and chain length might be logarithmic (a multiplicative increase in starting value corresponding to an additive increase in chain length). Let’s look at a semilog plot of the same range of starting values:


Wow! This exposes a whole new level of structure in the chain lengths. Look at all those parallel lines of clusters! And also look at how nicely linearly the chain lengths appear to grow with the exponent of the x-axis! This is a bit mind-blowing to me that the chain length of N seems to be so naturally related to log(N).

Extending this graph to 100 times more values of N:


We can really clearly see how linear everything remains! Another feature that pops out here is the little “ledges” that occasionally jut out to the left in the graph. These tend to appear whenever we find an N whose chain length is dramatically larger than the previous numbers, and it then carries along some of its larger neighbors.

This might raise the question: how often do we actually see new record chain lengths? I mean, clearly we’re going to get larger and larger chain lengths forever, off to infinity, but how often do these record-breaking chains appear?

Surprisingly rarely! Let’s define a record-breaking Collatz chain as a Collatz chain such that all the Collatz chains with smaller starting values are shorter in length. Let’s look at where these record-breakers appear for N up to 100,000:


They look pretty evenly spaced! But remember, the x-axis is growing exponentially. So in fact, they get exponentially farther apart.
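
Finding these record-breakers is a simple one-pass scan; here is a sketch in Python (for brevity, without the memoization described earlier):

```python
def chain_length(n):
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# n is a record-breaker if its chain is longer than that of every smaller start.
def record_breakers(limit):
    best, records = -1, []
    for n in range(2, limit + 1):
        length = chain_length(n)
        if length > best:
            best = length
            records.append((n, length))
    return records

print(record_breakers(100))
# [(2, 1), (3, 7), (6, 8), (7, 16), (9, 19), (18, 20), (25, 23), (27, 111), (54, 112), (73, 115), (97, 118)]
```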

By the way, look at that weird straight line that appears about midway between 10¹ and 10². The exact values of these record breakers are:

54: 112
73: 115
97: 118
129: 121
171: 124
231: 127
313: 130

That’s right, the record increases by exactly 3, six times in a row! This might be a fluke, or maybe there’s something deeper going on.

Also, I’m sure you’re curious about that one record-breaker that jumps way above the previous record-breakers. It turns out that you’ve already seen it: it’s 27!

27 is unique in just how much it exceeds all the previous records, going from 23 to 111 (increasing the record by 88). As you can see in the graph, no other record-breaker beats its previous records by nearly as much, even though we’re looking all the way up to 100,000. Perhaps this is just a fluke caused by the early records being especially low? This seems supported by the fact that if we look all the way up to 10 million, we still see that nothing beats 27.


Define a record-breaking record-breaker as a record-breaker that beats the previous record by a greater amount than all previous record breakers. Now, 27 is clearly a record-breaking record-breaker. There are also two before it: 3, which beats the previous record by 6, and 7, which beats it by 8.

We might have a postulate in mind: 3, 7, and 27 are the only record-breaking record-breakers. Can we prove this?

Well, it turns out that we can’t. Because the postulate is not true! Let’s extend our graph by another factor of 10, going up to 100 million. (By the way, the dynamic programming I mentioned earlier is absolutely essential at this point for any of this to be possible.)


Now we can see that in fact there is another record-breaking record-breaker, all the way over to the right side of the graph! It turns out to be 63,728,127, and it beats the previous record by 205! So 27 is apparently not the largest one. A new question that naturally arises, a question to which I have no answer, is: are there infinitely many record-breaking record-breakers? Or does the sequence end at some point? I conjecture that yes, there are infinitely many such overachievers.

I have two more things to geek out about before I finish up. First of all, take a look at the actual values of the record-breakers up to 100,000:

2: 1
3: 7
6: 8
7: 16
9: 19
18: 20
25: 23
27: 111
54: 112
73: 115
97: 118
129: 121
171: 124
231: 127
313: 130
327: 143
649: 144
703: 170
871: 178
1161: 181
2223: 182
2463: 208
2919: 216
3711: 237
6171: 261
10971: 267
13255: 275
17647: 278
23529: 281
26623: 307
34239: 310
35655: 323
52527: 339
77031: 350

If you skim this list over, you’ll notice that these record-breakers appear to all be odd. But there are in fact four exceptions in this list that appear near its beginning: 2, 6, 18, and 54. And after these four, the even record-breakers appear to disappear. This sort of makes sense; if the first thing you do with your number is make it larger, this will in general result in a longer chain. But are there any more even record breakers larger than 54?

As it turns out, yes there are! But the next even record breaker doesn’t appear until we get into the tens of millions. It is 31,466,382. This is pretty baffling! Try showing your friends the sequence [2, 6, 18, 54, 31466382] and see if they can figure out what comes next! (It turns out that the next number is 127,456,254, which is exactly twice the previous record breaker!)

And now one final observation. There seem to be disproportionately many record-breakers that beat the previous record by only 1 or 3. For the heck of it, let’s define shiftless record-breakers as those that beat the previous record by 3, and lazy record-breakers as those that beat it by 1.

One simple way this might happen is if the previous record-breaker is N, and there are no new record-breakers up to 2N. Then 2N will be the new record-breaker, since its first step will be dividing by 2, after which it takes the rest of N’s chain for itself. So its chain length will be one larger than the chain length of N. (By the way, it turns out that our large even record-breaker is in fact an example of a lazy record-breaker.)
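This is easy to check directly, since the first Collatz step from 2N is always a halving back down to N (the sample values below are arbitrary):

```python
def chain_length(n):
    """Collatz steps from n down to 1 (no memoization needed here)."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# Doubling any n prepends exactly one halving step to n's chain
for n in (27, 54, 703):
    assert chain_length(2 * n) == chain_length(n) + 1
```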

Are there really disproportionately many lazy and shiftless record breakers? Let’s look at the amount that each record breaker beat its previous record breaker by. This sequence looks like:

6, 1, 8, 3, 1, 3, 88, 1, 3, 3, 3, 3, 3, 3, 13, 1, 26, 8, 3, 1, 26, 8, 21, 24, 6, 8, 3, 3, 26, 3, 13, 16, 11, 3, 21, 8, 3, 57, 6, 21, 39, 16, 3, 3, 26, 3, 3, 21, 13, 16, 52, 21, 3, 3, 13, 1, 39, 205

This is another really mind-blowing observation for me! The differences between record-breakers seem to take on only very specific values, like 1, 3, 6, 8, and 11. Is there ever a difference of 2? Or 5? I have no idea, and I’d love to know! I’ll leave you with this histogram showing how often each difference appears among all record-breakers up to 100 million:


Producing all numbers using just four fours

How many of the natural numbers do you think you can produce just by combining four 4s with standard mathematical operations?

The answer might blow your mind a little bit. It’s all of them!

In particular, you can get any natural number using just the symbols ‘4’, ‘√’, ‘log’, and ‘/’. And you can do it with only four 4s!

Try it for yourself before moving on!




Firing Squads and The Fine Tuning Argument

I’m confused about how satisfactory a multiverse is as an alternative explanation for the fine-tuning of our universe (alternative to God, that is).

My initial intuition about this is that it is a perfectly satisfactory explanation. It looks like we can justify this on Bayesian grounds by noting that the probability of the universe we’re in being fine-tuned for intelligent life given that there is a multiverse is nearly 1. The probability of fine-tuning given God is also presumably nearly 1, so the observation of fine-tuning shouldn’t push us much in one direction or the other.

(Obligatory photo of the theorem doing the work here)


But here’s another argument I’m aware of: A firing squad of twenty sharpshooters aims at you and fires. They all miss. You are obviously very surprised by this. But now somebody comes up to you and tells you that in fact there is a multiverse full of “you”s in identical situations. They all faced down the firing squad, and the vast majority of them died. Now, given that you exist to ask the question, of COURSE you are in the universe in which they all missed. So should you be no longer surprised?

I take it the answer to this is “No, even though I know that I could only be alive right now asking this question if the firing squad missed, this doesn’t remove any mystery from the firing squad missing. It’s exactly as mysterious that I am alive right now as that the firing squad missed, so my existence doesn’t lessen the explanatory burden we face.”

The firing squad situation seems exactly parallel to the fine-tuning of the universe. We find ourselves in a universe that is remarkably fine tuned in a way that seems extremely a priori improbable. Now we’re told that there are in fact a massive number of universes out there, the vast majority of which are devoid of life. So of course we exist in one of the universes that is fine-tuned for our existence.

Let’s make this even more intuitive: The earth exists in a Goldilocks zone around the Sun. Much closer or farther away and life would not be possible. Maybe this was mysterious at some point when humans still thought that there was just one solar system in the universe. But now we know that galaxies contain hundreds of billions of solar systems, most of which probably don’t have any planets in their Goldilocks zones. And with this knowledge, the mystery entirely disappears. Of course we’re on a planet that can support life, where else would we be??

So my question is: Why does this argument feel satisfactory in the fine-tuning and Goldilocks examples but not the firing squad example?

A friend I asked about this responded:

if you modify the firing squad scenario so that you don’t exist prior to the shooting and are only brought into existence if they all miss, does it still feel less satisfactory than the multiverse case?

And I responded that no, it no longer feels less satisfactory than the multiverse case! Somehow this tweak “fixes” the intuitions. This suggests that the relevant difference between the two cases is something about existence prior to the time of the thought experiment. But how do we formalize this difference? And why should it be relevant? I’m perplexed.

Is the double slit experiment evidence that consciousness causes collapse?

No! No no no.

This might be surprising to those that know the basics of the double slit experiment. For those that don’t, very briefly:

A bunch of tiny particles are thrown one by one at a barrier with two thin slits in it, with a detector sitting on the other side. The pattern on the detector formed by the particles is an interference pattern, which appears to imply that each particle went through both slits in some sense, like a wave would do. Now, if you peek really closely at each slit to see which one each particle passes through, the results seem to change! The pattern on the detector is no longer an interference pattern, but instead looks like the pattern you’d classically expect from a particle passing through only one slit!

When you first learn about this strange dependence of the experimental results on, apparently, whether you’re looking at the system or not, it appears to be good evidence that your conscious observation is significant in some very deep sense. After all, observation appears to lead to fundamentally different behavior, collapsing the wave to a particle! Right?? This animation does a good job of explaining the experiment in a way that really pumps the intuition that consciousness matters:

(Fair warning, I find some aspects of this misleading and just plain factually wrong. I’m linking to it not as an endorsement, but so that you get the intuition behind the arguments I’m responding to in this post.)

The feeling that consciousness is playing an important role here is a fine intuition to have before you dive deep into the details of quantum mechanics. But now consider that the exact same behavior would be produced by a very simple process that is very clearly not a conscious observation. Namely, just put a single spin qubit at one of the slits in such a way that if the particle passes through that slit, it flips the spin upside down. Guess what you get? The exact same results as you got by peeking at the slits. You never need to look at the particle as it travels through the slits to the detector in order to collapse the wave-like behavior. Apparently a single qubit is sufficient to do this!

It turns out that what’s really going on here has nothing to do with the collapse of the wave function and everything to do with the phenomenon of decoherence. Decoherence is what happens when a quantum superposition becomes entangled with the degrees of freedom of its environment in such a way that the branches of the superposition end up orthogonal to each other. Interference can only occur between the different branches if they are not orthogonal, which means that decoherence is sufficient to destroy interference effects. This is all stuff that all interpretations of quantum mechanics agree on.
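Here’s a toy numerical version of that orthogonality argument (the phase profiles below are made up for illustration, not derived from any real slit geometry):

```python
import numpy as np

# Amplitudes arriving at screen position x from the two slits
x = np.linspace(-5, 5, 201)
psi_L = np.exp(1j * 2.0 * x) / np.sqrt(2)   # phase picked up via left slit
psi_R = np.exp(-1j * 2.0 * x) / np.sqrt(2)  # phase picked up via right slit

# No which-path record: both branches share the same environment state,
# so amplitudes add BEFORE squaring -> interference fringes
I_coherent = np.abs(psi_L + psi_R) ** 2     # = 1 + cos(4x)

# Qubit flipped by one slit: the branches end in orthogonal environment
# states, the cross terms vanish, probabilities add -> no fringes
I_decohered = np.abs(psi_L) ** 2 + np.abs(psi_R) ** 2  # = 1 everywhere
```

The only difference between the two computations is whether the cross term between the branches survives, and that is exactly what entanglement with the qubit kills.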

Once you know that decoherence destroys interference effects (which all interpretations of quantum mechanics agree on), and also that a conscious observer checking the state of a system is a process that results in extremely rapid and total decoherence (which everybody also agrees on), then the fact that observing the position of the particle causes interference effects to vanish becomes totally independent of the question of what causes wave function collapse. Whether or not consciousness causes collapse is 100% irrelevant to the results of the experiment, because regardless of which of these is true, quantum mechanics tells us to expect observation to result in the loss of interference!

This is why whether or not consciousness causes collapse has no real impact on what pattern shows up on the detector. All interpretations of quantum mechanics agree that decoherence is a thing that can happen, and decoherence is all that is required to explain the experimental results. The double slit experiment provides no evidence for consciousness causing collapse, but it also provides no evidence against it. It’s just irrelevant to the question! That said, however, given that people often hear the experiment presented in a way that makes it seem like evidence for consciousness causing collapse, hearing that qubits do the same thing should make them update downwards on this theory.

Consistently reflecting on decision theory

There are many principles of rational choice that seem highly intuitively plausible at first glance. Sometimes upon further reflection we realize that a principle we initially endorsed is not quite as appealing as we first thought, or that it clashes with other equally plausible principles, or that it requires certain exceptions and caveats that were not initially apparent. We can think of many debates in philosophy as clashes of these general principles, where the thought experiments generated by philosophers serve as the data that put their relative merits and deficits on display. In this essay, I’ll explore a variety of different principles for rational decision making and consider the ways in which they satisfy and frustrate our intuitions. I will focus especially on the notion of reflective consistency, and see what sort of decision theory results from treating it as our primary desideratum.

I want to start out by illustrating the back-and-forth between our intuitions and the general principles we formulate. Consider the following claim, known as the dominance principle: If a rational agent believes that doing A is better than not doing A in every possible world, then that agent should do A even if uncertain about which world they are in. Upon first encountering this principle, it seems perfectly uncontroversial and clearly valid. But now consider the following application of the dominance principle:

“A student is considering whether to study for an exam. He reasons that if he will pass the exam, then studying is wasted effort. Also, if he will not pass the exam, then studying is wasted effort. He concludes that because whatever will happen, studying is wasted effort, it is better not to study.” (Titelbaum 237)

The fact that this is clearly a bad argument casts doubt on the dominance principle. It is worth taking a moment to ask what went wrong here. How did this example turn our intuitions on their heads so completely? Well, the flaw in the student’s line of reasoning was that he was ignoring the effect of his studying on whether or not he ends up passing the exam. This dependency between his action and the possible world he ends up in should be relevant to his decision, and it apparently invalidates his dominance reasoning.

A restricted version of the dominance principle fixes this flaw: If a rational agent prefers doing A to not doing A in all possible worlds and which world they are in is independent of whether they do A or not, then that agent should do A even if they are uncertain about which world they are in. I’ll call this the simple dominance principle. This principle is much harder to disagree with than our starting principle, but the caveat about independence greatly limits its scope. It applies only when our uncertainty about the state of the world is independent of our decision, which is not the case in most interesting decision problems. We’ll see by the end of this essay that even this seemingly obvious principle can be made to conflict with another intuitively plausible principle of rational choice.
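For concreteness, the simple dominance principle amounts to a pointwise check over a payoff table (the table and its numbers below are mine, purely illustrative):

```python
# payoffs[action][world]; simple dominance only licenses a choice when
# the probability of each world is independent of the action taken
payoffs = {
    "A":     {"w1": 10, "w2": 3},
    "not_A": {"w1":  8, "w2": 1},
}

def dominates(a, b, payoffs):
    """True if action a beats action b in every possible world."""
    return all(payoffs[a][w] > payoffs[b][w] for w in payoffs[a])
```

Here “A” dominates “not_A”, so, given independence, a rational agent does A. The student’s mistake was running this check on a table where the action (studying) changes the probabilities of the worlds (passing and failing).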

The process of honing our intuitions and fine-tuning our principles like this is sometimes called seeking reflective consistency, where reflective consistency is the hypothetical state you end up in after a sufficiently long period of consideration. Reflective consistency is achieved when you have reached a balance between your starting intuitions and other meta-level desiderata like consistency and simplicity, such that your final framework is stable against further intuition-pumping. This process has parallels in other areas of philosophy such as ethics and epistemology, but I want to suggest that it is particularly potent when applied to decision theory. The reason for this is that a decision theory makes recommendations for what action to take in any given setup, and we can craft setups where the choice to be made is about what decision theory to adopt. I’ll call these setups self-reflection problems. By observing what choices a decision theory makes in self-reflection problems, we get direct evidence about whether the decision theory is reflectively consistent or not. In other words, we don’t need to do all the hard work of allowing thought experiments to bump our intuitions around; we can just take a specific decision algorithm and observe how it behaves upon self-reflection!

What we end up with is the following principle: Whatever decision theory we end up endorsing should be self-recommending. We should not end up in a position where we endorse decision theory X as the final best theory of rational choice, but then decision theory X recommends that we abandon it for some other decision theory that we consider less rational.

The connection between self-recommendation and reflective consistency is worth fleshing out in a little more detail. I am not saying that self-recommendation is sufficient for reflective consistency. A self-recommending decision theory might be obviously in contradiction with our notion of rational choice, such that any philosopher considering this decision theory would immediately discard it as a candidate. Consider, for instance, alphabetical decision theory, which always chooses the option which comes alphabetically first in its list of choices. When faced with a choice between alphabetical decision theory and, say, evidential decision theory, alphabetical decision theory will presumably choose itself, reasoning that ‘a’ comes before ‘e’. But we don’t want to call this a virtue of alphabetical decision theory. Even if it is uniformly self-recommending, alphabetical decision theory is sufficiently distant from any reasonable notion of rational choice that we can immediately discard it as a candidate.

On the other hand, even though not all self-recommending theories are reflectively consistent, any reflectively consistent decision theory must be self-recommending. Self-recommendation is a necessary but not sufficient condition for an adequate account of rational choice.

Now, it turns out that this principle is too strong as I’ve phrased it and requires a few caveats. One issue with it is what I’ll call the problem of unfair decision problems. For example, suppose that we are comparing evidential decision theory (henceforth EDT) to causal decision theory (henceforth CDT). (For the sake of time and space, I will assume as background knowledge the details of how each of these theories works.) We put each of them up against the following self-reflection problem:

An omniscient agent peeks into your brain. If they see that you are an evidential decision theorist, they take all your money. Otherwise they leave you alone. Before they peek into your brain, you have the ability to modify your psychology such that you become either an evidential or causal decision theorist. What should you do?

EDT reasons as follows: If I stay an EDT, I lose all my money. If I self-modify to CDT, I don’t. I don’t want to lose all my money, so I’ll self-modify to CDT. So EDT is not self-recommending in this setup. But clearly this is just because the setup is unfairly biased against EDT, not because of any intrinsic flaw in EDT. In fact, it’s a virtue of a decision theory to not be self-recommending in such circumstances, as doing so indicates a basic awareness of the payoff structure of the world it faces.

While this certainly seems like the right thing to say about this particular decision problem, we need to consider how exactly to formalize this intuitive notion of “being unfairly biased against a decision theory.” There are a few things we might say here. For one, the distinguishing feature of this setup seems to be that the payout is determined not based off the decision made by an agent, but by their decision theory itself. This seems to be at the root of the intuitive unfairness of the problem; EDT is being penalized not for making a bad decision, but simply for being EDT. A decision theory should be accountable for the decisions it makes, not for simply being the particular decision theory that it happens to be.

In addition, by swapping “evidential decision theory” and “causal decision theory” everywhere in the setup, we end up arriving at the exact opposite conclusion (evidential decision theory looks stable, while causal decision theory does not). As long as we don’t have any a priori reason to consider one of these setups more important to take into account than the other, then there is no net advantage of one decision theory over the other. If a decision problem belongs to a set of equally a priori important problems obtained by simply swapping out the terms for different decision theories, and no decision theory comes out ahead on the set as a whole, then perhaps we can disregard the entire set for the purposes of evaluating decision theories.

The upshot of all of this is that what we should care about is decision problems that don’t make any direct reference to a particular decision theory, only to decisions. We’ll call such problems decision-determined. Our principle then becomes the following: Whatever decision theory we end up endorsing should be self-recommending in all decision-determined problems.

There’s certainly more to be said about this principle and if any other caveats need be applied to it, but for now let’s move on to seeing what we end up with when we apply this principle in its current state. We’ll start out with an analysis of the infamous Newcomb problem.

You enter a room containing two boxes, one opaque and one transparent. The transparent box contains $1,000. The opaque box contains either $0 or $1,000,000. Your choice is to either take just the opaque box (one-box) or to take both boxes (two-box). Before you entered the room, a predictor scanned your brain and created a simulation of you to see what you would do. If the simulation one-boxed, then the predictor filled the opaque box with $1,000,000. If the simulation two-boxed, then the opaque box was left empty. What do you choose?

EDT reasons as follows: If I one-box, then this gives me strong evidence that I have the type of brain that decides to one-box, which gives me strong evidence that the predictor’s simulation of me one-boxed, which in turn gives me strong evidence that the opaque box is full. So if I one-box, I expect to get $1,000,000. On the other hand, if I two-box, then this gives me strong evidence that my simulation two-boxed, in which case the opaque box is empty. So if I two-box, I expect to get only $1,000. Therefore one-boxing is better than two-boxing.

CDT reasons as follows: Whether the opaque box is full or empty is already determined by the time I entered the room, so my decision has no causal effect upon the box’s contents. And regardless of the contents of the box, I always expect to leave $1,000 richer by two-boxing than by one-boxing. So I should two-box.
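We can make the disagreement concrete as two different expected-value calculations (I’m assuming a 99%-accurate predictor below; that number is mine, not part of the problem statement):

```python
acc = 0.99  # assumed predictor accuracy

# EDT: your choice is evidence about what your simulation did
ev_edt_one_box = acc * 1_000_000
ev_edt_two_box = (1 - acc) * 1_000_000 + 1_000

# CDT: box contents already fixed, full with some probability p;
# two-boxing comes out exactly $1,000 ahead for EVERY value of p
def ev_cdt(p_full):
    return p_full * 1_000_000, p_full * 1_000_000 + 1_000
```

So EDT one-boxes and CDT two-boxes, each correctly by its own lights.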

At this point it’s important to ask whether Newcomb’s problem is a decision-determined problem. After all, the predictor decides whether to fill the opaque box by scanning your brain and simulating you. Isn’t that suspiciously similar to our earlier example of penalizing agents based off their decision theory? No. The simulator decides what to do not by evaluating your decision theory, but by its prediction about your decision. You aren’t penalized for being a causal decision theorist, just for being the type of agent that two-boxes. To see this you only need to observe that any decision theory that two-boxes would be treated identically to CDT in this problem. The determining factor is the decision, not the decision theory.

Now, let’s make the Newcomb problem into a test of reflective consistency. Instead of your choice being about whether to one-box or to two-box while in the room, your choice will now take place before you enter the room, and will be about whether to be an evidential decision theorist or a causal decision theorist when in the room. What does each theory do?

EDT’s reasoning: If I choose to be an evidential decision theorist, then I will one-box when in the room. The predictor will simulate me as one-boxing, so I’ll end up walking out with $1,000,000. If I choose to be a causal decision theorist, then I will two-box when in the room, the predictor will predict this, and I’ll walk out with only $1,000. So I will stay an EDT.

Interestingly, CDT agrees with this line of reasoning. The decision to be an evidential or causal decision theorist has a causal effect on how the predictor’s simulation behaves, so a causal decision theorist sees that the decision to stay a causal decision theorist will end up leaving them worse off than if they had switched over. So CDT switches to EDT. Notice that in CDT’s estimation, the decision to switch ends up making them $999,000 better off. This means that CDT would pay up to $999,000 just for the privilege of becoming an evidential decision theorist!

I think that looking at an actual example like this makes it more salient why reflective consistency and self-recommendation is something that we actually care about. There’s something very obviously off about a decision theory that knows beforehand that it will reliably perform worse than its opponent, so much so that it would be willing to pay up to $999,000 just for the privilege of becoming its opponent. This is certainly not the type of behavior that we associate with a rational agent that trusts itself to make good decisions.

Classically, this argument has been phrased in the literature as the “why ain’tcha rich?” objection to CDT, but I think that the objection goes much deeper than this framing would suggest. There are several plausible principles that all apply here, such as that a rational decision maker shouldn’t regret having the decision theory they have, a rational decision maker shouldn’t pay to limit their future options, and a rational decision maker shouldn’t pay to decrease the values in their payoff matrix. The first of these is fairly self-explanatory. One influential response to it has been from James Joyce, who said that the causal decision theorist does not regret their decision theory, just the situation they find themselves in. I’d suggest that this response makes little sense when the situation they find themselves in is a direct result of their decision theory. As for the second and third of these, we could imagine giving a causal decision theorist the choice to pay money to remove the future possibility of two-boxing, or to hire a gunman that would shoot them if they try to take the transparent box. In each of these cases, CDT would endorse the decision to pay. I mention these other principles just to suggest that we have sound philosophical reason to care about self-recommendation independent of the reflective consistency considerations that we started out by discussing.

The takeaway from this modified Newcomb problem is that CDT is not reflectively consistent, and that it will self-modify to EDT in the right circumstances. Does this mean that EDT is more reflectively consistent than CDT? It turns out that no, this is not the case. We can construct another thought experiment in which EDT self-modifies to become CDT, called the Newcomb’s Soda problem:

You were just given either Soda 1 or Soda 2, which are indistinguishable; each was equally likely to be the one you received. You are now given a choice between chocolate ice cream and vanilla ice cream. Those who had Soda 1 have a strong unconscious inclination to choose chocolate ice cream and will be given $1,000,000 after they choose their ice cream. Those who had Soda 2 have a strong unconscious inclination to choose vanilla ice cream and are given nothing. If you choose vanilla ice cream, you get $1,000. What do you choose?

EDT reasons that choosing chocolate ice cream gives them strong evidence that they were given Soda 1, in which case they will be given $1,000,000. So they would be willing to give up the $1,000 reward for choosing vanilla ice cream, in order to increase their chances of getting the million. CDT reasons that what soda they were given is a fixed fact that isn’t causally influenced by the decision they make. So they have a 50% chance of getting the million regardless of their choice, but choosing to take the vanilla guarantees them an additional $1,000. Therefore they take vanilla.
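A quick expected-value sketch shows why the theories land where they do (I’m assuming that choosing chocolate makes it 90% likely you had Soda 1; that number is mine, purely illustrative):

```python
p_soda1_given_choc = 0.9  # assumed evidential strength of the inclination

# EDT: conditions on the choice as evidence about which soda you had
ev_edt_choc    = p_soda1_given_choc * 1_000_000
ev_edt_vanilla = (1 - p_soda1_given_choc) * 1_000_000 + 1_000

# CDT: the soda is a fixed 50/50 fact, untouched by the choice
ev_cdt_choc    = 0.5 * 1_000_000
ev_cdt_vanilla = 0.5 * 1_000_000 + 1_000
```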

This is clearly a decision-determined problem, so let’s see how EDT and CDT do upon self-reflection. Given the chance to modify themselves to EDT before being given the soda, a causal decision theorist would see no advantage to doing so. After all, being an evidential decision theorist doesn’t make you more or less likely to get the $1,000,000, it just ensures that you won’t get the $1,000. And the evidential decision theorist agrees! By changing to causal decision theory, they guarantee themselves an extra $1,000 while not making it any more or less likely that they were given Soda 1 (and accordingly, the million dollars).

The upshot of this is that both CDT and EDT are reflectively inconsistent on the class of decision-determined problems. What we seek, then, is a new decision theory that behaves like EDT in the Newcomb problem and like CDT in Newcomb’s Soda. One such theory was pioneered by machine learning researcher Eliezer Yudkowsky, who named it timeless decision theory (henceforth TDT). To deliver different verdicts in the two problems, we must find some feature that allows us to distinguish between their structure. TDT does this by distinguishing between the type of correlation arising from ordinary common causes (like the soda in Newcomb’s Soda) and the type of correlation arising from faithful simulations of your behavior (as in Newcomb’s problem).

This second type of correlation is called logical dependence, and is the core idea motivating TDT. The simplest example of this is the following: two twins, physically identical down to the atomic level, raised in identical environments in a deterministic universe, will have perfectly correlated behavior throughout the lengths of their lives, even if they are entirely causally separated from each other. This correlation is apparently not due to a common cause or to any direct causal influence. It simply arises from the logical fact that two faithful instantiations of the same function will return the same output when fed the same input. Considering the behavior of a human being as an instantiation of an extremely complicated function, it becomes clear why you and your parallel-universe twin behave identically: you are instantiations of the same function! We can take this a step further by noting that two functions can have a similar input-output structure, in which case the physical instantiations of each function will have correlated input-output behavior. This correlation is what’s meant by logical dependence.

To spell this out a bit further, imagine that in a far away country, there are factories that sell very simple calculators. Each calculator is designed to run only one specific computation. Some factories are multiplication-factories; they only sell calculators that compute 713*291. Others are addition-factories; they only sell calculators that compute 713+291. You buy two calculators from one of these factories, but you’re not sure which type of factory you purchased from. Your credences are 50/50 split between the factory you purchased from being a multiplication-factory and it being an addition-factory. You also have some logical uncertainty regarding what the value of 713*291 is. You are evenly split between the value being 207,481 and the value being 207,483. On the other hand, you have no uncertainty about what the value of 713+291 is; you know that it is 1004.

Now, you press “ENTER” on one of the two calculators you purchased, and find that the result is 207,483. For a rational reasoner, two things should now happen: First, you should treat this result as strong evidence that the factory from which both calculators were bought was a multiplication-factory, and therefore that the other calculator is also a multiplier. And second, you should update strongly on the other calculator outputting 207,483 rather than 207,481, since two calculators running the same computation will output the same result.

The point of this example is that it clearly separates out ordinary common cause correlation from a different type of dependence. The common cause dependence is what warrants you updating on the other calculator being a multiplier rather than an adder. But it doesn’t warrant you updating on the result on the other calculator being specifically 207,483; to do this, we need the notion of logical dependence, which is the type of dependence that arises whenever you encounter systems that are instantiating the same or similar computations.
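We can separate the two updates explicitly with a little joint-probability table (the tabulation and the deterministic-output assumption are my own modeling choices):

```python
# Joint prior over (factory type, value the calculator computes)
priors = {
    ("mult", 207_481): 0.25,  # multiplication factory, one candidate value
    ("mult", 207_483): 0.25,  # multiplication factory, other candidate
    ("add",  1_004):   0.50,  # 713 + 291 is known with certainty
}

def posterior(observed):
    """Condition on one calculator displaying `observed`."""
    kept = {h: p for h, p in priors.items() if h[1] == observed}
    total = sum(kept.values())
    return {h: p / total for h, p in kept.items()}

post = posterior(207_483)
# In this toy version the update is total: the factory must be a
# multiplication-factory, and the other calculator must also
# display 207,483.
```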

Connecting this back to decision theory, TDT treats our decision as the output of a formal algorithm, which is our decision-making process. The behavior of this algorithm is entirely determined by its logical structure, which is why there are no upstream causal influences such as the soda in Newcomb’s Soda. But the behavior of this algorithm is going to be correlated with the parts of the universe that instantiate a similar function (as well as the parts of the universe it has a causal influence on). In Newcomb’s problem, for example, the predictor generates a detailed simulation of your decision process based off of a brain scan. This simulation of you is highly logically correlated with you, in that it will faithfully reproduce your behavior in a variety of situations. So if you decide to one-box, you are also learning that your simulation is very likely to one-box (and therefore that the opaque box is full).

Notice that the exact mechanism by which the predictor operates becomes very important for TDT. If the predictor operates by means of some ordinary common cause where no logical dependence exists, TDT will treat its prediction as independent of your choice. This translates over to why TDT behaves like CDT on Newcomb’s Soda, as well as other so-called “medical Newcomb problems” such as the smoking lesion problem. When the reason for the correlation between your behavior and the outcome is merely that both depend on a common input, TDT treats your decision as an intervention and therefore independent of the outcome.

One final way to conceptualize TDT and the difference between the different types of correlation is using structural equation modeling:

Direct causal dependence exists between A and B when A is a function of B or when B is a function of A.
 > A = f(B) or B = g(A)

Common cause dependence exists between A and B when A and B are both functions of some other variable C.
 > A = f(C) and B = g(C)

Logical dependence exists between A and B when A and B depend on their inputs in similar ways.
 > A = f(C) and B = f(D)
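These three patterns can be sketched directly in code. The functions and values below are arbitrary placeholders; what matters is how each kind of dependence responds to an intervention:

```python
def f(x):
    return 2 * x + 1

# Common cause: A = f(C), B = g(C). Setting A by fiat (an intervention)
# does not change B -- they were only linked through C.
def common_cause(C, A_override=None):
    A = f(C) if A_override is None else A_override
    B = C * C                      # B = g(C)
    return A, B

_, B1 = common_cause(5)
_, B2 = common_cause(5, A_override=999)
assert B1 == B2                    # B ignores the intervention on A

# Logical dependence: A and B are outputs of the *same* function.
# Fixing the function's behavior moves both outputs together.
def logical(C, D, f_override=None):
    func = f if f_override is None else f_override
    return func(C), func(D)

A1, B1 = logical(4, 4)
A2, B2 = logical(4, 4, f_override=lambda x: 0)
assert A1 == B1 and A2 == B2       # same inputs -> outputs in lockstep
```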

TDT takes direct causal dependence and logical dependence seriously, and ignores common cause dependence. We can formally express this by saying that TDT calculates the expected utility of a decision by treating it like a causal intervention and fixing the output of all other instantiations of TDT to be identical interventions. Using Judea Pearl’s do-calculus notation for causal intervention, this looks like:

$$EU(D) = \sum_{w} U(w) \, P\big(w \mid \text{do}(TDT(K) = D)\big)$$

Here K is the TDT agent’s background knowledge, D is chosen from a set of possible decisions, and the sum is over all possible worlds. This equation isn’t quite right, since it doesn’t indicate what to do when the computation a given system instantiates is merely similar to TDT but not logically identical, but it serves as a first approximation to the algorithm.
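As a sanity check on this idea, here's a toy version of the calculation for Newcomb's problem, assuming a perfect simulation whose output is pinned to your decision by the intervention (the CDT comparison holds the prediction fixed instead):

```python
# Toy TDT-style expected utility for Newcomb's problem.
def payoff(decision, prediction):
    box_a = 1_000_000 if prediction == "one-box" else 0
    box_b = 1_000
    return box_a + (box_b if decision == "two-box" else 0)

def tdt_expected_utility(decision):
    # The intervention do(TDT = D) also fixes the simulation's output,
    # because the simulation instantiates the same algorithm.
    prediction = decision
    return payoff(decision, prediction)

def cdt_expected_utility(decision, p_full):
    # CDT treats the (already-made) prediction as independent of the choice.
    return p_full * payoff(decision, "one-box") + (1 - p_full) * payoff(decision, "two-box")

# TDT one-boxes; CDT two-boxes no matter its credence that the box is full.
assert tdt_expected_utility("one-box") > tdt_expected_utility("two-box")
assert cdt_expected_utility("two-box", 0.5) > cdt_expected_utility("one-box", 0.5)
```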

You might notice that the notion of logical dependence depends on the idea of logical uncertainty, as without it the result of the computations would be known with certainty as soon as you learn that the calculators came out of a multiplication-factory, without ever having to observe their results. Thus any theory that incorporates logical dependence into its framework will be faced with a problem of logical omniscience, which is to say, it will have to give some account of how to place and update reasonable probability distributions over tautologies.

The upshot of all of this is that TDT is reflectively consistent on a larger class of problems than both EDT and CDT. Both EDT and CDT would self-modify into TDT in Newcomb-like problems if given the choice. Correspondingly, if you throw a bunch of TDTs and EDTs and CDTs into a world full of Newcomb and Newcomb-like problems, the TDTs will come out ahead. However, it turns out that TDT is not itself reflectively consistent on the whole class of decision-determined problems. Examples like the transparent Newcomb problem, Parfit’s hitchhiker, and counterfactual mugging all expose reflective inconsistency in TDT.

Let’s look at the transparent Newcomb problem. The structure is identical to a Newcomb problem (you walk into a room with two boxes, $1,000 in one and either $1,000,000 or $0 in the other, determined by the behavior of your simulation), except that both boxes are transparent. This means that you already know with certainty the contents of both boxes. CDT two-boxes here like always. EDT also two-boxes, since any dependence between your decision and the box’s contents is made irrelevant as soon as you see the contents. TDT agrees with this line of reasoning; even though it sees a logical dependence between your behavior and your simulation’s behavior, knowing whether the box is full or empty fully screens off this dependence.

Two-boxing feels to many like the obvious rational choice here. The choice you face is simply whether to take $1,000,000 or $1,001,000 if the box is full. If it’s empty, your choice is between taking $1,000 or walking out empty-handed. But two-boxing also has a few strange consequences. For one, imagine that you are placed, blindfolded, in a transparent Newcomb problem. You can at any moment decide to remove your blindfold. If you are an EDT agent, you will reason that as long as you keep the blindfold on, you are essentially in an ordinary Newcomb problem, so you will one-box and correspondingly walk away with $1,000,000. But if you do remove your blindfold, you’ll end up two-boxing and most likely walking away with only $1,000. So an EDT agent would pay up to $999,000 just for the privilege of staying blindfolded. This seems to conflict with an intuitive principle of rational choice, which goes something like: a rational agent should never expect to be worse off by simply gaining information. Paying money to keep yourself from learning relevant information seems like a sure sign of a pathological decision theory.

Of course, there are two ways out of this. One way is to follow the causal decision theorist and two-box in both the ordinary Newcomb problem and the transparent problem. This has all the issues that we’ve already discussed, most prominently that you end up systematically and predictably worse off by doing so. If you pit a causal decision theorist against an agent that always one-boxes, even in transparent Newcomb problems, CDT ends up the poorer. And since CDT can reason this through beforehand, they would willingly self-modify to become the other type of agent.

What type of agent is this? None of the three decision theories we’ve discussed give the reflectively consistent response here, so we need to invent a new decision theory. The difficulty with any such theory is that it has to be able to justify sticking to its guns and one-boxing even after conditioning on the contents of the box.

In general, similar issues will arise whenever the recommendations made by a decision theory are not time-consistent. For instance, the decision that TDT prescribes for an agent with background knowledge K depends heavily on the information that TDT has at the time of prescription. This means that at different times, TDT will make different recommendations for what to do in the same situation (before entering the room, TDT recommends one-boxing once in the room; after entering, it recommends two-boxing). This leads to suboptimal performance. Agents that can decide on one course of action and credibly precommit to it get certain benefits that aren’t available to agents without this ability. I think the clearest example of this is Parfit’s hitchhiker:

You are stranded in the desert, running out of water, and soon to die. A Predictor approaches and tells you that they will drive you to town only if they predict you will pay them $100 once you get there.

All of EDT, CDT, and TDT wish that they could credibly precommit to paying the money once in town, but can’t. Once they are in town, they no longer have any reason to pay the $100, since they condition on the fact that they have already been rescued. The fact that EDT, CDT, and TDT all make time-sensitive recommendations leaves them worse off: all three are left stranded in the desert to die. Each of these agents would willingly switch to a decision theory whose recommendations don’t change over time. How would such a decision theory work? It looks like we need a decision theory that acts as if it doesn’t know whether it’s in town even once it’s in town, and acts as if it doesn’t know the contents of the box even after seeing them. One strategy for achieving this behavior is simple: you just decide on your strategy without ever conditioning on the fact that you are in town!
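A minimal sketch of why updateful reasoning loses here (the utilities are made-up placeholders; what matters is their ordering):

```python
# Parfit's hitchhiker with illustrative utilities: being rescued is
# worth far more than the $100 fee.
U_DIE, U_RESCUED, COST = -1_000_000, 0, 100

def updateful_choice():
    # Once in town, the agent conditions on already having been rescued;
    # paying now looks like a pure $100 loss, so it refuses.
    return "don't pay"

def outcome(policy_in_town):
    # A perfect predictor only drives you to town if it predicts
    # that your policy, once there, is to pay.
    if policy_in_town == "pay":
        return U_RESCUED - COST     # rescued, out $100
    return U_DIE                    # left in the desert

# The updateful agent's "rational" refusal is exactly what strands it.
assert outcome("pay") > outcome(updateful_choice())
```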

The decision theory that arises from this choice is appropriately named updateless decision theory (henceforth UDT). UDT is peculiar in that it never actually updates on any information when determining how to behave. That’s not to say that a UDT agent’s decisions ignore the information it receives over its lifetime. Instead, UDT tells you to choose a policy – a mapping from the possible pieces of information you might receive to possible decisions you could make – that maximizes expected utility, calculated using your prior over possible worlds. This policy is set from time zero and never changes, and it determines how the UDT agent responds to any information it might receive at later points. So, for instance, a UDT agent reasons that adopting a policy of one-boxing in the transparent Newcomb case, regardless of what you see, maximizes expected utility as calculated using your prior. So once the UDT agent is in the room with the transparent box, it one-boxes. We can formalize this by analogy with TDT:

$$EU(\pi) = \sum_{w} U(w) \, P\big(w \mid \text{do}(UDT = \pi)\big)$$
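As a toy instance of this policy optimization, here's the transparent Newcomb problem under the assumption of a perfect predictor that fills the box exactly when your policy one-boxes upon seeing a full box:

```python
from itertools import product

# UDT policy selection for the transparent Newcomb problem.
def expected_utility(policy):
    """policy maps the observation ('full'/'empty') to a decision."""
    box_full = (policy["full"] == "one-box")   # perfect-predictor assumption
    observation = "full" if box_full else "empty"
    decision = policy[observation]
    box_a = 1_000_000 if box_full else 0
    box_b = 1_000
    return box_a + (box_b if decision == "two-box" else 0)

policies = [dict(zip(("full", "empty"), ds))
            for ds in product(["one-box", "two-box"], repeat=2)]
best = max(policies, key=expected_utility)

# The optimal policy one-boxes even while looking at a full box.
assert best["full"] == "one-box"
assert expected_utility(best) == 1_000_000
```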

One concern with this approach is that a UDT agent might end up making silly decisions as a result of not taking into account information that is relevant to those decisions. But once again, the UDT agent does take into account the information it learns over its lifetime. It merely decides what to do with that information before receiving it, and never updates this prescription. For example, suppose that a UDT agent anticipates facing exactly one decision problem in its life: whether or not to push a button. It has a 50% prior credence that pushing the button will result in losing $10, and 50% that it will result in gaining $10. At some point before the decision, the agent is told whether pushing the button leads to the gain or the loss. UDT deals with this by choosing a policy for how to respond to that information in either case. The expected-utility-maximizing policy here is to push the button upon learning that pushing leads to gaining $10, and not to push upon learning the opposite.
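The button example can be run as a small policy search, a sketch using the $10 stakes from above:

```python
from itertools import product

# Before learning anything, choose how to respond to each possible
# piece of news about the button.
PRIOR = {"gain": 0.5, "lose": 0.5}
PAYOFF = {("gain", "push"): 10, ("gain", "refrain"): 0,
          ("lose", "push"): -10, ("lose", "refrain"): 0}

def expected_utility(policy):
    """Prior-weighted payoff of a news -> action mapping."""
    return sum(PRIOR[news] * PAYOFF[(news, policy[news])] for news in PRIOR)

policies = [dict(zip(("gain", "lose"), acts))
            for acts in product(["push", "refrain"], repeat=2)]
best = max(policies, key=expected_utility)

# The chosen policy is responsive to the news, even though it was
# fixed before the news arrived.
assert best == {"gain": "push", "lose": "refrain"}
assert expected_utility(best) == 5.0
```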

Since UDT chooses its preferred policy based on its prior, this recommendation never changes throughout a UDT agent’s lifetime. This seems to indicate that UDT will be self-recommending in the class of all decision-determined problems, although I’m not aware of a full proof of this. If this is correct, then we have reached our goal of finding a self-recommending decision theory. It is interesting to consider what other principles of rational choice ended up being violated along the way. The simple dominance principle that we started off by discussing appears to be an example of this. In the transparent Newcomb problem, there is only one possible world that the agent considers when in the room (the one in which the box is full, say), and in this one world, two-boxing dominates one-boxing. Given that the box is full, your decision to one-box or to two-box is completely independent of the box’s contents. So the simple dominance principle recommends two-boxing. But UDT disagrees.

Another example of a deeply intuitive principle that UDT violates is the irrelevance of impossible outcomes. This principle says that impossible outcomes should not factor into your decision-making process. But UDT seems to often recommend acting as if some impossible world might come to be. For instance, suppose a predictor walks up to you and gives you a choice to either give them $10 or give them $100. You will face no future consequences on the basis of your decision (besides whether you’re out $100 or only $10). However, you learn that the predictor only approached you because it predicted that you would give the $10. Do you give the $10 or the $100? UDT recommends giving the $100, because agents that do so are less likely to have been approached by the predictor. But if you’ve already been approached, then you are letting considerations about an impossible world influence your decision process!

Our quest for reflective consistency took us from EDT and CDT to timeless decision theory. TDT used the notion of logical dependence to get self-recommending behavior in the Newcomb problem and medical Newcomb cases. But we found that TDT was itself reflectively inconsistent in problems like the Transparent Newcomb problem. This led us to create a new theory that made its recommendations without updating on information, which we called updateless decision theory. UDT turned out to be a totally static theory, setting forth a policy determining how to respond to all possible bits of information and never altering this policy. The unchanging nature of UDT indicates the possibility that we have found a truly self-recommending decision theory, while also leading to some quite unintuitive consequences.


Decision Theory

Everywhere below where a Predictor is mentioned, assume that their predictions are made by scanning your brain to create a highly accurate simulation of you and then observing what this simulation does.

All the below scenarios are one-shot games. Your action now will not influence the future decision problems you end up in.

Newcomb’s Problem
Two boxes: A and B. B contains $1,000. A contains $1,000,000 if the Predictor thinks you will take just A, and $0 otherwise. Do you take just A or both A and B?

Transparent Newcomb, Full Box
Newcomb problem, but you can see that box A contains $1,000,000. Do you take just A or both A and B?

Transparent Newcomb, Empty Box
Newcomb problem, but you can see that box A contains nothing. Do you take just A or both A and B?

Newcomb with Precommitment
Newcomb’s problem, but you have the ability to irrevocably resolve to take just A in advance of the Predictor’s prediction (which will still be just as good if you do precommit). Should you precommit?

Take Opaque First
Newcomb’s problem, but you have already taken A and it has been removed from the room. Should you now also take B or leave it behind?

Smoking Lesion
Some people have a lesion that causes cancer as well as a strong desire to smoke. Smoking doesn’t cause cancer and you enjoy it. Do you smoke?

Smoking Lesion, Unconscious Inclination
Some people have a lesion that causes cancer as well as a strong unconscious inclination to smoke. Smoking doesn’t cause cancer and you enjoy it. Do you smoke?

Smoking and Appletinis
Drinking a third appletini is the kind of act much more typical of people with addictive personalities, who tend to become smokers. You’d like to drink a third appletini, but you really don’t want to be a smoker. Should you order the appletini?

Expensive Hospital
You just got into an accident which gave you amnesia. You need to choose to be treated at either a cheap hospital or an expensive one. The quality of treatment in the two is the same, but you know that billionaires, due to unconscious habit, will be biased towards using the expensive one. Which do you choose?

Rocket Heels and Robots
The world contains robots and humans, and you don’t know which you are. Robots rescue people whenever possible and have rockets in their heels that activate whenever necessary. Your friend falls down a mine shaft and will die soon without robotic assistance. Should you jump in after them?

Death in Damascus
If you and Death are in the same city tomorrow, you die. Death is a perfect predictor, and will come where he predicts you will be. You can stay in Damascus or pay $1000 to flee to Aleppo. Do you stay or flee?

Psychopath Button
If you press a button, all psychopaths will be killed. Only a psychopath would press such a button. Do you press the button?

Parfit’s Hitchhiker
You are stranded in the desert, running out of water, and soon to die. A Predictor will drive you to town only if they predict you will pay them $100 once you get there. You have been brought into town. Do you pay?

XOR Blackmail
An honest predictor sends you this letter: “I sent this letter because I predicted that you have termites iff you won’t send me $100. Send me $100.” Do you send the money?

Twin Prisoner’s Dilemma
You are in a prisoner’s dilemma with a twin of yourself. Do you cooperate or defect?

Predictor Extortion
A Predictor approaches you and threatens to torture you unless you hand over $100. They only approached you because they predicted beforehand that you would hand over the $100. Do you pay up?

Counterfactual Mugging
A Predictor flips a coin, which lands heads, and approaches you to ask for $100. If the coin had landed tails, the Predictor would have tortured you if it predicted that you wouldn’t give the $100. Do you give the $100?

Newcomb’s Soda
You have 50% credence that you were given Soda 1, and 50% that you were given Soda 2. Those that had Soda 1 have a strong unconscious inclination to choose chocolate ice cream and will be given $1,000,000. Those that had Soda 2 have a strong unconscious inclination to choose vanilla ice cream and are given nothing. If you choose vanilla ice cream, you get $1000. Do you choose chocolate or vanilla ice cream?

Meta-Newcomb Problem
Two boxes: A and B. A contains $1,000. Box B will contain either nothing or $1,000,000. What B will contain is (or will be) determined by a Predictor just as in the standard Newcomb problem. Half of the time, the Predictor makes his move before you by predicting what you’ll do. The other half, the Predictor makes his move after you by observing what you do. There is a Metapredictor, who has an excellent track record of predicting Predictor’s choices as well as your own. The Metapredictor informs you that either (1) you choose A and B and Predictor will make his move after you make your choice, or else (2) you choose only B, and Predictor has already made his choice. Do you take only B or both A and B?

Rationality in the face of improbability

I recently read my favorite Wikipedia article of all time. It’s about a park ranger named Roy Cleveland Sullivan, whose claim to fame was having been hit by lightning on seven different occasions and surviving all of them. The details of these events are both tragic and a little hilarious, and raise some interesting questions about rationality.

From the article:

In spring 1972, Sullivan was working inside a ranger station in Shenandoah National Park when another strike occurred. It set his hair on fire; he tried to smother the flames with his jacket. He then rushed to the restroom, but couldn’t fit under the water tap and so used a wet towel instead. Although he never was a fearful man, after the fourth strike he began to believe that some force was trying to destroy him and he acquired a fear of death. For months, whenever he was caught in a storm while driving his truck, he would pull over and lie down on the front seat until the storm passed. He also began to believe that he would somehow attract lightning even if he stood in a crowd of people, and carried a can of water with him in case his hair was set on fire.

Put yourself in his situation and ask yourself whether you might have started doing the same things after the fourth strike. Now what about if it kept happening?

On August 7, 1973, while he was out on patrol in the park, Sullivan saw a storm cloud forming and drove away quickly. But the cloud, he said later, seemed to be following him. When he finally thought he had outrun it, he decided it was safe to leave his truck. Soon after, he was struck by a lightning bolt.

The next strike, on June 5, 1976, injured his ankle. It was reported that he saw a cloud, thought that it was following him, tried to run away, but was struck anyway. His hair also caught fire.

He was struck the seventh time while fishing in a freshwater pool, which in a weird turn of events was followed by a confrontation with a bear over some trout that he had caught.

What’s more, Sullivan claimed to have been struck by lightning another time as a child, when out helping his father cut wheat in a field.

And furthermore…

Sullivan’s wife was also struck once, when a storm suddenly arrived as she was out hanging clothes in their backyard. Her husband was helping her at the time, but escaped unharmed.

Apparently his fear of lightning was a little contagious:

He was avoided by people later in life because of their fear of being hit by lightning, and this saddened him. He once recalled “For instance, I was walking with the Chief Ranger one day when lightning struck way off (in the distance). The Chief said, ‘I’ll see you later.'”

Okay, so aside from being a hilariously weird series of events, this article does raise some issues related to anthropic reasoning. Namely: what would it be rational for Roy Sullivan to believe?

I want to say that this man had really really good evidence that some angry Thor-like deity existed and was actively hunting him down. In his position, I think I’d feel like it was only rational to try to run from approaching clouds and thunderstorms (although that strategy didn’t seem to be super effective for him).

But at the same time, in a world of billions of people, it’s almost guaranteed to be the case that somebody will find themselves in circumstances just as unlikely as this. If Sullivan had one day looked up lightning strike statistics, and found that the numbers for the overall population were perfectly consistent with a naturalistic hypothesis in which lightning doesn’t target any particular individuals, how should he have responded?
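One way to frame his predicament is as a Poisson calculation. All the rates below are invented placeholders, not real lightning statistics; the sketch only shows how sensitive the "in a big enough population, somebody had to be this unlucky" reasoning is to the assumed exposure:

```python
import math

def p_at_least(k, lam, terms=60):
    """P(X >= k) for X ~ Poisson(lam), summed directly for numerical
    stability (avoids computing 1 minus a number very close to 1)."""
    return sum(math.exp(-lam) * lam**i / math.factorial(i)
               for i in range(k, k + terms))

lam_typical = 1e-4   # assumed lifetime expected strikes, typical person
lam_ranger = 0.5     # assumed rate for a high-exposure outdoor career

# Under the "typical person" assumption, even 8 billion people make
# seven strikes astronomically unlikely...
assert 8e9 * p_at_least(7, lam_typical) < 1e-20
# ...but the answer swings wildly with the assumed exposure rate.
assert p_at_least(7, lam_ranger) > 1e-7
```

The naturalistic hypothesis thus lives or dies on exposure assumptions that the raw population-level statistics don't settle by themselves.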

And what should we believe about Roy Sullivan and lightning? Presumably we should not accept his non-naturalistic conclusions. But then what exactly is the difference between what we know and what he knows? We both have the same statistical information about the general trends in lightning strikes, and we both know that Roy Cleveland Sullivan was hit by lightning a bunch of times, so why should we come to different conclusions?

The obvious thought here is that it has something to do with anthropic reasoning. Sure, I have the same non-anthropic evidence as Roy Sullivan, but we have different anthropic evidence. Sullivan doesn’t just know the comparatively unremarkable proposition that “somebody was hit by lightning seven times and survived,” he knows the indexical proposition that “I was hit by lightning seven times and survived.” The non-Sullivans of the world don’t have access to this proposition, and maybe this is the difference that matters.

Perhaps any population will end up having some individuals that happen to find themselves in very unusual situations, in which it becomes rational for them to come to bizarre conclusions about the world for anthropic reasons. And the bigger the population, the stranger and more rationally certain these beliefs might become. Imagine a population big enough that it becomes not unlikely that some individual walks around commanding Thor to send bolts of lightning where they’re pointing, and then lo and behold it happens each time.

There would be many, many more individuals out there who succeeded only a few times, and even more who never succeeded at all. But for that tiny fraction who appear to manifest god-like powers, what should they believe? What should their friends and family believe? How far does the anthropic update extend? I’m not sure.

The history of lighting technology

Behold, one of my favorite tables of all time:


There’s so much to absorb here. Let’s look at just the “Light Price in Terms of Labor” column. At 500,000 BC, our starting point, we have this handsome guy:

Peking Man

The Peking man was a Homo erectus found in a cave with evidence of tool use and basic fire technology. At this point, it would have taken him about 58 hours of work for every 1000 hours of light. Lighting a fire by hand or even with basic tools is hard.

Nothing much changes for hundreds of thousands of years, until people begin using basic candles and oil lamps in the 1800s. After that, things slowly begin to accelerate, with gas lighting, incandescent lamps, and eventually fluorescent bulbs and LEDs…


Notice that this is a logarithmic plot! A straight line therefore corresponds to an exponential decrease in the amount of labor required to produce light. By the end, less than 1 second of labor buys 1000 hours of light. And this doesn’t even include LED technologies!
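Just taking the two endpoints quoted here (about 58 hours of labor per 1000 hours of light at the start, and conservatively rounding "less than 1 second" up to 1 second at the end), the implied fold-decrease can be computed directly:

```python
import math

# Endpoints from the text, in seconds of labor per 1000 hours of light.
start_seconds = 58 * 3600    # ~58 hours of work
end_seconds = 1              # conservative upper bound on the modern cost

fold_decrease = start_seconds / end_seconds   # 208,800x cheaper

# On the logarithmic plot this is a drop of roughly 5.3 orders of magnitude.
orders_of_magnitude = math.log10(fold_decrease)
assert 5.3 < orders_of_magnitude < 5.4
```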

Here’s a more detailed timeline of milestones in lighting technology up to the 1980s:


And finally, a comparison of the efficiency of different lighting technologies over time.