TIMN view of social evolution

(Papers here and here)

In the Neolithic era, societies are thought to have been mostly small groups bonded by kinship relations, with little social stratification. As technological advancement accommodated more complex social structures and larger groups of humans living together, problems of coordination became increasingly difficult. In response, more complex social structures arose, such as Chiefdoms, States and eventually Empires.

These structures solved coordination problems through a top-down command-and-control approach, enforced by strict hierarchical power structures. Historical exemplars include Ancient Egypt and the Roman Empire. These societies experienced immense growth, expanding to dominate vast stretches of territory and millions of people.

But as they grew, these societies began facing increasingly difficult problems of managing vast amounts of information involving complex exchanges and economic dynamics. Eventually, old mercantilist systems in which the state was in charge of economic transactions gave way to a grand new form of social structure: the market.

Societies that adopted market structures alongside the state became global leaders, dominating technological, social, and economic progress up until the present day. And just as previous forms of society had their distinctive failings, capitalistic societies face problems in the creation of social inequalities without the ability to address them.

Advances in technology that allow a revolutionary capacity for information exchange are resulting in the formation of a new form of social structure to address these problems. This structure is characterized by complicated heterarchical cooperation between massive networks of physically dispersed individuals, all coordinating on the basis of shared ideological aims. It is to these networked societies that the future belongs.

This is the view of history offered by political scientist David Ronfeldt, who framed the TIMN theory of social evolution.

If I were to summarize his entire theory in four sentences, I would say:

Societies through history can be explained through the interactions of four major forms of social structure: the Tribe, the Institution, the Market, and the Network. Each form defines a structure of governance and the way that individuals interact with one another, as well as cultural values and beliefs about the way society should be organized.

Each has its own strengths and weaknesses, and the progress of history has been a move towards adopting all four forms in a complicated balance. The future will belong to those societies that realize the potential of the network form and successfully incorporate it into their social structure.

There are a lot of parallels between this and other things that I’ve read. I’ll go into those in a moment, but first I’ll lay out more detailed definitions of his four primary structures.

The Tribe: Tribes are characterized by tight kinship relationships. Tribal social structures create strong senses of social identity and belonging, and define the culture of successive societies. They are small, egalitarian, and generally lack a strong leader. Their limitations are problems of administration and coordination as they grow, as well as nepotism and intertribal wars. Historical examples abound in the Neolithic era, and in modern times they exist in certain hot spots in Third World countries. In the First World, tribal patterns exist within families, urban gangs, civic clubs, and more abstractly in nationalism, racism and sports team mania.

The Institution: Institutions are characterized by authority figures, strict hierarchies, management structures, and administrative bureaucracies. Their strengths involve administration and solving coordination problems. They are afflicted with problems of corruption and abuse of power, as well as difficulty processing large amounts of information, leading to economic inefficiency. Examples include the great Empires, and they exist today in states, military organizations, religious organizations, and corporations.

The Market: Markets are characterized by competition and voluntary exchanges between self-interested individuals. They are decentralized and nonhierarchical, and do well at handling enormous amounts of complex information and optimizing economic efficiency in exchanges of private goods. They lead to productive and innovative societies with thriving trade and commerce. Markets struggle to deal with externalities and lead to social inequality. Markets historically took off in the transition from mercantilism to capitalism in Europe, and are exemplified by the economies of the U.S. and the U.K. and more recently Chile, China, and Mexico.

The Network: Networks are characterized by cooperation between many autonomous individuals with no single central authority, where each individual is connected to all others. They are tied together not by blood or kinship relationships, but by ideology and common goals. Their strengths are yet to be seen, though Ronfeldt thinks that they could do well at promoting “group empowerment” and solving social issues. Same with their weaknesses, though he points vaguely in the direction of “information overload” and “deception”. Examples include social networks and transnational networks of NGOs.

Networks are the most poorly specified and speculative of the four forms. This is perhaps to be expected; after all, he thinks they have only begun to come into prominence at the advent of the Information Age.

They’re also the form that he stresses the most, making lots of breathless predictions about networked societies superseding the market-state societies that dominate the status quo. He urges states like the U.S. and the U.K. to become active participants in the ushering in of this great new era if they want to remain global leaders.

This part was less interesting to me. I’m not convinced that the problems of social inequality that he thinks Networks are necessary for cannot be fixed in a Market/State paradigm. All the same, it was nice to see falsifiable predictions from an otherwise highly theoretical work.

What I enjoyed most was his view of history. He sees the four forms as additive. When a society incorporates a new form, it does not discard the old, but builds upon it. Both end up modifying and influencing each other, and the end product is a combined system that incorporates both.

So for instance, the culture of a Tribe bleeds into its later instantiations as a State-run society, and can remain generations after the more visible tribal structures have passed on. And the adoption of free-market economic systems forces a reshaping of the State towards political democracy. He quotes Charles Lindblom:

However poorly the market is harnessed to democratic purposes, only within market-oriented systems does political democracy arise. Not all market-oriented systems are democratic, but every democratic system is also a market-oriented system. Apparently, for reasons not wholly understood, political democracy has been unable to exist except when coupled with the market. An extraordinary proposition, it has so far held without exception.

Ronfeldt explains this as a result of the market form pushing social values towards personal freedom, individuality, representation, and governmental accountability.

***

First connection:

I was reminded of psychologist Jonathan Haidt’s categorization of the different basic types of moral intuitions in The Righteous Mind. These are:

  • Care/Harm: Includes feelings like empathy and compassion. These intuitions are most triggered by experiences of vulnerable children, intense suffering and need, and cruelty.
  • Fairness/Cheating: Includes feelings of reciprocity, injustice, and equality. Triggered by others displaying cooperation or selfishness towards us.
  • Loyalty/Betrayal: Includes feelings of tribalism, unity and kinship. Triggered by involvement in tight groups.
  • Authority/Subversion: Includes feelings of respect for parents, teachers, rulers, and religious leaders, as well as the feelings that this respect is owed. Involved in hierarchical thinking and perceptions of dominance relations.
  • Sanctity/Degradation: Includes feelings of disgust, purity, cleanliness, dirtiness, sacredness, and corruption.
  • Liberty/Oppression: Includes feelings of individualism, freedom, and resentment towards being dominated or oppressed.

Different political ideologies line up very well with different “moral foundations profiles”. Liberals tend to care primarily about the first two categories, Libertarians the last, and Conservatives a roughly equal mix of all six. You can take a questionnaire to see your personal moral profile here.

These categories look like they map really nicely onto the TIMN model as organizing principles for the different forms. Here’s my speculation on how the different social forms engage and capitalize on the different types of intuitions:

Tribes: Loyalty/Betrayal

Institutions: Authority/Subversion

Markets: Liberty/Oppression

Networks: Care/Harm?

The natural next question is what types of social forms would have as organizing principles the values of Fairness/Cheating or Sanctity/Degradation.

Second connection:

Sociologist Robert Nisbet attempted to categorize the different basic patterns of social interactions. He gave five categories: cooperation, conflict, exchange, coercion and conformity. For some reason this categorization seemed very deep to me when I first heard it, and it has stuck with me ever since.

Cooperation involves coordination between individuals that have a shared goal, while exchange involves coordination between individuals that are each motivated by their own self-interest.

Conflict occurs when individuals work against each other, competing for a larger share of rewards, for instance. Coercion is the forced cooperation between individuals with different goals. And conformity involves behavior that matches group expectations.

These categories nicely match the types of social interactions that characterize the different social forms in the TIMN model.

Tribes are a social form dominated by conformity interactions. Identity is tightly bound up with tribal culture, lineage, and adherence to social norms governing mutual defense and aid and who can have children with whom.

The structure of Institutions is quite clearly analogous to coercion, and Markets to exchange and conflict. And by Ronfeldt’s description, Networks seem to be analogous to cooperative interactions.

Third connection:

Scott Alexander makes the point that democracies have several unique features that set them apart from previous forms of government.

These features all arise from the fact that democracies answer questions of leadership succession by handing them to the people. This is a big deal, for two main reasons:

First, democracies put an upper bound on how terrible a leader can be.

Why? The basic justification is that while the people don’t get to select the absolute best choice for leadership, they do get to select against the worst choices.

(First-past-the-post voting is terrible enough that I actually don’t know if this is true in general. But it is in contrast to monarchical forms of government, which involve no feedback from the population, so the point stands.)

When the king of a hereditary monarchy dies and the throne passes to his oldest son, there is no formally recognized way to guard against the possibility that the kid is literally the next Hitler. At best, the population can just try to throw him out when they’ve had enough and let whoever wins out in the resulting scramble for power take over.

Second, democracy provides a great Schelling point for leadership succession.

(A Schelling point is a decision that would be arbitrary except that it is made on the basis of an expectation that everybody else will make the same decision. So if you’re supposed to meet a stranger in NYC, and you don’t know where, you’ll choose to go to Grand Central Terminal, and so will they. Not because of any psychic communication between the two of you, nor any sort of official designation of Grand Central Terminal as the One True Stranger Meeting Spot, but because you each expect the other to be there. Thus Grand Central Terminal is a geographical Schelling point for NYC.)

The Schelling point for leadership succession in a hereditary monarchy is royal blood. Which is to say that when the leader dies, everybody looks for the person (usually the man) with the strongest claim of royal blood, and crowns them.

But who determines if somebody’s blood is truly royal? What do you do if some other family decides that they have the truly royal blood? What if two people have equally royal blood?

The Schelling point for leadership succession in a theocratic monarchy like Ancient Egypt is the Official Word Of God.

Who determines which individual God actually wants in charge? What if two people both claim that God chose them to rule?

The problem is that these legitimacy claims are founded on fictions. There is no quality of royal-ness to blood, and there is no God to choose rulers. In a democracy, the Schelling point for succession is a real thing that is easily verifiable: the popular vote.

Everybody agrees who the correct leader is, because everybody can just look at the election results. And if somebody disagrees on who the correct leader is, then they have a clear action to take: mobilize voters to change their mind by the next election.

Thus democracy plays the dual role of ending succession squabbles and providing a natural pressure valve for those dissatisfied with the current leader.

These differences in structure seem really significant. I think that I would want to break apart Ronfeldt’s Institution category and replace it with two social forms: the Hierarchy and the Democracy.

A Hierarchy would be a social structure in which there is a strict top-down system of authority, and where the population at large does not have a formal role in determining who makes it at the top.

A Democracy also has a top-down system of power, but now also has a formal mechanism for feedback from the population to the top levels of power (e.g. an election). (I’d like a word for this that does not have as political a connotation, but failed to think of any)

***

The TIMN framework naturally leads to a story of the gradual progress of humans in our joint project of perfecting civilization. At each stage in history, new social structures arise to fix the failings of the old, and in this way forward-progress is made.

Overall, I think that the framework offers a potentially useful way of assessing different political and economic systems, by looking at the ways in which they utilize the strengths of these four structures and how they fall victim to the weaknesses.

Race, Ethnicity, and Labels

(This post is me becoming curious about the variety of different opinions on racial labels, spending far too many hours researching the topic, and writing up what I find.)

One thing that I find interesting is that basically every minority ethnic and racial group in the United States has constantly dealt with terminological disputes about their proper group name.

One possible explanation for this constant turn-over was given by disability rights activist Evan Kemp, who wrote:

As long as a group is ostracized or otherwise demeaned, whatever name is used to designate that group will eventually take on a demeaning flavor and have to be replaced. The designation will keep changing every generation or so until the group is integrated into society. Whatever name is in vogue at the point of social acceptance will be the lasting one.

If this is the right explanation, then maybe we’d be able to measure the relative degrees of discrimination faced by different groups on the basis of their ‘terminological velocity’ – how quickly the name for the group turns over.

Regardless, looking into these issues revealed a bunch of interesting history and weird trivia. So here goes!

***

Native American vs American Indian

A 1995 Census Bureau survey of American Indians found that 49% preferred the term ‘American Indian’ and 37% preferred ‘Native American’. I couldn’t find any more recent polls on this question.

This may seem unusual if you don’t know much about American Indian culture and history. It’s a bit confusing to me; as somebody with a parent born in India, I’m pretty sure that I’m an American Indian.

Why is a term that derives from the geographical error of early European colonists the most favored of all available terms? And why not ‘Native American’? From an outside perspective, ‘Native American’ feels like a respectful term, one that pays homage to the history of American Indians as the original residents of the Americas.

It turns out the answer to these questions comes from a quick look at the history of these terms, which is super fascinating.

‘Native American’ was a term originally used by WASPs in the 1850s to differentiate themselves from Catholic Irish and German immigrants. The anti-immigrant Know-Nothing Party, whose supporters were known for violent riots in Catholic neighborhoods, burning down churches, and tarring and feathering of Catholic priests, was originally known as the Native American Party.

The term fell out of use for a century upon the rise of the anti-slavery movement and subsequent collapse of the Know-Nothings. This time gap probably indicates that the early usage of the term has little current relevance to associations with the term, but I included it anyway. I find it darkly amusing to imagine white anti-Catholic nativists running around calling themselves Native Americans.

The term ‘Native American’ was revived in the civil rights era by anthropologists eager for historical accuracy and disassociation from the negative stereotypes associated with ‘Indian’. This was adopted widely by government agencies, and apparently in doing so picked up a negative connotation.

Prominent Lakota activist Russell Means described the term as “a generic government term used to describe all the indigenous prisoners of the United States.” Some American Indians emphasize a sense of lack of ownership over the term, and feel that it was a “colonial term” given to them by outsiders.

‘American Indian’ is apparently more widely favored. Widespread acceptance of this term dates back to 1968 and the rise of the American Indian Movement (AIM). At a UN conference in 1977, AIM’s International Indian Treaty Council urged collective identification of American Indians with the term.

One argument made for the term is that while the names of other races in America have ‘American’ as their second word (e.g. ‘Asian American’, ‘Arab American’), ‘American Indian’ would have American as its first word, giving American Indians a special distinction. I’m serious, this was a real argument.

‘American Indian’ is etymologically close to ‘Indian’, which dates back to early European colonists that systematically drove American Indian populations out of their homes. Some note derogatory stereotypes from old Western movies associated with ‘cowboys and Indians’, and feel that the association carries over to ‘American Indian’.

Other American Indians say that they would prefer to be identified by their specific tribal nation, feeling that terms like ‘Native American’ and ‘Indian American’ lump all tribes together and ignore important differences in heritage. The problem with this is that there are 562 federally recognized distinct tribes, making this cognitively unfeasible. It’s also just useful to have a term to talk about these tribes in the aggregate.

Interestingly, when I was researching this, I found a Washington Post poll in 2016 that reported that 73% of American Indians felt that the word ‘Redskin’ was not disrespectful, and 80% would not be offended if referred to as a Redskin. A 2004 poll found similar results, with 90% of American Indians saying that the name of the Washington Redskins didn’t bother them. This is significantly more than the percentage of all Americans that don’t find the name offensive, which is around 68%.

I tried to find good arguments against these poll results, and could only find some groundless conspiracy theories suggesting the polls had been infiltrated by white people claiming to be American Indians. In the absence of alternative explanations, I really don’t know what to make of this, besides that it suggests a complete disconnect between American Indian activists and the general American Indian population.

Black vs African American

The 2010 United States Census included “Black, African Am., or Negro” as one of its racial identifications. In response to many complaints, and to black Americans refusing to select the term, the Bureau has since switched to the shorter ‘Black or African American’.

Something that caught my eye was their explanation of this choice, which was that apparently previous research had shown that if polls didn’t allow self-identification as ‘Negro’, a significant number of older African Americans would take the time to write it in under the ‘some other race’ category.

The term ‘Negro’ became popular in the 1920s as a polite term to replace ‘Colored’, which was in turn originally a polite alternative to ‘Nigger’ in the 1900s. An actual argument made for adopting ‘Negro’ was that it was easier to pluralize than ‘Colored’, which required the addition of another word (‘Negroes’ vs ‘Colored people’). Bizarre, but okay!

In 1890, the US Census used a four-way classification: ‘Black’ for those with at least 3/4 black blood, ‘mulatto’ for those with 3/8 to 5/8, ‘quadroon’ for 1/4, and ‘octoroon’ for 1/8. Unsurprisingly, this did not catch on.

‘Negro’ was simpler, and quickly became the politically correct and respectful term, used by black leaders like Booker T Washington, Marcus Garvey, W.E.B. Du Bois, and later Martin Luther King Jr. Many black organizations replaced ‘Colored’ in their title with ‘Negro’, with the notable exception of the NAACP.

During the civil rights era, radical and militant black organizations began to attack the term, claiming that it was associated with the history of slavery and racism. ‘Black’ became a term that identified you with radical progressive blacks (think of slogans like ‘Black Power’ and ‘Black is beautiful’), while ‘Negro’ was associated with the status quo and the old guard.

The last US president to use the term ‘Negro’ was Lyndon Johnson, and by 1980 there was a large majority of African Americans in favor of ‘Black’. And of course, in modern times the term ‘Negro’ is commonly perceived as a racial slur. Obama banned the term from usage in federal law in 2016.

Meanwhile ‘Black’ became the standard term employed in surveys and used by black organizations, and having gained popular acceptance, lost its radical connections.

(Quick aside: This looks to me like an instance of what’s called semantic bleaching, where a word weakens in meaning as it increases in usage. My favorite example of this is the phrase ‘God be with you’, which over the years lost its religious connotation and became… ‘goodbye’!)

This lasted until around 1990, when Jesse Jackson announced that ‘Black’ was a term disconnected from cultural heritage, and declared a switch to ‘African American’.

While some organizations changed their names and declared their support for ‘African American’, this didn’t gather the same level of universal acceptance as ‘Black’ had in the 1960s, or indeed ‘Negro’ in the 1900s. The 1995 Census found that 44% of Black Americans still preferred ‘Black’, and only 28% preferred ‘African American’. Some argued that modern African Americans have created a culture that is not tied to Africa, and indeed that there is no coherent concept of a ‘single African culture’.

One paper I read attributed Jackson’s lack of success in making ‘African American’ the universally used term to a missing confrontational intensity that existed in the Black Power movement. For instance, when Malcolm X and other radical black activists challenged the term ‘Negro’, they attacked it harshly and made its usage a social taboo.

Jackson may have lacked the political power to sufficiently mobilize Black Americans. A 2007 Gallup poll found that 61% of Black Americans didn’t care about what term they were described by, reflecting a high level of apathy towards his cause. A 2005 paper found that Black Americans were nearly equally divided between the two.

Currently there’s an uneasy shifting balance between these two terms, where both are acceptable, though sometimes one becomes more acceptable than the other. In my personal experience, I recall a several-year period where I perceived that the term “Black” was becoming increasingly politically incorrect. I later had (and currently have) a sense that this political incorrectness around the term had backed off, keeping it in public acceptance.

Hispanic vs Latino

Americans who trace their roots to Spanish-speaking countries were grouped together by the US government under the umbrella term ‘Hispanic’ in the 1970s. ‘Latino’ later became popular as well, and was first included in the 2000 Census. These terms are defined as synonyms by the U.S. Census Bureau.

Polls indicate that around half of Latinos don’t like either term, and prefer to be identified with their country of origin. When forced to choose, more than twice as many prefer ‘Hispanic’ over ‘Latino’. (Interestingly, Latino friends of mine tell me that they and their Latino friends and family overwhelmingly prefer ‘Latino’ over ‘Hispanic’, which points to some sort of selection bias around me that I don’t understand.)

The federal government officially defines ‘Latino’ not as a race, but an ethnicity. Latinos apparently disagree – 56% claim that it is both a race and an ethnicity, and 11% that it is a race. Only 19% agree with the official definition!

Both terms ‘Latino’ and ‘Hispanic’ are fairly unique to the United States. Terms that arose from Latino social movements like ‘Chicano’ have never won out among Latinos. This might be in part because of the lack of a strong shared identity – about 70% of Latinos think that there is not a common culture between American Latinos, and instead see a loose group composed of many individual cultures. There’s also a relevant lack of widely-known Latino activists and clear representatives of Latino people to champion these terms.

An older attempt to de-gender the term ‘Latino’ is ‘Latin@’, which dates to the 1990s. This was apparently not inclusive enough, as the ‘@’ represents only ‘o’ and ‘a’ and not those who identify with neither. More recently, social justice activists have tried to encourage the adoption of the term ‘Latinx’. This term breaks with the gendered nature of the Spanish language and hardly rolls off the tongue, but has become relatively popular with LGBT activists.

Asian American vs Oriental

The term ‘Oriental’ was prohibited in the same bill in which Obama prohibited the use of the term ‘Negro’ in federal documents. There is a fairly strong consensus at this point that ‘Asian American’ is the appropriate term (though there remains some academic debate about this term).

‘Oriental’ is an old old term, dating back to the late Roman Empire. Over its history, the geographical region it referred to shifted constantly eastward (ad orientalem), from Morocco (yes, at some point it might have been proper to refer to Moroccans as Oriental!) to Egypt and the Levant to India and finally to East and Southeast Asia by the mid-1900s.

The term picked up baggage in the U.S. during the racist campaigns against Asian Americans in the late 1800s and early 1900s, and by now is fairly universally considered a pejorative term.

It was replaced by the term ‘Asian American’, which began to enter into popular use in the 1960s. The US Census definition of ‘Asian American’ still includes Indians, which feels really really wrong to me. I tried and failed to find public opinion polls on how many people feel comfortable with the term ‘Asian’ being applied to Indians.

And others…

The terminological situation of the Roma people is uniquely terrible. They are mostly referred to by the pejorative term ‘Gypsy’, which is essentially synonymous with ‘dangerous thieving wanderer’. The term ‘gypped’, meaning cheated or swindled, also has its origins in this term. They are also commonly referred to by the term ‘Tigan’, another pejorative term that derives from the Greek word for ‘untouchable’.

In a 2013 BBC TV interview, former Romanian prime minister Victor Ponta took care to distinguish Romanians from the Roma, noting that Romanians want to distance themselves from the Roma due to the negative connotations of the similar term.

And in 2010, the Romanian government supported a constitutional amendment legally renaming the Roma to the pejorative ‘Tigan’. (This law was later rejected by the Romanian Senate) Another such amendment was proposed in 2013, this time hoping to ban the self-identification of Roma in Romania as Romanians.

Jewish people are also in an unusual terminological situation. The term ‘Israelite’ was apparently commonly used until the formation of Israel in 1948. While ‘Jew’ is the only remaining commonly used term, there are problems with it. From The American Heritage Dictionary:

It is widely recognized that the attributive use of the word Jew, in phrases such as Jew lawyer or Jew ethics, is both vulgar and highly offensive. In such contexts Jewish is the only acceptable possibility. Some people, however, have become so wary of this construction that they have extended the stigma to any use of Jew as a noun, a practice that carries risks of its own. In a sentence such as There are several Jews on the council, which is unobjectionable, the substitution of a circumlocution like Jewish people or persons of Jewish background may in itself cause offense for seeming to imply that Jew has a negative connotation when used as a noun.

***

All in all, it looks like a really complicated mixture of factors ends up determining how this part of the language evolves.

On the one hand there are syntactic features (like ‘American Indian’ having ‘Indian’ on the right as opposed to the standard left, or ‘Colored’ having a complicated pluralization compared to ‘Negro’).

And on the other hand there are semantic features like the ancient and automatic negative associations with words like ‘dark’ and ‘black’, or the colonial associations tied to the term ‘Indian’.

There are contemporary factors like the existence of a strong shared racial/ethnic identity, the presence of a charismatic racial/ethnic leader, and whether or not the introducer of a new term for a group is an insider or outsider to the group.

Then there are phenomena like semantic bleaching, whereby terms that enter common use have their meaning diluted and weakened, and concept creep, whereby words change their meaning over long stretches of history by altered patterns of usage.

And finally there are longer-term historical effects like the gradual inundation of language with dark undertones over decades of racism and discriminatory treatment.

Is quantum mechanics simpler than classical physics?

I want to make a few very fundamental comparisons between classical and quantum mechanics. I’ll be assuming a lot of background in this particular post to prevent it from getting uncontrollably long, but am planning on writing a series on quantum mechanics at some point.

***

Let’s assume that the universe consists of N simple point particles (where N is an ungodly large number), each interacting with the others in complicated ways according to their relative positions. These positions are written as x_1, x_2, …, x_N.

The classical description for this simple universe makes each position a function of time, and gives the following set of N equations of motion, one for each particle:

F_k(x_1, x_2, …, x_N) = m_k · ∂²x_k/∂t²

Each force function F_k will be a horribly messy nonlinear function of the positions of all the particles in the universe. These functions encode the details of all of the interactions taking place between the particles.

Analytically solving this system is completely hopeless – it’s a set of N separate equations, each one a highly nonlinear second order differential equation. You couldn’t solve any of them on their own, and on top of that, they are tightly entangled together, making it impossible to solve any one without also solving all the others.

So if you thought that Newton’s equation F = ma was simple, think again!

Compare this to how quantum mechanics describes our universe. The state of the universe is described by a function Ψ(x_1, x_2, …, x_N, t). This function changes over time according to the Schrödinger equation:

∂_t Ψ = -i·H[Ψ]

H is a differential operator that is a complicated function of all of the positions of all the particles in the universe. It encodes the information about particle interactions in the same way that the force functions did in classical mechanics.

I claim that Schrödinger’s equation is infinitely easier to solve than Newton’s equation. In fact, by the end of this post I will write out the exact solution for the wave function of the entire universe.

At first glance, you can notice a few features of the equation that make it look potentially simpler than the classical equation. For one, there’s only one single equation, instead of N entangled equations.

Also, the equation is only first order in time derivatives, while Newton’s equation is second order in time derivatives. This is extremely important. Moving from a second order differential equation down to a first order one is a huge deal. For one thing, there’s a simple general solution to first order linear differential equations (an equation of the form ∂_t y = a·y is solved by y(t) = y(0)·e^(at), which is exactly the form we’ll see below), and nothing close for second order linear differential equations.

Unfortunately… Schrödinger’s equation, just like Newton’s, is horribly complicated, because of the presence of H: it is a partial differential equation involving the coordinates of every particle and every interaction between them. If we can’t find a way to simplify this immensely complex operator, then we’re probably stuck.

But quantum mechanics hands us exactly what we need: two magical facts about the universe that allow us to reduce Schrödinger’s equation to a set of simple first-order linear differential equations.

First: it guarantees us that there exists a set of functions φ_E(x_1, x_2, …, x_N) such that:

H[φ_E] = E · φ_E

E is an ordinary real number, and its physical meaning is the energy of the entire universe. The set of values of E is the set of allowed energies for the universe. And the functions φ_E(x_1, x_2, …, x_N) are the wave functions that correspond to each allowed energy.

Second: it tells us that no matter what complicated state our universe is in, we can express it as a weighted sum over these functions:

Ψ = ∑_E a_E · φ_E

With these two facts, we’re basically omniscient.

Since Ψ is a sum of all the different functions φ_E, if we want to know how Ψ changes with time, we can just see how each φ_E changes with time.

How does each φ_E change with time? We just use the Schrödinger equation:

∂_t φ_E = -i · H[φ_E]
        = -iE · φ_E

And we end up with a first order linear differential equation. We can write down the solution right away:

φ_E(x_1, x_2, …, x_N, t) = φ_E(x_1, x_2, …, x_N) · e^(-iEt)

And just like that, we can write down the wave function of the entire universe:

Ψ(x_1, x_2, …, x_N, t) = ∑_E a_E · φ_E(x_1, x_2, …, x_N, t)
                       = ∑_E a_E · φ_E(x_1, x_2, …, x_N) · e^(-iEt)

Hand me the initial conditions of the universe, and I can hand you back its exact and complete future according to quantum mechanics.
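To make this recipe concrete, here is a minimal numerical sketch (in Python, using NumPy and SciPy). The 4-state “universe” and its Hermitian matrix H are arbitrary stand-ins that I made up for illustration, nothing physical, but the procedure is exactly the one described above: find the allowed energies and energy states, expand the initial state in them, and attach a phase factor to each.

```python
import numpy as np
from scipy.linalg import expm

# A made-up 4-state "universe": H is an arbitrary Hermitian matrix standing in
# for the real Hamiltonian, and psi0 is an arbitrary normalized initial state.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2                        # Hermitian, so its eigenvalues (energies) are real
psi0 = rng.normal(size=4) + 1j * rng.normal(size=4)
psi0 /= np.linalg.norm(psi0)

# Magical fact 1: H has energy eigenstates, H @ phi_E = E * phi_E.
energies, phi = np.linalg.eigh(H)               # columns of phi are the phi_E

# Magical fact 2: any state is a weighted sum of them: psi0 = sum_E a_E * phi_E.
a = phi.conj().T @ psi0                         # the weights a_E

# The exact solution: psi(t) = sum_E a_E * phi_E * exp(-i E t).
def psi(t):
    return phi @ (a * np.exp(-1j * energies * t))

# Sanity check against brute-force integration of d(psi)/dt = -i H psi.
t = 2.7
print(np.allclose(psi(t), expm(-1j * H * t) @ psi0))   # True
```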

***

Okay, I cheated a little bit. You might have guessed that writing out the exact wave function of the entire universe is not actually doable in a short blog post. The problem can’t be that simple.

But at the same time, everything I said above is actually true, and the final equation I presented really is the correct wave function of the universe. So if the problem must be more complex, where is the complexity hidden away?

The answer is that the complexity is hidden away in the first “magical fact” about allowed energy states.

H[φ_E] = E · φ_E

This eigenvalue equation is in general a second-order partial differential equation in all of the particle coordinates, every bit as nasty as the classical problem. If we actually wanted to expand out Ψ in terms of the different functions φ_E, we’d have to solve this equation.

So there is no free lunch here. But what’s interesting is where the complexity moves when switching from classical mechanics to quantum mechanics.

In classical mechanics, virtually zero effort goes into formalizing the space of states, or talking about what configurations of the universe are allowable. All of the hardness of the problem of solving the laws of physics is packed into the dynamics. That is, it is easy to specify an initial condition of the universe. But describing how that initial condition evolves forward in time is virtually impossible.

By contrast, in quantum mechanics, solving the equation of motion is trivially easy. And all of the complexity has moved to defining the system. If somebody hands you the allowed energy levels and energy functions of the universe at a given moment of time, you can solve the future of the rest of the universe immediately. But actually finding the allowed energy levels and corresponding wave functions is virtually impossible.

***

Let’s get to the strangest (and my favorite) part of this.

If quantum mechanics is an accurate description of the world, then the following must be true:

Ψ(x_1, x_2, …, x_N, 0) = ∑_E a_E · φ_E(x_1, x_2, …, x_N)
implies
Ψ(x_1, x_2, …, x_N, t) = ∑_E a_E · φ_E(x_1, x_2, …, x_N) · e^(-iEt)

This equation has two especially interesting features. First, each term in the sum can be broken down separately into a function of position and a function of time.

And second, the temporal component of each term is an imaginary exponential – a phase factor e^(-iEt).

Let me take a second to explain the significance of this.

In quantum mechanics, physical quantities are invariably found by taking the absolute square of complex quantities. This is why you can have a complex wave function and an equation of motion with an i in it, and still end up with a universe quite free of imaginary numbers.

But when you take the absolute square of e^(-iEt), you end up with e^(-iEt) · e^(+iEt) = 1. What’s important here is that the time dependence seems to fall away.

A way to see this is to notice that y = e^(-ix), when graphed, looks like a point on the unit circle in the complex plane.

[Figure: e^(-ix) plotted as a point on the unit circle in the complex plane]

So e^(-iEt), when graphed, is just a point repeatedly spinning around the unit circle. The larger E is, the faster it spins.

Taking the absolute square of a complex number is the same as finding its squared distance from the origin on the complex plane. And since e^(-iEt) always stays on the unit circle, its absolute square is always 1.

So what this all means is that quantum mechanics tells us that there’s a sense in which our universe is remarkably static. The universe starts off as a superposition of a bunch of possible energy states, each with a particular weight. And it ends up as a sum over the same energy states, with weights of the exact same magnitude, just pointing different directions in the complex plane.

Imagine drawing the universe by drawing out all possible energy states in boxes, and shading these boxes according to how much amplitude is distributed in them. Now we advance time forward by one millisecond. What happens?

Absolutely nothing, according to quantum mechanics. The distribution of shading across the boxes stays the exact same, because the phase factor multiplication does not change the magnitude of the amplitude in each box.
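Here is a tiny sketch of that claim, with made-up energies and weights: the phase of each weight spins around, but the shading |a_E|² of each box never budges.

```python
import numpy as np

energies = np.array([0.5, 1.3, 2.0, 4.7])        # made-up allowed energies E
a = np.array([0.6, 0.3 + 0.4j, -0.5, 0.2j])      # made-up initial weights a_E

for t in (0.0, 0.001, 1.0, 1000.0):
    weights = a * np.exp(-1j * energies * t)     # the weights at time t
    print(t, np.round(np.abs(weights) ** 2, 6))  # |a_E|^2 per box: identical at every t
```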

Given this, we are faced with a bizarre question: if quantum mechanics tells us that the universe is static in this particular way, then why do we see so much change and motion and excitement all around us?

I’ll stop here for you to puzzle over, but I’ve posted an answer here.

Iterated Simpson’s Paradox

Previous: Simpson’s paradox

In the last post, we saw how statistical reasoning can go awry in Simpson’s paradox, and how causal reasoning can rescue us. In this post, we’ll be generalizing the idea behind the paradox and producing arbitrarily complex versions of it.

The main idea behind Simpson’s paradox is that conditioning on an extra variable can sometimes reverse dependencies.

In our example in the last post, we saw that one treatment for kidney stones worked better than another, until we conditioned on the kidney stone’s size. Upon conditioning, the sign of the dependence between treatment and recovery changed, so that the first treatment now looked like it was less effective than the other.

We explained this as a result of a spurious correlation, which we represented with ‘paths of dependence’ like so:

[Diagram: the direct causal path from treatment to recovery, plus the spurious path of dependence running through stone size]

But we can do better than just one reversal! With our understanding of causal models, we are able to generate new reversals by introducing appropriate new variables to condition upon.

Our toy model for this will be a population of sick people, some given a drug and some not (D), and some who recover and some who do not (R). If there are no spurious correlations between D and R, then our diagram is simply:

[Diagram: D → R]

Now suppose that we introduce a spurious correlation through wealth (W). Wealthy people are more likely to get the drug (let’s say that this occurs through a causal intermediary of education level E), and are more likely to recover (we’ll suppose that this occurs through a causal intermediary of nutrition level N).

Now we have the following diagram:

[Diagram: D → R, plus the spurious path D ← E ← W → N → R]

Where there was previously only one path of dependency between D and R, there is now a second. This means that if we observe W, we break the spurious dependency between D and R, and retain the true causal dependence.

[Diagrams: the two paths of dependence between D and R, and the same graph with the spurious path blocked by conditioning on W]

This allows us one possible Simpson’s paradox: by conditioning upon W, we can change the direction of the dependence between D and R.
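Here is a minimal sketch of that reversal. Every probability table below is a made-up assumption chosen for illustration: the drug D slightly lowers the chance of recovery, but wealthy people are both more likely to end up on the drug and more likely to recover, so the drug looks helpful until we condition on W.

```python
from itertools import product

p_W = 0.5                                   # P(wealthy)
p_E_given_W = {1: 0.9, 0: 0.1}              # P(educated | W)
p_D_given_E = {1: 0.9, 0: 0.1}              # P(given drug | E)
p_N_given_W = {1: 0.9, 0: 0.1}              # P(well nourished | W)
def p_R_given_DN(d, n):                     # P(recovers | D, N): the drug truly hurts a little,
    return 0.5 - 0.1 * d + 0.4 * n          # nutrition helps a lot

def bern(p, x):
    return p if x == 1 else 1 - p

# Exact joint distribution over (W, E, D, N, R) -- no sampling noise.
joint = {}
for w, e, d, n, r in product([0, 1], repeat=5):
    joint[(w, e, d, n, r)] = (bern(p_W, w) * bern(p_E_given_W[w], e) *
                              bern(p_D_given_E[e], d) * bern(p_N_given_W[w], n) *
                              bern(p_R_given_DN(d, n), r))

def p_recover(**fixed):
    keys = ('w', 'e', 'd', 'n', 'r')
    num = den = 0.0
    for assignment, p in joint.items():
        vals = dict(zip(keys, assignment))
        if all(vals[k] == v for k, v in fixed.items()):
            den += p
            if vals['r'] == 1:
                num += p
    return num / den

# Marginally, the drug looks helpful...
print(p_recover(d=1), p_recover(d=0))                    # ~0.70 vs ~0.60
# ...but within each wealth class the sign reverses, exposing the true (harmful) effect.
for w in (0, 1):
    print(w, p_recover(d=1, w=w), p_recover(d=0, w=w))   # 0.44 vs 0.54, and 0.76 vs 0.86
```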

But we can do better! Suppose that your education level causally influences your nutrition. This means that we now have three paths of dependency between D and R. This allows us to cause two reversals in dependency: first by conditioning on W and second by conditioning on N.

[Diagrams: the three paths of dependence between D and R; conditioning on W blocks the path through W, and conditioning on N blocks the remaining path through E → N]

And we can keep going! Suppose that education does not cause nutrition, but both education and nutrition causally impact IQ. Now we have three possible reversals. First we condition on W, blocking the top path. Next we condition on I, creating a dependence between E and N (via explaining away). And finally, we condition on N, blocking the path we just opened. Now, to discern the true causal relationship between the drug and recovery, we have two choices: condition on W, or condition on all three W, I, and N.

[Diagrams: the paths of dependence between D and R; conditioning on W blocks the top path, conditioning on I opens a path between E and N by explaining away, and conditioning on N blocks it again]

As might be becoming clear, we can do this arbitrarily many times. For example, here’s a five-step iterated Simpson’s paradox set-up:

[Diagram: a five-step iterated Simpson’s paradox set-up over the variables A, X, B′, X′, and C′]

The direction of dependence switches when you condition on, in this order: A, X, B’, X’, C’. You can trace out the different paths to see how this happens.

Part of the reason that I wanted to talk about the iterated Simpson’s paradox is to show off the power of causal modeling. Imagine that somebody hands you data that indicates that a drug is helpful in the whole population, harmful when you split the population up by wealth levels, helpful when you split it into wealth-IQ classes, and harmful when you split it into wealth-IQ-education classes.

How would you interpret this data? Causal modeling allows you to answer such questions by simply drawing a few diagrams!

Next we’ll move into one of the most significant parts of causal modeling – causal decision theory.

Previous: Simpson’s paradox

Next: Causal decision theory

Causal decision theory

Previous: Iterated Simpson’s Paradox

We’ll now move on into slightly new intellectual territory, that of decision theory.

While what we’ve previously discussed all had to do with questions about the probabilities of events and causal relationships between variables, we will now discuss questions about what the best decision to make in a given context is.

***

Decision theory has two ingredients. The first is a probabilistic model of different possible events that allows an agent to answer questions like “What is the probability that A happens if I do B?” This is, roughly speaking, the agent’s beliefs about the world.

The second ingredient is a utility function U over possible states of the world. This function takes in propositions, and returns the value to a particular agent of that proposition being true. This represents the agent’s values.

So, for instance, if A = “I win a million dollars” and B = “Somebody cuts my ear off”, U(A) will be a large positive number, and U(B) will be a large negative number. For propositions that an agent feels neutral or apathetic about, the utility function assigns a value of 0.

Different decision theories represent different ways of combining a utility function with a probability distribution over world states. Said more intuitively, decision theories are prescriptions for combining your beliefs and your values in order to yield decisions.

A proposition that all competing decision theories agree on is “You should act to maximize your expected utility.” The difference between these different theories, then, is how they think that expected utility should be calculated.

“But this is simple!” you might think. “Simply sum over the value of each consequence, and weight each by its likelihood given a particular action! This will be the expected utility of that action.”

This prescription can be written out as follows:

EU(A) = ∑_C U(C) · P(C | A & K)

Here A is an action, C indexes the different possible world states that you could end up in, and K is the conjunction of all of your background knowledge.

***

While this is quite intuitive, it runs into problems. For instance, suppose that scientists discover a gene G that causes both a greater chance of smoking (S) and a greater chance of developing cancer (C). In addition, suppose that smoking is known to not cause cancer.

[Diagram: G → S and G → C, with no arrow from S to C]

The question is, if you slightly prefer to smoke, then should you do so?

The most common response is that yes, you should do so. Either you have the cancer-causing gene or you don’t. If you do have the gene, then you’re already likely to develop cancer, and smoking won’t do anything to increase that chance.

And if you don’t have the gene, then you already probably won’t develop cancer, and smoking again doesn’t make it any more likely. So regardless of if you have the gene or not, smoking does not affect your chances of getting cancer. All it does is give you the little utility boost of getting to smoke.

But our expected utility formula given above disagrees. It sees that you are almost certain to get cancer if you smoke, and almost certain not to if you don’t. And this means that the expected utility of smoking includes the utility of cancer, which we’ll suppose to be massively negative.

Let’s do the calculation explicitly:

EU(S) = U(C & S) * P(C | S) + U(~C & S) * P(~C | S)
      ≈ U(C & S) << 0
EU(~S) = U(~S & C) * P(C | ~S) + U(~S & ~C) * P(~C | ~S)
      ≈ U(~S & ~C) ≈ 0

Therefore we find that EU(~S) >> EU(S), so our expected utility formula will tell us to avoid smoking.

The problem here is evidently that the expected utility function is taking into account not just the causal effects of your actions, but the spurious correlations as well.

The standard way that decision theory deals with this is to modify the expected utility function, switching from ordinary conditional probabilities to causal conditional probabilities.

EU(A) = ∑_C U(C) · P(C | do A & K)

You can calculate these causal conditional probabilities by intervening on S, which corresponds to removing all its incoming arrows.

[Diagram: the same graph with the arrow from G to S removed]

Now our expected utility function exactly mirrors our earlier argument – whether or not we smoke has no impact on our chance of getting cancer, so we might as well smoke.

Calculating this explicitly:

EU(S) = U(S & C) * P(C | do S) + U(S & ~C) * P(~C | do S)
      = U(S & C) * P(C) + U(S & ~C) * P(~C)
EU(~S) = U(~S & C) * P(C | do ~S) + U(~S & ~C) * P(~C | do ~S)
      = U(~S & C) * P(C) + U(~S & ~C) * P(~C)

Looking closely at these values, we can see that EU(S) must be greater than EU(~S), regardless of the value of P(C): since you slightly prefer smoking, U(S & C) > U(~S & C) and U(S & ~C) > U(~S & ~C), and the probabilities weighting them are the same in both expressions.
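Here is a small sketch of both calculations, with made-up numbers: P(G) = 1/2, the gene pushes both smoking and cancer to 90%, smoking is worth +1 utility, and cancer is worth -100. The evidential formula says don’t smoke; the causal formula says go ahead.

```python
p_G = 0.5                               # P(have the gene)
p_S_given_G = {1: 0.9, 0: 0.1}          # P(smoke | G): the gene makes you likely to smoke
p_C_given_G = {1: 0.9, 0: 0.1}          # P(cancer | G): only the gene affects cancer

def utility(smoke, cancer):
    return (1 if smoke else 0) + (-100 if cancer else 0)

def p_C_given_S(smoke):
    # Evidential probability: smoking is evidence about the gene.
    p_s_given_g = {g: p_S_given_G[g] if smoke else 1 - p_S_given_G[g] for g in (0, 1)}
    p_s = p_s_given_g[1] * p_G + p_s_given_g[0] * (1 - p_G)
    p_g_given_s = p_s_given_g[1] * p_G / p_s
    return p_C_given_G[1] * p_g_given_s + p_C_given_G[0] * (1 - p_g_given_s)

def p_C_given_do_S(smoke):
    # Causal probability: intervening on S cuts the G -> S arrow,
    # so your choice carries no information about the gene.
    return p_C_given_G[1] * p_G + p_C_given_G[0] * (1 - p_G)

def expected_utility(smoke, p_cancer):
    return (utility(smoke, True) * p_cancer(smoke) +
            utility(smoke, False) * (1 - p_cancer(smoke)))

print("EDT:", expected_utility(True, p_C_given_S), expected_utility(False, p_C_given_S))
# EDT: -81.0 vs -18.0  -> don't smoke
print("CDT:", expected_utility(True, p_C_given_do_S), expected_utility(False, p_C_given_do_S))
# CDT: -49.0 vs -50.0  -> smoke
```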

***

The first expected utility formula that we wrote down represents the branch of decision theory called evidential decision theory. The second is what is called causal decision theory.

We can roughly describe the difference between them like this: evidential decision theory evaluates the consequences of your decisions as if it were observing your decisions from the outside, while causal decision theory evaluates the consequences of your decisions as if it were determining those decisions.

EDT treats your decisions as just another event out in the world, while CDT treats your decisions like causal interventions.

Perhaps you think that the choice between these is obvious. But Newcomb’s problem is a famous thought experiment that splits people along these lines and challenges both theories. I’ve written about it here, but for now will leave decision theory for new topics.

Previous: Iterated Simpson’s Paradox

Next: Causality for philosophers

Free will and decision theory

This post is about one of the things that I’ve been recently feeling confused about.

In a previous post, I described different decision theories as different algorithms for calculating expected utility. So for instance, the difference between an evidential decision theorist and a causal decision theorist can be expressed in the following way:

EDT:  EU(A) = ∑_C U(C) · P(C | A)
CDT:  EU(A) = ∑_C U(C) · P(C | do A)

What I am confused about is that each decision theory involves a choice to designate some variables in the universe as “actions”, and all the others as “consequences.” I’m having trouble making a principled rule that tells us why some things can be considered actions and others not, without resorting to free will talk.

So for example, consider the following setup:

There’s a gene G in some humans that causes them to have strong desires for candy (D). This gene also causes low blood sugar (B) via a separate mechanism. Eating lots of candy (E) causes increased blood sugar. And finally, people have self-control (S), which helps them not eat candy, even if they really desire it.

We can represent all of these relationships in the following diagram.

[Diagram: G → D, G → B, D → E, S → E, E → B]

Now we can compare how EDT and CDT will decide on what to do.

If the evidential decision theorist looks at the expected utility of eating candy vs not eating candy, they’ll find both a negative dependence (eating candy makes a low blood sugar less likely), and a positive dependence (eating candy makes it more likely that you have the gene, which makes it more likely that you have a low blood sugar).

Let’s suppose that the positive dependence outweighs the negative dependence, so that EDT ends up seeing that eating candy makes it overall more likely that you have a low blood sugar.

P(B | E) > P(B)

What does the causal decision theorist calculate? Well, they look at the causal conditional probability P(B | do E). In other words, they calculate their probabilities according to the following diagram.

[Diagram: the same graph with the arrows into E (from D and S) removed]

Now they’ll see only a single dependence between eating candy (E) and having a low blood sugar (B) – the direct causal dependence. Thus, they end up thinking that eating candy makes them less likely to have a low blood sugar.

P(B | do E) < P(B)

This difference in how they calculate probabilities may lead them to behave differently. So, for instance, if they both value having a low blood sugar much more than eating candy, then the evidential decision theorist will eat the candy, and the causal decision theorist will not.

Okay, fine. This all makes sense. The problem with this is, both of them decided to make their decision on the basis of what value of E maximizes expected utility. But this was not their only choice!

They could instead have said, “Look, whether or not I actually eat the candy is not under my direct control. That is, the actual movement of my hand to the candy bar and the subsequent chewing and swallowing. What I’m controlling in this process is my brain state before and as I decide to eat the candy. In other words, what I can directly vary is the value of S – whether or not the self-controlled part of my mind tells me to eat the candy or not. The value of E that ends up actually obtaining is then a result of my choice of the value of S.”

If they had thought this way, then instead of calculating EU(E) and EU(~E), they would calculate EU(S) and EU(~S), and go with whichever one maximizes expected utility.

But now we get a different answer than before!

In particular, CDT and EDT are now looking at the same diagram, because when the causal decision theorist intervenes on the value of S, there are no causal arrows for them to break. This means that they calculate the same probabilities.

P(B | S) = P(B | do S)

And thus get the same expected utility values, resulting in them behaving the same way.
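Here is a rough sketch of the candy model with made-up probability tables, just to make both points concrete: treating E as the action, conditioning and intervening give different probabilities for B; treating S (which has no incoming arrows) as the action, they give exactly the same ones.

```python
from itertools import product

p_G, p_S = 0.5, 0.5
p_D_given_G = {1: 0.9, 0: 0.1}                              # P(D=1 | G)
p_E_given_DS = {(1, 1): 0.4, (1, 0): 0.95,
                (0, 1): 0.05, (0, 0): 0.1}                  # P(E=1 | D, S)
p_B_given_GE = {(1, 1): 0.85, (1, 0): 0.9,
                (0, 1): 0.1, (0, 0): 0.15}                  # P(B=1 | G, E)

def bern(p, x):
    # P(X = x) for a binary variable with P(X = 1) = p
    return p if x == 1 else 1 - p

def joint(do=None):
    # Joint over (G, S, D, E, B). Intervening on a variable (graph surgery)
    # clamps it to the given value and drops its own probability table.
    do = do or {}
    table = {}
    for g, s, d, e, b in product([0, 1], repeat=5):
        vals = {'g': g, 's': s, 'd': d, 'e': e, 'b': b}
        if any(vals[v] != x for v, x in do.items()):
            table[(g, s, d, e, b)] = 0.0
            continue
        p = 1.0
        if 'g' not in do: p *= bern(p_G, g)
        if 's' not in do: p *= bern(p_S, s)
        if 'd' not in do: p *= bern(p_D_given_G[g], d)
        if 'e' not in do: p *= bern(p_E_given_DS[(d, s)], e)
        if 'b' not in do: p *= bern(p_B_given_GE[(g, e)], b)
        table[(g, s, d, e, b)] = p
    return table

def p_B1(table, **fixed):
    # P(B = 1 | fixed) computed from an explicit joint table.
    num = den = 0.0
    for (g, s, d, e, b), p in table.items():
        vals = {'g': g, 's': s, 'd': d, 'e': e, 'b': b}
        if all(vals[v] == x for v, x in fixed.items()):
            den += p
            if b == 1:
                num += p
    return num / den

obs = joint()
# EDT's probabilities: eating candy is *evidence* of the gene, so it predicts low blood sugar.
print(p_B1(obs, e=1), p_B1(obs, e=0))                                # ~0.72 vs ~0.38
# CDT's probabilities: intervening on E cuts the arrows from D and S into E.
print(p_B1(joint(do={'e': 1}), e=1), p_B1(joint(do={'e': 0}), e=0))  # ~0.48 vs ~0.53
# With S as the action there are no incoming arrows to cut: seeing S = intervening on S.
print(p_B1(obs, s=1), p_B1(joint(do={'s': 1}), s=1))                 # identical
```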

Furthermore, somebody else might argue “No, don’t be silly. We don’t only have control over S, we have control over both S, and E.” This corresponds to varying both S and E in our expected utility calculation, and choosing the optimal values. That is, they choose the actions that correspond to the max of the set { EU(S, E), EU(S, ~E), EU(~S, E), EU(~S, ~E) }.

Another person might say “Yes, I’m in control of S. But I’m also in control of D! That is, if I try really hard, I can make myself not desire things that I previously desired.” This person will vary S and D, and choose that which optimizes expected utility.

Another person will claim that they are in control of S, D, and E, and their algorithm will look at all eight combinations of these three values.

Somebody else might say that they have partial control over D. Another person might claim that they can mentally affect their blood sugar levels, so that B should be directly included in their set of “actions” that they use to calculate EU!

And all of these people will, in general, get different answers.

***

Some of these possible choices of the “set of actions” are clearly wrong. For instance, a person that says that they can by introspection change the value of G, editing out the gene in all of their cells, is deluded.

But I’m not sure how to make a principled judgment as to whether or not a person should calculate expected utilities varying S and D, varying just S, varying just E, and other plausible choices.

What’s worse, I’m not exactly sure how to rigorously justify why some variables are “plausible choices” for actions, and others not.

What’s even worse, when I try to make these types of principled judgments, my thinking naturally seems to end up relying on free-will-type ideas. So we want to say that we are actually in control of S, and in a sense we can’t really freely choose the value of D, because it is determined by our genes.

But if we extend this reasoning to its extreme conclusion, we end up saying that we can’t control any of the values of the variables, as they are all the determined results of factors that are out of our control.

If somebody hands me a causal diagram and tells me which variables they are “in control of”, I can tell them what CDT recommends them to do and what EDT recommends them to do.

But if I am just handed the causal diagram by itself, it seems that I am required to make some judgments about what variables are under the “free control” of the agent in question.

One potential way out of this is to say that variable X is under the control of agent A if, when they decide that they want to do X, then X happens. That is, X is an ‘action variable’ if you can always trace a direct link between the event in the brain of A of ‘deciding to do X’ and the actual occurrence of X.

Two problems that I see with this are (1) that this seems like it might be too strong of a requirement, and (2) that this seems to rely on a starting assumption that the event of ‘deciding to do X’ is an action variable.

On (1): we might want to say that I am “in control” of my desire for candy, even if my decision to diminish it is only sometimes effectual. Do we say that I am only in control of my desire for candy in those exact instances when I actually succeed in determining its value? How about the cases when my decision to desire candy lines up with whether or not I desire candy, but purely by coincidence? For instance, somebody walking around constantly “deciding” to keep the moon in orbit around the Earth is not in “free control” of the moon’s orbit, but this way of thinking seems to imply that they are.

And on (2): Procedurally, this method involves introducing a new variable (“Decides X”), and seeing whether or not it empirically leads to X. After all, if the part of your brain that decides X is completely out of your control, then it makes as much sense to say that you can control X as to say that you can control the moon’s orbit. But then we have a new question, about how much this decision is under your control.  There’s a circularity here.

We can determine if “Decides X” is a proper action variable by imagining a new variable “Decides (Decides X)”, and seeing if it actually is successful at determining the value of “Decides X”. And then, if somebody asks us how we know that “Decides (Decides X)” is an action variable, we look for a variable “Decides (Decides (Decides X))”. Et cetera.

How can we figure our way out of this mess?

Simpson’s paradox

Previous: Screening off and explaining away

A look at admission statistics at a college reveals that women are less likely to be admitted to graduate programs than men. A closer investigation reveals that, when the data is broken down by department, women are in fact more likely to be admitted than men. Does this sound impossible to you? It happened at UC Berkeley in 1973.

When two treatments are tested on a group of patients with kidney stones, Treatment A turns out to lead to worse recovery rates than Treatment B. But when the patients are divided according to the size of their kidney stone, it turns out that no matter how large their kidney stone, Treatment A always does better than Treatment B. Is this a logical contradiction? Nope, it happened in 1986!

What’s going on here? How can we make sense of this apparently inconsistent data? And most importantly, what conclusions do we draw? Is Berkeley biased against women or men? Is Treatment A actually more effective or less effective than Treatment B?

In this post, we’ll apply what we’ve learned about causal modeling to be able to answer these questions.

***

Quine gave the following categorization of types of paradoxes: veridical paradoxes (those that seem wrong but are actually correct), falsidical paradoxes (those that seem wrong and actually are wrong), and antinomies (those that are premised on common forms of reasoning and end up deriving a contradiction).

Simpson’s paradox is in the first category. While it seems impossible, it actually is possible, and it happens all the time. Our first task is to explain away the apparent falsity of the paradox.

Let’s look at some actual data on the recovery rates for different treatments of kidney stones.

                 Treatment A         Treatment B
All patients     78% (273/350)       83% (289/350)

The percentages represent the proportion of patients that recovered, out of all those that were given the particular treatment. So 273 patients recovered out of the 350 patients given Treatment A, giving us 78%. And 289 patients recovered out of the 350 patients given Treatment B, giving 83%.

At this point we’d be tempted to proclaim that B is the better treatment. But if we now break down the data and divide up the patients by kidney stone size, we see:

                 Treatment A         Treatment B
Small stones     93% (81/87)         87% (234/270)
Large stones     73% (192/263)       69% (55/80)

And here the paradoxical conclusion falls out! If you have small stones, Treatment A looks better for you. And if you have large stones, Treatment A looks better for you. So no matter what size kidney stones you have, Treatment A is better!

And yet, amongst all patients, Treatment B has a higher recovery rate.

Small stones: A better than B
Large stones: A better than B
All sizes: B better than A

I encourage you to check out the numbers for yourself, in case you still don’t believe this.

***

The simplest explanation for what’s going on here is that we are treating conditional probabilities like they are joint probabilities. Let’s look again at our table, and express the meaning of the different percentages more precisely.

                 Treatment A                                    Treatment B
Small stones     P(Recovery | Small stones & Treatment A)       P(Recovery | Small stones & Treatment B)
Large stones     P(Recovery | Large stones & Treatment A)       P(Recovery | Large stones & Treatment B)
Everybody        P(Recovery | Treatment A)                      P(Recovery | Treatment B)

Our paradoxical result is the following:

P(Recovery | Small stones & Treatment A) > P(Recovery | Small stones & Treatment B)
P(Recovery | Large stones & Treatment A) > P(Recovery | Large stones & Treatment B)
P(Recovery | Treatment A) < P(Recovery | Treatment B)

But this is no paradox at all! There is no law of probability that tells us:

If P(A | B & C) > P(A | B & ~C)
and P(A | ~B & C) > P(A | ~B & ~C),
then P(A | C) > P(A | ~C)

There is, however, a law of probability that tells us:

If P(A & B | C) > P(A & B | ~C)
and P(A & ~B | C) > P(A & ~B | ~C),
then P(A | C) > P(A | ~C)

And if we represented the data in terms of these joint probabilities (probability of recovery AND small stones given Treatment A, for example) instead of conditional probabilities, we’d find that the probabilities add up nicely and the paradox vanishes.

                 Treatment A         Treatment B
Small stones     23% (81/350)        67% (234/350)
Large stones     55% (192/350)       16% (55/350)
All patients     78% (273/350)       83% (289/350)

It is in this sense that the paradox arises from improper treatment of conditional probabilities as joint probabilities.
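
To see this in the numbers, here is a small sketch (my own code, not part of the original analysis) that recomputes both kinds of probabilities directly from the counts in the tables above:

```python
# (treatment, stone size) -> (number recovered, number treated)
counts = {
    ("A", "small"): (81, 87),   ("A", "large"): (192, 263),
    ("B", "small"): (234, 270), ("B", "large"): (55, 80),
}

for treatment in ["A", "B"]:
    total = sum(counts[(treatment, size)][1] for size in ["small", "large"])
    recovered = sum(counts[(treatment, size)][0] for size in ["small", "large"])
    print(f"P(Recovery | Treatment {treatment}) = {recovered / total:.2f}")
    for size in ["small", "large"]:
        r, n = counts[(treatment, size)]
        print(f"  P(Recovery | {size} stones & Treatment {treatment}) = {r / n:.2f}")
        print(f"  P(Recovery & {size} stones | Treatment {treatment}) = {r / total:.2f}")
```

The conditional probabilities reverse between the pooled and the split-up data; the joint probabilities just sum to the overall recovery rates, so no reversal is possible there.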

***

This tells us why we got a paradoxical result, but isn’t quite fully satisfying. We still want to know, for instance, whether we should give somebody with small kidney stones Treatment A or Treatment B.

The fully satisfying answer comes from causal modeling. The causal diagram we will draw will have three variables, A (which is true if you receive Treatment A and false if you receive Treatment B), S (which is true if you have small kidney stones and false if you have large), and R (which is true if you recovered).

Our causal diagram should express that there is some causal relationship between the treatment you receive (A) and whether you recover (R). It should also show a causal relationship between the size of your kidney stone (S) and your recovery, as the data indicates that larger kidney stones make recovery less likely.

And finally, it should show a causal arrow from the size of the kidney stone to the treatment that you receive. This final arrow comes from the fact that more people with large stones were given Treatment A than Treatment B, and more people with small stones were given Treatment B than Treatment A.

This gives us the following diagram:

Simpson's paradox

The values of P(S), P(A | S), and P(A | ~S) were calculated from the table we started with. For instance, the value of P(S) was calculated by adding up all the patients that had small kidney stones, and dividing by the total number of patients in the study: (87 + 270) / 700.

Now, we want to know if P(R | A) > P(R | ~A) (that is, if recovery is more likely given Treatment A than given Treatment B).

If we just look at the conditional probabilities given by our first table, then we are taking into account two sources of dependency between treatment type and recovery. The first is the direct causal relationship, which is what we want to know. The second is the spurious correlation between A and R as a result of the common cause S.

Simpson's paradox paths

Here the red arrows represent “paths of dependency” between A and R. For example, since those with small stones are more likely to get Treatment B, and are also more likely to recover, this will result in a spurious correlation between Treatment B and recovery.

So how do we determine the actual, non-spurious causal dependency between A and R?

Easy!

If we observe the value of S, then we screen A off from R along the path through S! This removes the spurious correlation, and leaves us with just the causal relationship that we want.

Simpson's paradox broken

What this means is that the true nature of the relationship between treatment type and recovery can be determined by breaking down the data in terms of kidney stone size. Looking back at our original data:

Recovery rate    Treatment A         Treatment B
Small stones     93% (81/87)         87% (234/270)
Large stones     73% (192/263)       69% (55/80)
All patients     78% (273/350)       83% (289/350)

This corresponds to looking at the data divided up by size of stones, and not the data on all patients. And since for each stone size category, Treatment A was more effective than Treatment B, this is the true causal relationship between A and R!

***

A nice feature of the framework of causal modeling is that there are often multiple ways to think about the same problem. So instead of thinking about this in terms of screening off the spurious correlation through observation of S, we could also think in terms of causal interventions.

In other words, to determine the true nature of the causal relationship between A and R, we want to intervene on A, and see what happens to R.

This corresponds to calculating if P(R | do A) > P(R | do ~A), rather than if P(R | A) > P(R | ~A).

Intervention on A gives us the new diagram:

Simpson's paradox intervene

With this diagram, we can calculate:

P(R | do A)
= P(R & S | do A) + P(R & ~S | do A)
= P(S) * P(R | A & S) + P(~S) * P(R | A & ~S)
= 51% * 93% + 49% * 73%
= 83.2%

And…

P(R | do ~A)
= P(R & S | do ~A) + P(R & ~S | do ~A)
= P(S) * P(R | ~A & S) + P(~S) * P(R | ~A & ~S)
= 51% * 87% + 49% * 69%
= 78.2%

Now not only do we see that Treatment A is better than Treatment B, but we can also compute the exact amount by which it is better – it improves recovery chances by about 5 percentage points!
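
Here is the same adjustment-formula arithmetic in code (my own sketch; it just reproduces the calculation above from the raw counts):

```python
# Raw counts: (number recovered, number treated) for each treatment and stone size.
small_A, large_A = (81, 87), (192, 263)
small_B, large_B = (234, 270), (55, 80)

n_total = small_A[1] + large_A[1] + small_B[1] + large_B[1]   # 700 patients
p_small = (small_A[1] + small_B[1]) / n_total                 # P(S) = 0.51

def p_recover_do(small, large):
    """P(R | do(treatment)): sum over stone sizes of P(S) * P(R | treatment, S)."""
    return p_small * (small[0] / small[1]) + (1 - p_small) * (large[0] / large[1])

print(f"P(R | do A) = {p_recover_do(small_A, large_A):.3f}")  # about 0.83
print(f"P(R | do B) = {p_recover_do(small_B, large_B):.3f}")  # about 0.78
```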

Next, we’re going to go kind of crazy with Simpson’s paradox and show how to construct an infinite chain of Simpson’s paradoxes.

Fantastic paper on all of this here.

Previous: Screening off and explaining away

Next: Iterated Simpson’s paradox

Screening off and explaining away

Previous: Correlation and causation

In this post, I’ll explain three of the most valuable tools for inference that arise naturally from causal modeling.

Screening off via causal intermediary
Screening off via common cause
Explaining away

First:

Suppose that the rain causes the sidewalk to get wet, and the sidewalk getting wet causes you to slip and break your elbow.

rain & slip & elbow.png

This means that if you know that it’s raining, then you know that a broken elbow is more likely. But if you also know that the sidewalk is wet, then additionally learning whether or not it is raining tells you nothing more about how likely a broken elbow is. After all, the rain is only a useful piece of information for predicting broken elbows insofar as it allows you to infer sidewalk-wetness.

In other words, the information about sidewalk-wetness screens off the information about whether or not it is raining with respect to broken elbows. In particular, sidewalk-wetness screens off rain because it is a causal intermediary to broken elbows.

Second:

Suppose that being wealthy causes you to eat more nutritious food, and being wealthy also causes you to own fancy cars.

common cause.png

This means that if you see somebody in a fancy car, you know it is more likely that they eat nutritious food. But if you already knew that they were wealthy, then knowing that their car is fancy tells you no more about the nutritiousness of their diet. After all, the fanciness of the car is only a useful piece of information for predicting nutritious diets insofar as it allows you to infer wealth.

In other words, wealth screens off ownership of fancy cars with respect to nutrition. In particular, wealth screens off ownership of fancy cars because it is a common cause of nutrition and fancy car owning.

Third:

Suppose that being really intelligent causes you to get on television, and being really attractive causes you to get on television, but attractiveness and intelligence are not directly causally related.

smart & hot & tv.png

This means that in the general population, you don’t learn anything about somebody’s intelligence by assessing their attractiveness. But if you know that they are on television, then you do learn something about their intelligence by assessing their attractiveness.

In particular, if you know that somebody is on television, and then you learn that they are attractive, then it becomes less likely that they are intelligent than it was before you learned this.

We say that in this scenario attractiveness explains away intelligence, given the knowledge that they are on television.
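
If you want to watch this effect appear from nowhere, here is a small simulation (my own sketch; the distributions and the bar for getting on television are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Intelligence and attractiveness are generated completely independently.
intelligence = rng.normal(size=n)
attractiveness = rng.normal(size=n)

# You get on TV if you are sufficiently smart and/or attractive overall.
on_tv = (intelligence + attractiveness) > 2.0

# In the whole population the two traits are uncorrelated...
print(np.corrcoef(intelligence, attractiveness)[0, 1])                 # roughly 0
# ...but among the people on TV, they are negatively correlated.
print(np.corrcoef(intelligence[on_tv], attractiveness[on_tv])[0, 1])   # clearly negative
```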

***

I want to introduce some notation that will allow us to really compactly describe these types of effects and visualize them clearly.

We’ll depict an ‘observed variable’ in a causal diagram as follows:

A > B > C with observed B

This diagram says that A causes B, B causes C, and the value of B is known.

In addition, we talked about the value of one variable telling you something about the value of another variable, given some information about other variables. For this we use the language of dependence.

To say, for example, that A and B are independent given C, we write:

(A ⫫ B) | C

And to say that A and B are dependent given C, we just write:

~(A ⫫ B) | C

With this notation, we can summarize everything I said above with the following diagram:

Screening off and explaining away

In words, the first row expresses dependent variables that become independent when conditioning on causal intermediaries. B screens off A from C as a causal intermediary.

The second expresses dependent variables that become independent when conditioning on common causes. B screens off A from C as a common cause.

And the third row expresses independent variables that become dependent when conditioning on common effects. A explains away C, given B.

***

Repeated application of these three rules allows you to determine dependencies in complicated causal diagrams. Let’s say that somebody gives you the following diagram:

Complex cause

First they ask you if E and F are going to be correlated.

We can answer this just by tracing paths through the diagram. If there is a path from E to F along which every connected triple of variables carries dependence between its end variables, then we know that E and F are dependent.

The path ECA is a causal chain, and C is not observed, so E and A are dependent along this path. Next, the path CAD is a common cause path, and the common cause (A) is not observed, thus retaining dependence again along the path. And finally, the path ADF is a causal chain with D unobserved, so A and F are dependent along the path.

So E and F are dependent.

Now your questioner tells you the value of D, and asks again whether E and F are dependent.

Complex cause obs D

Now dependence still exists along the paths ECA and CAD, but the path ADF breaks the dependence. This follows from the rule in row 1: D is observed, so A is screened off from F. Since A is screened off, E is as well. This means that E and F are now independent.

Suppose they asked you if E and B were dependent before telling you the value of D. In this case, the dependence travels along ECA and along CAD, but it is broken along ADB: D is a common effect of A and B, and since D is unobserved, that path is blocked. This follows from our rule in row 3, so E and B are independent.

And if they asked you if E and B were dependent after telling you the value of D, then you would respond that they are dependent. Now the last leg of the path (ADB) carries dependence, because once D is observed, A and B explain each other away.
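
A quick way to check all of these answers is to simulate the diagram. Here is a sketch using linear-Gaussian structural equations of my own choosing, with a regression-residual trick standing in for conditioning on D:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Structural equations for the diagram: A -> C -> E, A -> D -> F, B -> D.
A = rng.normal(size=n)
B = rng.normal(size=n)
C = A + rng.normal(size=n)
D = A + B + rng.normal(size=n)
E = C + rng.normal(size=n)
F = D + rng.normal(size=n)

def corr(x, y):
    return np.corrcoef(x, y)[0, 1]

def corr_given(x, y, z):
    # Correlation after linearly regressing z out of both variables --
    # a stand-in for conditioning on z in this linear-Gaussian setting.
    rx = x - np.polyval(np.polyfit(z, x, 1), z)
    ry = y - np.polyval(np.polyfit(z, y, 1), z)
    return corr(rx, ry)

print(corr(E, F))           # clearly nonzero: E and F are dependent
print(corr_given(E, F, D))  # about 0: observing D screens E off from F
print(corr(E, B))           # about 0: E and B are independent
print(corr_given(E, B, D))  # clearly nonzero: conditioning on the common effect D
                            # makes E and B dependent (explaining away)
```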

The general ability to look at a complicated causal diagram and read off these dependencies is a valuable tool, and we will come back to it in the future.

Next, I’ll talk about one of my current favorite applications of causal diagrams: Simpson’s paradox!

Previous: Correlation and causation

Next: Simpson’s paradox

Societal Failure Modes

(Nothing original, besides potentially this specific way of framing the concepts. This post started off short and ended up wayyy too long, and I don’t have the proper level of executive control to make myself shorten it significantly. So sorry, you’re stuck with this!)

Noam Chomsky in a recent interview said about the Republican Party:

I mean, has there ever been an organization in human history that is dedicated, with such commitment, to the destruction of organized human life on Earth? Not that I’m aware of. Is the Republican organization – I hesitate to call it a party – committed to that? Overwhelmingly. There isn’t even any question about it.

And later in the same interview:

… extermination of the species is very much an – very much an open question. I don’t want to say it’s solely the impact of the Republican Party – obviously, that’s false – but they certainly are in the lead in openly advocating and working for destruction of the human species.

In Chomsky’s mind, members of the Republican Party apparently sit in dark rooms scheming about how best to destroy all that is good and sacred.

I just watched the most recent Star Wars movie, and was struck by a sense of some relationship between the sentiment being expressed by Chomsky here and a statement made by Supreme Leader Snoke:

The seed of the Jedi Order lives. As long as he does, hope lives within the galaxy. I thought you would be the one to snuff it out.

There’s a really easy pattern of thought to fall into, which is something like “When things go wrong, it’s because of evil people doing evil things.”

It’s a really tempting idea. It diagnoses our societal problems as a simple “good guys vs bad guys” story – easy to understand and to convince others of. And it comes with an automatic solution, one that is very intuitive, simple, and highly self-gratifying: “Get rid of the bad guys, and just let us good guys make all the decisions!”

I think that the prevalence of this sort of story in the entertainment industry gives us some sort of evidence of its memetic power as a go-to explanation for problems. Think about how intensely the movie industry is optimizing for densely packed megadoses of gratifying storylines, visual feasts, appealing characters, and all the rest. The degree to which two and a half hours can be packed with constant intense emotional stimulation is fairly astounding.

Given this competitive market for appealing stories, it makes sense that we’d expect to gain some level of insight into the types of memes that we are most vulnerable to by looking at those types of stories and plot devices that appear over and over again. And this meme in particular, the theme of “social problems are caused by evil people,” is astonishingly universal across entertainment.

***

That this meme is wrong is the first of two big insights that I’ve been internalizing more and more in the past year. These are:

  1. When stuff goes wrong, or the world seems like it’s stuck in shitty and totally repairable ways, evil people are not the only explanation. In fact, they are often the least helpful explanation.
  2. Talking about the “motives” of an institution can be extremely useful. These motives can overpower the motives of the individuals that make up that institution, making them more or less irrelevant. In this way, we can end up with a description of institutions with weird desires and inclinations that are totally distinct from those of the people that make them up, and yet the institutions are in charge of what actually happens in the world.

On the second insight first: this is a sense in which institutions can be very very powerful. It’s not just the sense of powerful that means “able to implement lots of large-scale policies and cause lots of big changes”. It’s more like “able to override the desires of individuals within your range of influence, manipulating and bending them to your will.”

I was talking to my sister and her fiancé, both law students, about the US judicial system, and Supreme Court justices in particular. I wanted to understand what it is that really constrains the decisions of these highest judicial authorities; what are the forces that result in Justice Ginsburg writing the particular decision that she ends up writing.

What they ended up concluding is that there are essentially no such external forces.

Sure, there are ways in which Supreme Court justices can lose their jobs in principle, but this has never actually happened. And Congress can and does sometimes ignore Supreme Court decisions on statutory issues, but this doesn’t generally give the Justices much reason to write their decisions any differently.

What guides Justice Ginsburg is what she believes is right – her ideology – and perhaps her legacy. In other words, purely internal forces. I tried to think of other people in positions that allow a similar degree of freedom in their ability to enact social change, and failed.

The first sense of power as ‘able to cause lots of things to happen’ really doesn’t align with the second sense of ‘free from external constraints on your decision-making’. An autocratic ruler might be plenty powerful in terms of ability to decide economic policy or assassinate journalists or wage war on neighboring states, but is highly constrained in his decisions by a tight incentive structure around what allows him to keep doing these things.

On the other hand, a Supreme Court justice could have total power to do whatever she personally desires, but never do anything remarkable or make any significant long-term impact on society.

The fact that this is so rare – that we could only think of a single example of a position like this – tells us about the way that powerful institutions are able to warp and override the individual motivations of the humans that compose them.

The rest of this post is on the first insight, about the idea that social problems are often not caused by evil people. There are two general things to say about evil people:

  1. I think that it’s often the case that “evil people” is a very surface-level explanation, able to capture some aspects of reality and roughly get at the problem, but not touching anywhere near the roots of the issue. One example of this may be when you ask people what the cause of the 2007 financial crisis was, and they go on about greedy bankers destroying America with their insatiable thirst for wealth.
    While they might be landing on some semblance of truth there, they are really missing a lot of important subtlety in terms of the incentive structures of financial institutions, and how they led the bankers to behave in the way that they did. They are also very naturally led to unproductive “solutions” to the problems – what do we do, ban greed? No more bankers? Chuck capitalism? (Viva la revolución?) If you try to explain things on the deeper level of the incentive structures that led to “greedy banker” behavior, then you stand a chance of actually understanding how to solve the root problem and prevent it from recurring.
  2. Appeals to “evil people” can only explain a small proportion of the actual problems that we actually see in the world. There are a massive number of ways in which groups of human beings, all good people not trying to cause destruction and chaos or extinguish the last lights of hope in the universe, can end up steering themselves into highly suboptimal and unfortunate states.

My main goal in this post is to try to taxonomize these different causes of civilizational failure.

Previously I gave a barebones taxonomy of some of the reasons that low-hanging policy fruits might be left unplucked. Here I want to give a more comprehensive list.

***

I think a useful way to frame these issues is in terms of Nash equilibria. The worst-case scenario is where there are Pareto improvements all around us, and yet none of these improvements correspond to worlds that are in a Nash equilibrium. These are cases where the prospect of improvement seems fairly hopeless without a significant restructuring of our institutions.

Slightly better scenarios are where we have improvements that do correspond to a world in a Nash equilibrium, but we just happen to be stuck in a worse Nash equilibrium. So to start with, we have:

  • The better world is not in a Nash equilibrium
  • The better world is in a Nash equilibrium

I think that failures of the first kind are very commonly made by bright-eyed idealists trying to imagine setting up their perfect societies.

These types of failures correspond to questions like “okay, so once you’ve set up your perfect world, how will you assure that it stays that way?” and can be spotted in plans that involve steps like “well, I’m just assuming that all the people in my world are kind enough to not follow their incentives down this obvious path to failure.”

Nash equilibria correspond to stable societal setups. Any societal setup that is not in a Nash equilibrium can fairly quickly be expected to degenerate into some actually stable societal set-up.

The ways in which a given societal set up fails to be stable can be quite subtle and non-obvious, which I suspect is why this step is so often overlooked by reformers that think they see obvious ways to improve the world.

One of my favorite examples of this is the make-up problem. It starts with the following assumptions: (1) makeup makes people more attractive (which they want to be), and (2) an individual’s attractiveness is valued relative to the individuals around them.

Let’s now consider two societies, a make-up free society and a makeup-ubiquitous society. In both societies, everybody’s relative attractiveness is the same, which means that nobody is better off or worse off in one society over another on the basis of their attractiveness.

But the society in which everybody wears makeup is worse for everybody, because everybody has to spend a little bit of their money buying makeup. In other words, the makeup-free world represents a Pareto improvement over the makeup-ubiquitous world.

What’s worse: the makeup-free world is not in a Nash equilibrium, and the makeup-ubiquitous society is!

We can see this by imagining a society that starts makeup-free, and looking at the incentives of an individual within that society. This individual only stands to gain by wearing makeup, because she becomes more attractive relative to everybody else. So she buys makeup. Everybody else reasons the same way, so the make-up free society quickly degenerates into its equilibrium version, the makeup-ubiquitous society.

Sure, she can see that if everybody reasoned this way, then she would be worse off (she would have spent her money and gained nothing from it). But this reasoning does not help her. Why? Because regardless of what everybody else does, she is still better off wearing makeup.

If nobody wears makeup, then her relative attractiveness rises if she wears makeup. And if everybody else wears makeup, then her relative attractiveness rises if she wears makeup. It’s just that it’s rising from a lower starting point.

So no matter what society we start in, we end up in the suboptimal makeup-ubiquitous society. (I have to point out here that this is assuming a standard causal decision theory framework, which I think is wrong. Timeless decision theory will object to this line of reasoning, and will be able to maintain a makeup-free equilibrium.)
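
Here is a toy version of the game in code (my own numbers, chosen only so that the attractiveness boost from makeup exceeds its cost), which checks the dominance argument directly:

```python
from itertools import product

MAKEUP_BOOST = 1.0   # hypothetical gain in attractiveness from wearing makeup
MAKEUP_COST = 0.5    # hypothetical cost of buying it

def payoff(i_wear, other_wears):
    # Attractiveness is positional: only the difference between us matters.
    relative = (MAKEUP_BOOST if i_wear else 0.0) - (MAKEUP_BOOST if other_wears else 0.0)
    return relative - (MAKEUP_COST if i_wear else 0.0)

for i_wear, other_wears in product([False, True], repeat=2):
    print(f"I wear makeup: {i_wear!s:5}  they wear makeup: {other_wears!s:5}  "
          f"my payoff: {payoff(i_wear, other_wears):+.1f}")

# Whatever the other person does, wearing makeup raises my payoff by
# MAKEUP_BOOST - MAKEUP_COST = +0.5, so "wear makeup" is a dominant strategy and
# everybody-wears-makeup is the only Nash equilibrium. But both players get 0.0
# if neither wears makeup and -0.5 if both do: the equilibrium is Pareto-dominated.
```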

We want to say “but just in this society assume that everybody is a good enough person to recognize the problem with makeup-wearing, and doesn’t do so!“

But that’s missing the entire point of civilization building – dealing with the fact that we will end up leaving non-Nash-equilibrium societal setups and degenerating in unexpected ways.

This failure mode arises because of the nature of positional goods, which are exactly what they sound like. In our example, attractiveness is a positional good, because your attractiveness is determined by looking at your position with respect to all other individuals (and yes this is a bit contrived and no I don’t think that attractiveness is purely positional, though I think that this is in part an actual problem).

To some degree, money is also positional. If all prices and wages fell proportionally tomorrow, then everybody would quickly end up with the same purchasing power as they had yesterday. And if everybody got an extra dollar to spend tomorrow, then prices would rise in response, the value of their money would decrease, and nobody would be better off (there are a lot of subtleties that make this not exactly true, but let’s set those aside for the sake of simplicity).

Positional goods are just one example where we can naturally end up with our desired societies not being Nash equilibria.

The more general situation is just bad incentive structures, whereby individuals are incentivized to defect against a benevolent order, and society tosses and turns and settles at the nearest Nash equilibrium.

  • The better world is not a Nash equilibrium
    • Positional goods
    • Bad incentive structures
  • The better world is a Nash equilibrium

***

If the better world is in a Nash equilibrium, then we can actually imagine this world coming into being and not crumbling into a degenerate cousin-world. If a magical omniscient society-optimizing God stepped in and rearranged things, then they would likely stay that way, and we’d end up with a stable and happier world.

But there are a lot of reasons why all of us that are not magical society-optimizing Gods can do very little to make the changes that we desire. Said differently, there are many ways in which current Nash equilibria can do a great job of keeping us stuck in the existing system.

Three basic types of problems are (1) where the decision makers are not incentivized to implement this policy, (2) where valuable information fails to reach decision makers, and (3) where decision makers do have the right incentives and information, but fail because of coordination problems.

  • The better world is not a Nash equilibrium
    • Positional goods
    • Bad incentive structures
  • The better world is a Nash equilibrium
    • You can’t reach it because you’re stuck in a lesser Nash equilibrium.
      • Lack of incentives in decision makers
      • Asymmetric information
      • Coordination problems

Lack of incentives in decision makers can take many forms. The most famous of these occurs when policies result in externalities. This is essentially just where decision-makers do not absorb some of the consequences of a policy.

Negative externalities help to explain why behaviors that are net negative to society exist and continue (resulting in things like climate change and overfishing, for example), and positive externalities help to explain why some behaviors that would be net positive for society are not happening.

An even worse case of misalignment of incentives would be where the positive consequences on society would be negative consequences on decision-makers, or vice-versa. Our first-past-the-post voting system might be an example of this – abandoning FPTP would be great exactly because it allows us to remove the current set of decision-makers and replace them with a better set. This would be great for us, but not so great for them.

I’m not aware of a name for this class of scenarios, and will just call it ‘perverse incentives.’

I think that this is also where the traditional concept of “evil people” would lie – evil people are those whose incentives are dramatically misaligned. This could mean that they are apathetic towards societal improvements, but typically fiction’s common conception of villains is individuals actively trying to harm society.

Lack of liquidity is another potential source of absent incentives. This is where there are plenty of individuals that do have the right incentives, but there is not enough freedom for them to actually make significant changes.

An example of this could be if a bunch of individuals all had the same idea for a fantastic new app that would perform some missing social function, and all know how to make the app, but are barred by burdensome costs of actually entering the market and getting the app out there.

The app will not get developed and society will be worse off, as a result of the difficulty in converting good app ideas to cash.

  • Lack of incentives in decision makers
    • Misalignment of incentives
      • Externalities
      • Perverse incentives
        • Evil people
    • Lack of liquidity

***

Asymmetric information is a well-known phenomenon that can lead societies into ruts. The classic example of this is the lemons problem. There are versions of asymmetric information problems in the insurance market, the housing market, the health care market and the charity market.
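
For concreteness, here is a bare-bones sketch of the standard lemons-market unraveling story, with toy numbers of my own choosing: car quality is uniform between 0 and 1, sellers will only sell at a price above their car’s quality, and buyers value a car at 1.5 times its quality but can’t observe quality before buying.

```python
# At any price p (at most 1), only cars of quality below p are offered, so the
# average quality on the market is p/2, and buyers will only pay 1.5 * (p/2).
price = 1.0
for step in range(8):
    avg_quality_on_market = price / 2
    buyers_willingness = 1.5 * avg_quality_on_market
    print(f"price {price:.3f} -> average quality {avg_quality_on_market:.3f} "
          f"-> buyers will only pay {buyers_willingness:.3f}")
    price = buyers_willingness   # the price gets bid down and the market unravels
```

The price spirals downward toward zero: good cars get driven out of the market entirely, even though there are trades that both sides would benefit from.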

This deserves its own category because asymmetric information can bar progress, even when decision-makers have good incentives and important good policy ideas are out there.

  • Lack of incentives in decision makers
    • Misalignment of incentives
      • Externalities
      • Perverse incentives
        • Evil people
    • Lack of liquidity
  • Asymmetric information

And of course, there are coordination problems. The makeup example given earlier is an example of a coordination problem – if everybody could successfully coordinate and avoid the temptation of makeup, then they’d all end up better off. But since each individual is incentivized to defect, the coordination attempts will break down.

Coordination problems generally occur when you have multi-step or multi-factor decision processes. I.e. when the decision cannot be unilaterally made by a single individual, and must be done as a cooperative effort between groups of individuals operating under different incentive structures.

A nice clear example of this comes from Eliezer Yudkowsky, who imagines a hypothetical new site called Danslist, designed to be a competitor to Craigslist.

Danslist is better than Craigslist in every way, and everybody would prefer that it was the site in use. The problem is that Craigslist is older, so everybody is already on that site.

Buyers will only switch to Danslist if there are enough sellers there, and sellers will only switch to Danslist if there are enough buyers there. This makes the decision to switch to Danslist a decision that is dependent on two factors, the buyers and the sellers.

In particular, an N-factor market is one where there are N different incentive structures that must interact for action to occur. In N-factor markets, the larger N is, the more difficult it is to make good decisions happen.

This is really important, because when markets are stuck in this way, inefficiencies arise and people can profit off of the sub-optimality of the situation.

So Craigslist can charge more than Danslist, while offering a worse service, as long as this doesn’t provide sufficient incentive for enough people to switch over.

Yudkowsky also talks about Elsevier as an instance of this. Elsevier is a profiteer that captured several large and prestigious scientific journals and jacked up subscription prices. While researchers, universities, and readers could in principle just unanimously switch their publication patterns to non-Elsevier journals, this involves solving a fairly tough coordination problem. (It has happened a few times.)

One solution to coordination problems is an ability to credibly pre-commit. So if everybody in the makeup-ubiquitous world was able to sign a magical agreement that truly and completely credibly bound their future actions in a way that they couldn’t defect from, then they could end up in a better world.

When individuals cannot credibly pre-commit, then this naturally results in coordination problems.

And finally, there are other weird reasons that are harder to categorize for why we end up stuck in bad Nash equilibria.

For instance, a system in which politicians respond to the wills of voters and are genuinely accountable to them seems like a system with a nicely aligned incentive structure.

But if for some reason the majority of the public resists policies that would actually improve their lives, or pushes policies that would hurt them, then this system will still end up in a failure mode. Perhaps this failure mode is not best expressed as a Nash equilibrium, as there is a sense in which voters do have an incentive to switch to a more sensible view, but I will express it as such regardless.

This looks to me like what is happening with popular opinion about minimum wage laws.

Huge amounts of people support minimum wage laws, including those that may actually lose their jobs as the result of those laws. While I’m aware that there isn’t a strong consensus among economists as to the real effects of a moderate minimum-wage increase, it is striking to me that so many people are so convinced that it can only be net positive for them, when there is plenty of evidence that it may not be.

Another instance of this is the idea of “wage stickiness”.

This is the idea that employers are more likely to fire their workers than to lower their wages, resulting in an artificial “stickiness” to the current wages. The proposed reason for why this is so is that worker morale is hurt more by decreased wages than by coworkers being fired.

Sticky wages are especially bad when you take into account inflation effects. If an economy has an inflation rate of 10%, then an employer that keeps her employees’ wages constant is in effect cutting their wages by 10%. Even if she raises their wages by 5%, they’re still losing money!

And if the economy enters a recession, with say an inflation rate of -5%, then an employer will have to cut wages by 5% in order to stay at the market equilibrium. But since wages are sticky and her workers won’t realize that they are actually not losing any money despite the wage cut, she will be more likely to fire workers instead.

A friend described to me an interaction he had had with a coworker at a manufacturing plant. My friend had been recently hired in the same position as this man, and was receiving the minimum wage at 5 dollars an hour.

His coworker was telling him about how he was being paid so much, because he had been working there so many years and was constantly getting pay raises. He was mortified when he compared wages with my friend, and found that they were receiving the exact same amount.

Status quo bias is another important effect to keep in mind here. Individuals are likely to favor the current status quo, for no reason besides that it is the status quo. This type of effect can add to political inertia and further entrench society in a suboptimal Nash equilibrium.

I’ll just lump all of these effects in as “Stupidity & cognitive biases.”

***

I want to close by adding a third category that I’ve been starting to suspect is more important than I previously realized. This is:

  • The better world is in a Nash equilibrium, and you can reach it, and you will reach it, just WAIT a little bit.

I add this because I sometimes forget that society is a massive complicated beast with enormous inertia behind its existing structure, and that just because some favored policy of yours has not yet been fully implemented everywhere, this does not mean that there is a deep underlying unsolvable problem.

So, for instance, one time I puzzled for a couple weeks about why, given the apparently low cost of ending global poverty forever, it still exists.

Aren’t there enough politicians that are aware of the low cost? And aren’t they sufficiently motivated to pick up the windfall of public support and goodwill that they would surely get? (To say nothing of massively improving the world)

Then I watched Hans Rosling’s 2008 lecture “Don’t Panic” (which, by the way, should be required watching for everyone) and realized that global poverty is actually being ended, just slowly and gradually.

The UN’s Millennium Development Goals, set in 2000, aimed to halve extreme poverty by 2015 – a target that was hit about five years ahead of schedule – and the successor goals now aim to end extreme poverty entirely by 2030.

We’re on course to see the end of extreme poverty; it’ll just take a few more years. And after all, it should be expected that raising an entire segment of the world’s population above the poverty line will take some time.

So in this case, the answer to my question of “Why is this problem not being solved, if solutions exist?” was actually “Um, it is being solved, you’re just impatient.”

And earlier I wrote about overfishing and the ridiculously obvious solutions to the problem. I concluded by pessimistically noting that the fishing lobby has a significant influence over policy makers, which is why the problem cannot be solved.

While the first part of that is true, it is also the case that ITQ policies are being adopted in more and more fisheries, the Northwest Atlantic cod fisheries are being revived as a result of marine protection policies, and governments are making real improvements along this front.

This is a nice optimistic note to end on – the idea that not everything is a horrible unsolvable trap and that we can and do make real progress.

***

So we have:

  • The better world is not a Nash equilibrium
    • Positional goods
    • Bad incentive structures
  • The better world is a Nash equilibrium
    • You can’t reach it because you’re stuck in a lesser Nash equilibrium.
      • Lack of incentives in decision makers
        • Misalignment of incentives
          • Externalities
          • Perverse incentives
        • Lack of liquidity
      • Asymmetric information
      • Coordination problems
        • Multi-factor markets
        • Multi-step decision processes
        • Inability to pre-commit
      • Stupidity & cognitive biases
    • You can and will reach it, just be patient.

I don’t think that this overall layout is perfect, or completely encompasses all failure modes of society. But I suspect that it is along the right lines of how to think about these issues. I’ve had conversations where people will say things like “Society would be better if we just got rid of all money” or “If somebody could just remove all those darned Republicans from power, imagine how much everything would improve” or “If I was elected dictator-for-life, I could fix all the world’s problems.”

I think that people that think this way are often really missing the point. It’s dead easy to look at the world’s problems, find somebody or something to point at and blame, and proclaim that removing them will fix everything. But the majority of the work you need to do to actually improve society involves answering really hard questions like “Am I sure that I haven’t overlooked some way in which my proposed policy degenerates into a suboptimal Nash equilibrium? What types of incentive structures naturally arise if I modify society in this way? How could somebody actually make this societal change from within the current system?”

That’s really the goal of this taxonomy – to try to give a sense of what the right questions to ask are.

(More & better reading along these same lines here and here.)

Dialogue: Why you should one-box in Newcomb’s problem

(Nothing original here, just my presentation of the most interesting arguments I’ve seen on the various sides)

Newcomb’s problem: You find yourself in a room with two boxes in it. Box #1 is clear, and you can see $10,000 inside. Box #2 is opaque. A loud voice announces to you: “Box 2 has either 1 million dollars inside of it or nothing. You have a choice: Either you take just Box 2 by itself, or you take both Box 1 and Box 2.”

As you’re reaching forward to take both boxes, the voice declares: “Wait! There’s a catch.

Sometime before you entered the room, a Predictor with enormous computing power scanned you, made an incredibly detailed simulation of you, and used it to make a prediction about what decision you would make. The Predictor has done similar simulations many times in the past, and has never been wrong. If the Predictor predicted that you would take just Box 2, then it filled up the box with 1 million dollars. And if the Predictor predicted that you would take both boxes, then it left Box 2 empty. Now you may make your choice.”

Newcomb.png

The most initially intuitive answer to most people is to take both boxes. Here’s the strongest argument for why this makes sense, presented by Claus the causal thinker.

Claus: “The Predictor has already made its prediction and fixed the contents of the box. So we know for sure that my decision can’t possibly have any impact on whether Box 2 is full or empty. And in either case, I am better off taking both boxes than just one! Think about it like this: whether I one-box or two-box, I still end up taking Box 2. So let’s consider Box 2 taken – I have no choice in the matter. Now the only real question is if I’m also going to take Box 1. And Box 1 has $10,000 inside it! I can see it right there! My choice is really whether to take the free $10,000 or not, and I’d be a fool to leave it behind.”

***

Claus makes a very convincing argument. On his calculation, the expected value of two-boxing is strictly greater than the expected value of one-boxing, regardless of what probabilities he puts on the second box being empty. We’ll call this the dominance argument.

But Claus is making a fundamental error in his calculation. Let’s let a different type of decision theorist named Eve interrogate Claus.

Eve: “So, Claus, I’m curious about how you arrived at your answer. You say that your decision about whether or not to take Box 1 can’t possibly impact the contents of Box 2. I think that I agree with this. But do you agree that if you don’t take Box 1, it is more likely that Box 2 has a million dollars inside it?”

Claus: “I can’t see how that could be the case. The box’s contents are already fixed. How could my decision about something entirely causally unrelated make it any more likely that the contents are one way or the other?”

Eve: “Well, it’s not actually that unusual. There are plenty of things that are correlated without any direct causal impact between them. For example, say that a certain gene causes you to be a good juggler, but also causes a high chance of a certain disease. In this case, juggling ability and incidence of the disease will end up being correlated in the population, even though neither one is directly causing the other. And if you’re a good juggler, then you should be more worried that maybe you also have the disease!”

Claus: “Sure, but I don’t see how that case is anything like this one…”

Eve: “The two cases are actually structurally identical! Let me draw some causal diagrams…” (Claus rummages around for paper and a pencil)

Newcomb vs Juggling alt.png

Eve: “In our disease example, we have a common cause (the gene) that is directly causally linked to both the disease and to being a good juggler. So the “disease” variable and the “good juggler” variable are dependent because of the “gene” variable. In your Newcomb problem, the common cause is your past self at the moment that the Predictor scanned you. This common cause is directly linked to both your decision to one-box or two-box in the present, and to the contents of the box. Which means that in the exact same way that being a good juggler makes you more likely to have the disease, two-boxing makes you more likely to end up with an empty Box 2! The two cases are exactly analogous!”

Claus: “Hmm, that all seems correct. But even if my decision to take Box 1 isn’t independent of the contents of Box 2, this doesn’t necessarily mean that I shouldn’t still take both.”

Eve: “Right! But it does invalidate your dominance argument, which implicitly rested on the assumption that you could treat the contents of the box as if they were unaffected by your action. While your actions do not, strictly speaking, causally affect the contents of the box, they do change the likelihoods of the different possible contents! So there is a real sense in which your actions do statistically affect the contents of the box, even though they don’t causally affect them. Anyway, we can just calculate the actual expected values and see whether one-boxing or two-boxing comes out ahead.”

Eve writes out some expected utility calculations:

Newcomb vs Juggling 2
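
The figure isn’t reproduced here, but the calculation Eve is writing out is presumably the standard evidential one. A sketch, with the Predictor’s accuracy left as a parameter (in the problem as stated the Predictor has never been wrong, so the accuracy is 1):

```python
def ev_one_box(accuracy):
    # If you one-box, Box 2 is full with probability `accuracy`.
    return accuracy * 1_000_000 + (1 - accuracy) * 0

def ev_two_box(accuracy):
    # If you two-box, Box 2 is empty with probability `accuracy`.
    return accuracy * 10_000 + (1 - accuracy) * 1_010_000

for accuracy in [1.0, 0.99, 0.9]:
    print(accuracy, ev_one_box(accuracy), ev_two_box(accuracy))
# One-boxing has the higher expected value whenever the Predictor is right
# more than about 50.5% of the time.
```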

Eve: “So you see, it actually turns out to always be better to one-box than to two-box!”

Claus: “Hmm, I guess you’re right. Okay never mind, I guess that I’ll one-box. Thanks!”

***

Claus goes away for a while, and comes back a more sophisticated causal thinker.

Claus: “Hey, remember that I agreed that my decision and the contents of Box 1 are actually dependent upon each other, just not causally?”

Eve: “Yes.”

Claus: “Well, I do still agree with that. But I am also still a two-boxer. I’ll explain – would you hand me that paper?”

Claus scribbles a few equations beneath Eve’s diagrams.

EDT vs CDT.png

Claus: “When you calculated the expected values of one-boxing and two-boxing, you implicitly used Equation (1). Let’s call this equation the “Evidential Decision Algorithm.” You summed over all the possible consequences of your actions, and multiplied the values of each consequence by the conditional probability of that consequence, given the decision.”

Eve: “Yes…”

Claus: “Well, I have a different way to calculate expected values! It’s Equation (2), and I call it the “Causal Decision Algorithm.” I also sum over all possible consequences, but I multiply the value of each consequence by its causal conditional probability, not its ordinary conditional probability! And when you calculate the expected value this way, it turns out to be larger for two-boxing!”
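
(The figure with the two equations isn’t reproduced here, but from the descriptions in the dialogue they are presumably the standard formulas:

Equation (1), evidential:  EU(a) = sum over outcomes o of P(o | a) · U(o)
Equation (2), causal:  EU(a) = sum over outcomes o of P(o | do a) · U(o)

where “do a” is the same intervention notation used in the Simpson’s paradox post.)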

Eve: “Hmm, doesn’t this seem a little arbitrary? Maybe a little ad-hoc?”

Claus: “Not at all! The point of rational decision-making is to choose the decision that causes the best outcomes. What we should be interested in is only the causal links between our decisions and their possible consequences, not the spurious correlations.”

Eve: “Hmm, I can see how that makes sense…”

Claus: “Here, let’s look back at your earlier example about juggling and disease. I agree with what you said that if you observe that you’re a good juggler, you should be worried that you have the disease. But imagine that instead of just observing whether or not you’re a good juggler, you get to decide whether or not to be a good juggler. Say that you can decide to spend many hours training your juggling, and at the end of that process you know that you’ll be a good juggler. Now, according to your decision theory, deciding to train to become a good juggler puts you at a higher risk for having the disease. But that’s ridiculous! We know for sure that your decision to become a good juggler does not make you any more likely to have the disease. Since you’re deciding what actions to take, you should treat your decisions like causal interventions, in which you set the decision variable to one value or another and in the process break all other causal arrows directed at it. And that’s why you should be using the causal conditional probability, not the ordinary conditional probability!”

Eve: “Huh. What you’re saying does have some intuitive appeal. But now I’m starting to think that there is an important difference that we both missed between the juggling example and Newcomb’s problem.”

Eve draws two more diagrams on a new page.

Newcomb vs Juggling 3.png

Eve: “In the juggling case, it makes sense to describe your decision to become a good juggler or not as a causal intervention, because this decision is not part of the chain of causes leading from your genes to your juggling ability – it’s a separate cause, independent of whether or not you have the gene. But in Newcomb’s problem, your decision to one-box or two-box exists along the path of the causal arrow from your past character to your current action! The Predictor predicted every part of you, including the part of you that’s thinking about what action you’re going to take. So while modeling your decision as a causal intervention in the juggling example makes sense, doing so in Newcomb’s case is just empirically wrong! Whatever part of your brain ends up deciding to “intervene” and two-box, the Predictor predicted that this would happen! By the nature of the problem, any way in which you attempt to intervene on your decision will inevitably not actually be a causal intervention.”

***

(Tim, a new type of decision theorist, appears in a puff of smoke)

Claus and Eve: “Gasp! Who are you?”

Tim: “I’m Tim, a new type of decision theorist! And I’m here to say that you’re both wrong!”

Claus and Eve: “Gasp!”

Tim: “I’ll explain with a thought experiment. You both know the prisoner’s dilemma, right? Two prisoners each get to make a choice either to cooperate or defect. The best outcome for each one is that they defect and the other prisoner cooperates, the second best outcome is that both cooperate, the second worst is that they both defect, and the worst is that they cooperate and the other prisoner defects. Famously, two rational agents in a prisoner’s dilemma will end up both defecting, because defecting dominates cooperating as a strategy. If the other prisoner defects, you’re better off defecting, and if the other prisoner cooperates, you’re better off defecting. So you should defect.”

Claus and Eve: “Yes, that seems right…”

Tim: “Well, first of all notice that two rational agents end up behaving in a sub-optimal way. They would both be better off if they each cooperated. But apparently, being ‘rational’ in this case entails ending up worse off. This should be a little unusual to you if you think that rational decision-making is about optimizing your outcomes. But now consider this variant: now you are in a prisoner’s dilemma with an exact clone of yourself. You have identical brains, have lived identical lives, and are now making this decision in identical settings. Now what do you do?”

Claus: “Well, on my decision theory, it’s still the case that I can’t causally affect my clone with my decision. This means that when I treat my decision as an intervention, I won’t end up making the probability that my clone defects given that I defect any higher. So defecting still dominates cooperating as a strategy. I defect!”

Eve: “Well, my answer depends on the set-up of the problem. If there’s some common cause that explains why my clone and I are identical (like maybe we were both manufactured in a twin-clone-making factory), then our decisions will be dependent. If I defect, then my clone will certainly defect, and if I cooperate, then my clone will cooperate. So my algorithm will tell me that cooperation maximizes expected utility.”

Tim: “There is no common cause. It’s by an insanely unlikely coincidence that you and your clone happen to have the same brains and to have lived the same lives. Until this moment, the two of you have been completely causally cut off from each other, with no possibility of any type of causal relationship.”

Eve: “Okay, then I gotta agree with Claus. With no possible common cause and no causal intermediaries, my decision can’t affect my clone’s decision, causally or statistically. So I’ll defect too.”

Tim: “You’re both wrong. Both of you end up defecting, along with your clones, and everybody is worse off. Look, both of you ended up concluding that your decision and the decision of your clone cannot be correlated, because there are no causal connections to generate that correlation. But you and your clone are completely physically identical. Every atom in your brain is in a functionally identical spot as the atoms in your clone’s brain. Are you determinists?”

Eve: “Well, in quantum mechanics -“

Tim: “Forget quantum mechanics! For the purpose of this thought experiment, you exist in a completely deterministic world, where the same initial conditions lead to the same final conditions in every case, always. You and your clone are in identical initial conditions. So your final condition – that is, your decision about whether to cooperate or defect, must be the same. In the setup as I’ve described it, it is logically impossible that you defect and your clone cooperates, or that you cooperate and your clone defects.”

Claus: “Yes, I think you’re right… but then how do we represent this extra dependence in our diagrams? We can’t draw any causal links connecting the two, so how can we express the logical connection between our actions?”

Tim: “I don’t really care how you represent it in your diagram. Maybe draw a special fancy common cause node with special fancy causal arrows that can’t be broken towards both your decision and your clone’s decision.”

TDT Clones.png

Tim: “The point is: there are really only two possible worlds. In World 1, you defect and your clone defects. In World 2, you cooperate and your clone cooperates. Which world would you rather be in?”

Claus and Eve: “World 2.”

Tim: “Good! So you’ll both cooperate. Now, what if the clone is not exactly identical to you? Let’s say that your clone only ends up doing the same thing as you 99.999% of the time. Now what do you do?”

Claus: “Well, if it’s no longer logically impossible for my clone to behave differently from me, then maybe I should defect again?”

Tim: “Do you really want a decision theory that has a discontinuous jump in your behavior from a 99.999% chance to a 100% chance? I mean, I’ve told you that the chance that the clone gives a different answer than you is only 0.001%! Rational agents should take into account all of their information, not just selected pieces of it. Either you ignore this information and end up worse off, or you take it into account and win!”
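
(A quick check of Tim’s point, with the same stand-in payoffs as the sketch above: even at 99.999% correlation, taking the dependence into account leaves cooperation far ahead.)

```python
# Tim's 99.999% clone, with the same assumed payoffs as before.
p_match = 0.99999
eu_cooperate = p_match * 3 + (1 - p_match) * 0   # clone mirrors me vs. clone defects
eu_defect = p_match * 1 + (1 - p_match) * 5
print(eu_cooperate, eu_defect)   # ~2.99997 vs ~1.00004 -> cooperate
```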

Eve: “Okay, yes, it seems reasonable to still expect a 99.999% chance of identical choices in this case. So we should cooperate again. But what does all of this have to do with Newcomb’s problem?”

Tim: “It relates to your answers to Newcomb’s problem in two ways. First, it shows that both of your decision algorithms are wrong! They are failing to take into account that extra logical dependency between actions and consequences that we drew with fancy arrows. And second, Newcomb’s problem is virtually identical to the prisoner’s dilemma with a clone!”

Eve and Claus: “Huh?”

Tim: “Here, let’s modify the prisoner’s dilemma in the following way: If you both cooperate, then you get one million dollars. If you cooperate and your clone defects, you get $0. If you defect and your clone cooperates, you get $1,010,000. And if you both defect, then you get $10,000. Now ‘cooperating’ is the same as one-boxing, and ‘defecting’ is the same as two-boxing!”
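
(Writing Tim’s numbers out as a payoff table, with “C” standing in for one-boxing and “D” for two-boxing:)

```python
# Tim's modified payoffs, as (my move, clone's / Predictor's side) -> dollars.
NEWCOMB_PAYOFF = {
    ("C", "C"): 1_000_000,   # one-box while the opaque box is full
    ("C", "D"): 0,           # one-box while the opaque box is empty
    ("D", "C"): 1_010_000,   # two-box while the opaque box is full
    ("D", "D"): 10_000,      # two-box while the opaque box is empty
}
```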

Eve: “But hold on, doesn’t the logical dependency between my actions and my clone’s actions fail to carry over to Newcomb’s problem? Like, it’s not logically impossible that I two-box and the box has a million dollars in it, right?”

Tim: “It is with a perfect Predictor, yes! Remember, the Predictor works by creating a perfect simulation of you and seeing what it does. This means that your decision to one-box or to two-box is logically dependent on the Predictor’s prediction of what you do (and thus the contents of the box) in the exact same way that your decision to cooperate is logically dependent on your clone’s decision to cooperate!”

[Image: TDT diagram]

Claus: “Yes, I see. So with a perfect Predictor, there are really only two worlds to consider: one in which I one-box and get a million dollars, and another in which I two-box and get just $10,000. And of course I prefer the first, so I should one-box.”

Tim: “Exactly! And if the Predictor is not perfectly accurate, and is only right 99.999% of the time…”

Eve: “Well, then there’s still only a 0.001% chance that I two-box and get an extra million bucks. So, I’m still much better off if I one-box than if I two-box.”

Tim: “Yep! It sounds like we’re all on the same page then. There’s a logical dependence between your action and the contents of the box that you are rationally required to take into account, and when you do take it into account, you end up seeing that one-boxing is the rational action.”
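
(Putting numbers on that last exchange, using Tim’s dollar figures and a Predictor that is right 99.999% of the time:)

```python
# Expected payouts under a 99.999%-accurate Predictor, using the figures above.
p_right = 0.99999
eu_one_box = p_right * 1_000_000 + (1 - p_right) * 0
eu_two_box = p_right * 10_000 + (1 - p_right) * 1_010_000
print(eu_one_box, eu_two_box)   # ~$999,990 vs ~$10,010 -> one-box
```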

***

The decision theory that “Tim” is using is called timeless decision theory. Close relatives of it go by the names functional decision theory, logical decision theory, and updateless decision theory.

Timeless decision theory ends up better off in Newcomb-like problems, invariably walking away with 1 million dollars instead of $10,000. It also does better than evidential decision theory (Eve’s theory) and causal decision theory (Claus’s theory) at prisoner’s-dilemmas-with-a-clone. These are fairly contrived problems, and it’d be easy for Eve or Claus to just deny that these problems have any real-world application.

But timeless decision theorists also cooperate with each other in ordinary prisoner’s dilemmas. They have a much easier time with coordination problems in general. They do better in bargaining problems. And they can’t be blackmailed in a broad class of situations. It’s harder to write these results off as strange quirks that don’t relate to real life.
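
One way to get an intuition for why, as a toy sketch of my own rather than the actual formalism: if two agents can tell that they are running the very same decision procedure, each can ask “what should this procedure output?” instead of “what should I do, holding the other player fixed?”, which is just the clone logic from the dialogue.

```python
# A toy model (not the real TDT formalism): each player *is* a decision procedure,
# and a procedure that can tell it faces a copy of itself treats the two outputs
# as logically linked, exactly as in the clone dilemma.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def cdt_agent(opponent):
    return "D"   # if the opponent's move is held fixed, defection dominates

def tdt_like_agent(opponent):
    if opponent is tdt_like_agent:
        # Both outputs must match, so pick the better mirrored outcome.
        return "C" if PAYOFF[("C", "C")] > PAYOFF[("D", "D")] else "D"
    return "D"

def my_payoff(me, them):
    return PAYOFF[(me(them), them(me))]

print(my_payoff(cdt_agent, cdt_agent))             # 1: mutual defection
print(my_payoff(tdt_like_agent, tdt_like_agent))   # 3: mutual cooperation
```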

A society of TDT agents wouldn’t be plagued by doubts about the rationality of voting, wouldn’t find themselves stuck in as many sub-optimal Nash equilibria, and would look around and see far fewer civilizational inadequacies and much less low-hanging policy fruit than we currently do. This is what’s most interesting to me about TDT – that it gives a foundation for rational decision-making that seems like it has potential for solving real civilizational problems.