Cults, tribes, states, and markets

The general problem solved by Civilization is how to get a bunch of people with different goals, each partial to themselves, to live together in peace and build a happy society instead of all just killing each other. It’s easy to forget just how incredibly hard a problem this is. The lesson of game theory is that even two people whose interests don’t align can end up in shitty suboptimal Nash equilibria where they’re both worse off, each behaving in a way that is individually perfectly rational. Generalize this to twenty people, or a thousand people, or 300 million people, and you start to get a sense of how surprising it is that civilization exists on the scale that it does at all.
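The two-person case can be made concrete with a standard prisoner’s dilemma. This is a minimal sketch (the payoff numbers are invented for illustration): each player’s best response is to defect no matter what the other does, so mutual defection is the only Nash equilibrium, even though both players would prefer mutual cooperation.

```python
# Toy two-player game (a standard prisoner's dilemma; payoffs invented).
# Each entry is (row player's payoff, column player's payoff).
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
actions = ["cooperate", "defect"]

def best_response(opponent_action, player):
    """Return the action maximizing this player's payoff,
    holding the opponent's action fixed."""
    if player == 0:
        return max(actions, key=lambda a: payoffs[(a, opponent_action)][0])
    return max(actions, key=lambda a: payoffs[(opponent_action, a)][1])

def is_nash(profile):
    # A profile is a Nash equilibrium when each action is a best
    # response to the other player's action.
    a0, a1 = profile
    return best_response(a1, 0) == a0 and best_response(a0, 1) == a1

equilibria = [p for p in payoffs if is_nash(p)]
print(equilibria)  # only ("defect", "defect") -- worse for both than mutual cooperation
```

Both players reasoning perfectly rationally land on the (1, 1) outcome instead of the available (3, 3) one, which is exactly the suboptimality the paragraph above describes.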

Yes, history tells many thousands of tales of large-scale defection (civil wars, corruption, oppressive treatment of minority populations, outbreaks of violence and lawlessness, disputes over the line of succession) and the anarchic chaos that results, but it’s easy to imagine it being way, way worse. People are complex things with complex desires, and when you put that many people together, you should expect some serious failures. Hell, even a world of selfless altruists with shared goals would still have a tough time solving coordination problems of this size. Nobody thinks the average person is anywhere near that selfless, so what gives?

Part of the explanation comes from psychologists like Jonathan Haidt and Joshua Greene, who detail the process by which humans evolved a moral sense that involved things like tit-for-tat emotional responses and tribalistic impulses. This baseline level of desire to form cooperative equilibria with friends helps push the balance away from chaos towards civilization, but it can’t be the whole explanation. After all, history does not reveal a constant base-rate of cooperative capacity between different humans, but instead tells a story of increasingly large-scale and complex civilizations. We went from thousands of small tribes scattered across Africa and Asia, to chiefdoms of tens of thousands of individuals all working together, to vast empires that were home to millions of humans, and to today’s complex balance of global forces that make up a cooperative web that we are all part of. And we did this in the space of some ten thousand years.

This is not the type of timescale over which we can reasonably expect that evolution drastically reshaped our brains. Our moral instincts (love of kin, loyalty to friends, deference to authority, altruistic tendencies) can help us explain the cooperation we saw in 6000 B.C.E. in a tribe of a few hundred individuals. But they aren’t as helpful when we’re talking about the global network of cooperation, in which lawfulness is ensured by groups of individuals thousands of miles away, in which virtually every product that we rely on in our day-to-day life is the result of a global supply chain that brings together thousands of individuals that have never even seen each other, and in which a large and growing proportion of the world has safe access to hospitals and schools and other fruits of cooperation.

The explanation for this immense growth of humanity’s cooperative capacity is the development of institutions. As time passed, different bands of humans tried out different ways of structuring their social order. Some ways of structuring society worked better and lived on to the next generations of humans, who made further experiments in civilizational engineering. I think there is a lot to be learned by looking at the products of this millennia-long selection process for designing stable cooperative structures and seeing what happened to work best. In a previous post I described the TIMN theory of social evolution, which can be thought of as a categorization of the most successful organizational strategies that we’ve invented throughout history. The following categorization is inspired by this framing, but differs in many places.

The State: Cooperation is enforced by a central authority who can punish defectors. This central authority employs vast networks of hierarchically descending authority and systems of bureaucracy to be able to reach out across huge populations and keep individuals from defecting, even if they are nowhere near the actual people in charge. “State” is technically too narrow a term, as these types of structures are not limited to governments, but can include corporate governance by CEOs, religious organizations, and criminal organizations like the Medellín Cartel. Ronfeldt uses the term Institution for this instead, but that sounds too broad to me.

The Market: Cooperation is not enforced by anybody, but instead arises as a natural result of the self-interested behaviors of individuals that each stand to gain through an exchange of goods. Markets have some really nice properties that a structure like the State doesn’t have, such as natural tendencies for exchange rates to equilibrate towards those that maximize efficiency. They also are fantastically good at dealing with huge amounts of complex information that a single central authority would be unable to parse (for instance, a weather event occurs on one coast of the United States, affecting suppliers of certain products, who then adjust their prices to re-equilibrate, which then results in a cascade of changes in consumer behavior across other markets, which also then react, and eventually the “news” of the weather event has traveled to the other coast, adjusting prices so that the products are allocated efficiently). A beautiful feature of the Market structure is that you can get HUGE amounts of people to cooperate in order to produce incredibly innovative and valuable stuff, without this cooperation being explicitly enforced by threats of punishment for defecting. Of course, Markets also have numerous failings, and the nice properties I discussed only apply for certain types of goods (those that are excludable and rival). When the Market structure extends outside of this realm, you see catastrophic failures of organization, the scale of which pose genuine threats to the continued existence of human civilization.

The Tribe: Cooperation is achieved not through a central authority or through mutually beneficial exchange, but through strong kinship and friendship relations. Tribe-type structures spring up naturally all the time in extended families, groups of friends, or shared living situations. Strong loyalty intuitions and communitarian instincts can serve to functionally punish defectors through social exclusion from their tribe, giving it some immunity to invading defector strategies. But the primary mechanism through which cooperation is enforced is the part of our psychology that keeps us from lying to our friends or stealing from our partners, even when we think we can get away with it. The problem with this structure is that it scales really poorly. Our brains can only handle a few dozen real friendships at a time, and typically these relationships require regular contact to be maintained. Historically, this has meant that tribes can only survive for fairly small groups of people that are geographically close to each other, and this is pretty much the range of their effectiveness.

The Cult: The primary idea of this category is that cooperation does not arise from self-interested exchange or from punishment for defectors, but from shared sacred beliefs or values. These beliefs often shape their holders’ entire world-views and relate to intense feelings of meaning, purpose, reverence, and awe. They can be about political ideology, metaphysics, aesthetics, or anything else that carries with it sufficient value as to penetrate into and reshape a whole worldview. The world’s major religions are the most striking examples of this, having been one of the biggest shapers of human behavior throughout history. Different members of the same religion can pour countless hours into dedicated cooperative work, not because of any sense of kinship with one another, but because of a sense of shared purpose.

The Pope won’t throw you in jail if you stop going to church, and you don’t go to make an exchange of goods with your priest (except in some very metaphorical sense that I don’t find interesting). You go because you believe deeply in the importance of going. There are aspects of Science that remind me of the Cult structure, like the hours of unpaid and anonymous work that senior scientists put into reviewing the papers of their colleagues in the field in order to give guidance to journals, grant-funders, or the researchers themselves on the quality of the material. When I’ve asked why spend so much time on doing this when they are not getting paid or recognized for their work, the responses I’ve gotten make reference to the value of the peer-review process and the joy and importance of advancing the frontier of knowledge. This type of response clearly indicates the sense of Science as a Sacred Value that serves as a driving force in the behavior of many scientists.

A Cult is like a Tribe in many ways, but one that is not limited to small sizes. Cults can grow and become global behemoths, inspiring feelings of camaraderie between total strangers that have nothing in common besides shared worldview. While the term ‘Cult’ is typically derogatory, I don’t mean to use it in this sense here. Cults are incredibly powerful ways to get huge numbers of people to work together, despite there being no obvious reason why they should do so to anybody on the outside of their worldview. And not only do they inspire large-scale cooperative behavior, but they are powerful sources of meaning and purpose in our lives. This seems tremendously valuable and loaded with potential for developing a better future society. Think about the strength of something like Judaism, and how it persevered through thousands of years of repeated extermination attempts, diasporas, and religious factionalism, all the while maintaining a strong sense of Jewish identity and fervent religious belief. Taking the perspective of an alien visiting the planet, it might be baffling to try to understand why this set of beliefs didn’t die out long ago, and what constituted the glue holding the Jewish people together.

I think that the Cult structure is really undervalued in the circles I hang out in, which tend to focus on the irrationality that is often associated with a Cult. This irrationality seems natural enough; a Cult forms around a deeply held belief or set of beliefs, and strong identification with beliefs leads to dogmatism and denial of evidence. I wonder if you could have a “Cult of Rationality”, in which the “sacred beliefs” include explicit dedication to open-mindedness and non-dogmatic thinking, or if this would be in some sense self-defeating. There’s also the memetic aspect of this, which is that not just any idea is apt to become a sacred belief. It might be that the type of person that is deeply invested in rationality is exactly the type that would typically scoff at the idea of a Cult of Rationality, for instance.

Broad strokes: Tribes play on our loyalty and kinship intuitions. States play on our respect for authority. Markets play on our self-interest. And Cults play on our sense of reverence, awe, and sacredness.

Societal Failure Modes

(Nothing original, besides potentially this specific way of framing the concepts. This post started off short and ended up wayyy too long, and I don’t have the proper level of executive control to make myself shorten it significantly. So sorry, you’re stuck with this!)

Noam Chomsky in a recent interview said about the Republican Party:

I mean, has there ever been an organization in human history that is dedicated, with such commitment, to the destruction of organized human life on Earth? Not that I’m aware of. Is the Republican organization – I hesitate to call it a party – committed to that? Overwhelmingly. There isn’t even any question about it.

And later in the same interview:

… extermination of the species is very much an – very much an open question. I don’t want to say it’s solely the impact of the Republican Party – obviously, that’s false – but they certainly are in the lead in openly advocating and working for destruction of the human species.

In Chomsky’s mind, members of the Republican Party apparently sit in dark rooms scheming about how best to destroy all that is good and sacred.

I just watched the most recent Star Wars movie, and was struck by a sense of some relationship between the sentiment being expressed by Chomsky here and a statement made by Supreme Leader Snoke:

The seed of the Jedi Order lives. As long as he does, hope lives within the galaxy. I thought you would be the one to snuff it out.

There’s a really easy pattern of thought to fall into, which is something like “When things go wrong, it’s because of evil people doing evil things.”

It’s a really tempting idea. It diagnoses our societal problems as a simple “good guys vs bad guys” story – easy to understand and to convince others of. And it comes with an automatic solution, one that is very intuitive, simple, and highly self-gratifying: “Get rid of the bad guys, and just let us good guys make all the decisions!”

I think that the prevalence of this sort of story in the entertainment industry gives us some sort of evidence of its memetic power as a go-to explanation for problems. Think about how intensely the movie industry is optimizing for densely packed megadoses of gratifying storylines, visual feasts, appealing characters, and all the rest. The degree to which two and a half hours can be packed with constant intense emotional stimulation is fairly astounding.

Given this competitive market for appealing stories, it makes sense that we’d expect to gain some level of insight into the types of memes that we are most vulnerable to by looking at those types of stories and plot devices that appear over and over again. And this meme in particular, the theme of “social problems are caused by evil people,” is astonishingly universal across entertainment.


That this meme is wrong is the first of two big insights that I’ve been internalizing more and more in the past year. These are:

  1. When stuff goes wrong, or the world seems like it’s stuck in shitty and totally repairable ways, evil people are not the only explanation. In fact, they are often the least helpful one.
  2. Talking about the “motives” of an institution can be extremely useful. These motives can overpower the motives of the individuals that make up that institution, making them more or less irrelevant. In this way, we can end up with a description of institutions with weird desires and inclinations that are totally distinct from those of the people that make them up, and yet the institutions are in charge of what actually happens in the world.

Take the second insight first: this is a sense in which institutions can be very, very powerful. It’s not just the sense of powerful that means “able to implement lots of large-scale policies and cause lots of big changes”. It’s more like “able to override the desires of individuals within your range of influence, manipulating and bending them to your will.”

I was talking to my sister and her fiancé, both law students, about the US judicial system, and Supreme Court justices in particular. I wanted to understand what it is that really constrains the decisions of these highest judicial authorities; what are the forces that result in Justice Ginsburg writing the particular decision that she ends up writing.

What they ended up concluding is that there are essentially no such external forces.

Sure, there are ways in which Supreme Court justices can lose their jobs in principle, but this has never actually happened. And Congress can and does sometimes ignore Supreme Court decisions on statutory issues, but this doesn’t generally give the Justices reason to write their decisions any differently.

What guides Justice Ginsburg is what she believes is right – her ideology – and perhaps legacy. In other words, purely internal forces. I wanted to think of other people in positions that allow similar degrees of power in ability to enact social change, and failed.

The first sense of power as “able to cause lots of things to happen” really doesn’t align with the second sense of “free from external constraints on your decision-making”. An autocratic ruler might be plenty powerful in terms of ability to decide economic policy or assassinate journalists or wage war on neighboring states, but is highly constrained in his decisions by a tight incentive structure around what allows him to keep doing these things.

On the other hand, a Supreme Court justice could have total power to do whatever she personally desires, but never do anything remarkable or make any significant long-term impact on society.

The fact that this is so rare – that we could only think of a single example of a position like this – tells us about the way that powerful institutions are able to warp and override the individual motivations of the humans that compose them.

The rest of this post is on the first insight, about the idea that social problems are often not caused by evil people. There are two general things to say about evil people:

  1. I think that it’s often the case that “evil people” is a very surface-level explanation, able to capture some aspects of reality and roughly get at the problem, but not touching anywhere near the roots of the issue. One example of this may be when you ask people what the cause of the 2007 financial crisis was, and they go on about greedy bankers destroying America with their insatiable thirst for wealth.
    While they might be landing on some semblance of truth there, they are really missing a lot of important subtlety in terms of the incentive structures of financial institutions, and how they led the bankers to behave in the way that they did. They are also very naturally led to unproductive “solutions” to the problems – what do we do, ban greed? No more bankers? Chuck capitalism? (Viva la revolución?) If you try to explain things on the deeper level of the incentive structures that led to “greedy banker” behavior, then you stand a chance of actually understanding how to solve the root problem and prevent it from recurring.
  2. Appeals to “evil people” can only explain a small proportion of the problems that we actually see in the world. There are a massive number of ways in which groups of human beings, all good people not trying to cause destruction and chaos or extinguish the last lights of hope in the universe, can end up steering themselves into highly suboptimal and unfortunate states.

My main goal in this post is to try to taxonomize these different causes of civilizational failure.

Previously I gave a barebones taxonomy of some of the reasons that low-hanging policy fruits might be left unplucked. Here I want to give a more comprehensive list.


I think a useful way to frame these issues is in terms of Nash equilibria. The worst-case scenario is where there are Pareto improvements all around us, and yet none of these improvements correspond to worlds that are in a Nash equilibrium. These are cases where the prospect of improvement seems fairly hopeless without a significant restructuring of our institutions.

Slightly better scenarios are where we have improvements that do correspond to a world in a Nash equilibrium, but we just happen to be stuck in a worse Nash equilibrium. So to start with, we have:

  • The better world is not in a Nash equilibrium
  • The better world is in a Nash equilibrium

I think that failures of the first kind are very commonly made by bright-eyed idealists trying to imagine setting up their perfect societies.

These types of failures correspond to questions like “okay, so once you’ve set up your perfect world, how will you assure that it stays that way?” and can be spotted in plans that involve steps like “well, I’m just assuming that all the people in my world are kind enough to not follow their incentives down this obvious path to failure.”

Nash equilibria correspond to stable societal setups. Any societal setup that is not in a Nash equilibrium can fairly quickly be expected to degenerate into some actually stable societal setup.

The ways in which a given societal setup fails to be stable can be quite subtle and non-obvious, which I suspect is why this step is so often overlooked by reformers who think they see obvious ways to improve the world.

One of my favorite examples of this is the makeup problem. It starts with the following assumptions: (1) makeup makes people more attractive (which they want to be), and (2) an individual’s attractiveness is valued relative to the individuals around them.

Let’s now consider two societies, a makeup-free society and a makeup-ubiquitous society. In both societies, everybody’s relative attractiveness is the same, which means that nobody is better or worse off in one society than in the other on the basis of their attractiveness.

But the society in which everybody wears makeup is worse for everybody, because everybody has to spend a little bit of their money buying makeup. In other words, the makeup-free world represents a Pareto improvement over the makeup-ubiquitous world.

What’s worse: the makeup-free world is not in a Nash equilibrium, and the makeup-ubiquitous society is!

We can see this by imagining a society that starts makeup-free, and looking at the incentives of an individual within that society. This individual only stands to gain by wearing makeup, because she becomes more attractive relative to everybody else. So she buys makeup. Everybody else reasons the same way, so the makeup-free society quickly degenerates into its equilibrium version, the makeup-ubiquitous society.

Sure, she can see that if everybody reasoned this way, then she would be worse off (she would have spent her money and gained nothing from it). But this reasoning does not help her. Why? Because regardless of what everybody else does, she is still better off wearing makeup.

If nobody wears makeup, then her relative attractiveness rises if she wears makeup. And if everybody else wears makeup, then her relative attractiveness rises if she wears makeup. It’s just that it’s rising from a lower starting point.

So no matter what society we start in, we end up in the suboptimal makeup-ubiquitous society. (I have to point out here that this is assuming a standard causal decision theory framework, which I think is wrong. Timeless decision theory will object to this line of reasoning, and will be able to maintain a makeup-free equilibrium.)
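The incentive analysis above can be sketched in a few lines of code. The boost and cost numbers are invented; the point is only that wearing makeup dominates no matter what everybody else does, while the everybody-wears outcome is worse for each person than the nobody-wears outcome.

```python
# A minimal sketch of the makeup game (numbers invented for illustration).
# Attractiveness is purely positional: your payoff depends on how your
# attractiveness compares to the population average; makeup costs a little.

MAKEUP_BOOST = 1.0   # assumed attractiveness gain from wearing makeup
MAKEUP_COST  = 0.2   # assumed cost of buying makeup

def payoff(i_wear, others_wearing_fraction):
    """Payoff to one individual: relative attractiveness minus makeup cost."""
    my_attr = MAKEUP_BOOST if i_wear else 0.0
    avg_attr = MAKEUP_BOOST * others_wearing_fraction
    return (my_attr - avg_attr) - (MAKEUP_COST if i_wear else 0.0)

# No matter what fraction of everybody else wears makeup, wearing it
# is the individually better choice...
for frac in (0.0, 0.5, 1.0):
    assert payoff(True, frac) > payoff(False, frac)

# ...yet everybody-wears is worse for each person than nobody-wears:
print(payoff(True, 1.0), payoff(False, 0.0))  # -0.2 vs 0.0
```

Because the makeup payoff is higher at every value of `frac`, each individual defects regardless of expectations, which is what makes the makeup-ubiquitous society the stable equilibrium.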

We want to say “but just in this society assume that everybody is a good enough person to recognize the problem with makeup-wearing, and doesn’t do so!”

But that’s missing the entire point of civilization building – dealing with the fact that we will end up leaving non-Nash-equilibrium societal setups and degenerating in unexpected ways.

This failure mode arises because of the nature of positional goods, which are exactly what they sound like. In our example, attractiveness is a positional good, because your attractiveness is determined by looking at your position with respect to all other individuals (and yes this is a bit contrived and no I don’t think that attractiveness is purely positional, though I think that this is in part an actual problem).

To some degree, prices are also positional. If all prices and incomes fell proportionally tomorrow, then everybody would quickly end up with the same purchasing power as they had yesterday. And if everybody got an extra dollar to spend tomorrow, then prices would rise in response, the value of their money would decrease, and nobody would be better off (there are a lot of subtleties that make this not actually totally true, but let’s set that aside for the sake of simplicity).

Positional goods are just one example where we can naturally end up with our desired societies not being Nash equilibria.

The more general situation is just bad incentive structures, whereby individuals are incentivized to defect against a benevolent order, and society tosses and turns and settles at the nearest Nash equilibrium.

  • The better world is not a Nash equilibrium
    • Positional goods
    • Bad incentive structures
  • The better world is a Nash equilibrium


If the better world is in a Nash equilibrium, then we can actually imagine this world coming into being and not crumbling into a degenerate cousin-world. If a magical omniscient society-optimizing God stepped in and rearranged things, then they would likely stay that way, and we’d end up with a stable and happier world.

But there are a lot of reasons why all of us that are not magical society-optimizing Gods can do very little to make the changes that we desire. Said differently, there are many ways in which current Nash equilibria can do a great job of keeping us stuck in the existing system.

Three basic types of problems are (1) where the decision makers are not incentivized to implement this policy, (2) where valuable information fails to reach decision makers, and (3) where decision makers do have the right incentives and information, but fail because of coordination problems.

  • The better world is not a Nash equilibrium
    • Positional goods
    • Bad incentive structures
  • The better world is a Nash equilibrium
    • You can’t reach it because you’re stuck in a lesser Nash equilibrium.
      • Lack of incentives in decision makers
      • Asymmetric information
      • Coordination problems

Lack of incentives in decision makers can take many forms. The most famous of these occurs when policies result in externalities. This is essentially just where decision-makers do not absorb some of the consequences of a policy.

Negative externalities help to explain why behaviors that are net negative to society exist and continue (resulting in things like climate change and overfishing, for example), and positive externalities help to explain why some behaviors that would be net positive for society are not happening.
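A toy model of a negative externality, with made-up numbers: a factory that ignores the pollution cost it imposes on others produces more than the socially optimal amount, which is the basic mechanism behind things like overfishing.

```python
# Hedged sketch of a negative externality (all functions hypothetical).
# A factory chooses output q; each unit earns private profit but dumps
# a pollution cost on everyone else, which the factory doesn't pay.

def private_profit(q):
    return 10 * q - q ** 2   # assumed revenue minus the factory's own costs

def external_cost(q):
    return 4 * q             # assumed pollution cost borne by others

def social_welfare(q):
    return private_profit(q) - external_cost(q)

qs = range(0, 11)
q_private = max(qs, key=private_profit)   # what the factory chooses
q_social  = max(qs, key=social_welfare)   # what would be best for society
print(q_private, q_social)  # 5 3 -- the factory overproduces
```

The decision-maker's incentives and society's interests diverge by exactly the unabsorbed `external_cost` term; internalizing it (say, via a tax equal to the marginal external cost) would realign the two optima.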

An even worse case of misalignment of incentives would be where the positive consequences on society would be negative consequences on decision-makers, or vice versa. Our first-past-the-post voting system might be an example of this – abandoning FPTP would be great exactly because it allows us to remove the current set of decision-makers and replace them with a better set. This would be great for us, but not so great for them.

I’m not aware of a name for this class of scenarios, and will just call it ‘perverse incentives.’

I think that this is also where the traditional concept of “evil people” would lie – evil people are those whose incentives are dramatically misaligned. This could mean that they are apathetic towards societal improvements, but typically fiction’s common conception of villains is individuals actively trying to harm society.

Lack of liquidity is another potential source of absent incentives. This is where there are plenty of individuals that do have the right incentives, but there is not enough freedom for them to actually make significant changes.

An example of this could be if a bunch of individuals all had the same idea for a fantastic new app that would perform some missing social function, and all know how to make the app, but are barred by burdensome costs of actually entering the market and getting the app out there.

The app will not get developed and society will be worse off, as a result of the difficulty in converting good app ideas to cash.

  • Lack of incentives in decision makers
    • Misalignment of incentives
      • Externalities
      • Perverse incentives
        • Evil people
    • Lack of liquidity


Asymmetric information is a well-known phenomenon that can lead societies into ruts. The classic example of this is the lemons problem. There are versions of asymmetric information problems in the insurance market, the housing market, the health care market, and the charity market.

This deserves its own category because asymmetric information can bar progress, even when decision-makers have good incentives and important good policy ideas are out there.
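A quick simulation of the lemons-market unraveling, under stylized assumptions (car quality uniform on [0, 100]; a seller sells only if the price beats the car's quality; buyers value a car at 1.5x its quality but can only observe the average quality of cars actually on the market):

```python
# Toy version of Akerlof's lemons problem (parameters invented).
# Because only below-average cars stay on the market at any given price,
# the price buyers are willing to offer keeps falling.

price = 150.0  # start generously: 1.5 * the best possible quality
for _ in range(50):
    # Sellers with quality <= price stay in; with quality uniform on
    # [0, 100], the average quality on the market is min(price, 100) / 2.
    avg_quality_on_market = min(price, 100.0) / 2.0
    # Buyers offer 1.5x the average quality they expect to get.
    price = 1.5 * avg_quality_on_market

print(price)  # the market unravels toward zero
```

Each round, the good cars withdraw, the average quality of what remains falls, and the buyers' willingness to pay falls with it; the market collapses even though every buyer values every car above its seller's valuation.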

  • Lack of incentives in decision makers
    • Misalignment of incentives
      • Externalities
      • Perverse incentives
        • Evil people
    • Lack of liquidity
  • Asymmetric information

And of course, there are coordination problems. The makeup example given earlier is an example of a coordination problem – if everybody could successfully coordinate and avoid the temptation of makeup, then they’d all end up better off. But since each individual is incentivized to defect, the coordination attempts will break down.

Coordination problems generally occur when you have multi-step or multi-factor decision processes: that is, when the decision cannot be made unilaterally by a single individual, and must instead be a cooperative effort between groups of individuals operating under different incentive structures.

A nice clear example of this comes from Eliezer Yudkowsky, who imagines a hypothetical new site called Danslist, designed to be a competitor to Craigslist.

Danslist is better than Craigslist in every way, and everybody would prefer that it was the site in use. The problem is that Craigslist is older, so everybody is already on that site.

Buyers will only switch to Danslist if there are enough sellers there, and sellers will only switch to Danslist if there are enough buyers there. This makes the decision to switch to Danslist a decision that is dependent on two factors, the buyers and the sellers.

More generally, an N-factor market is one where N different incentive structures must interact for action to occur. The larger N is, the more difficult it is to make good decisions happen.
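The buyer/seller lock-in above can be sketched as a simple threshold model (the thresholds are invented): starting from everybody on the incumbent site, nobody ever switches, even though everybody-on-the-new-site is also a stable state.

```python
# Hedged sketch of the two-sided switching problem (thresholds invented).
# Buyers switch to the new site only if enough sellers are there, and
# sellers only if enough buyers are there; everybody starts on the incumbent.

BUYER_THRESHOLD  = 0.4  # fraction of sellers needed before buyers switch
SELLER_THRESHOLD = 0.4  # fraction of buyers needed before sellers switch

def step(buyers_on_new, sellers_on_new):
    """One round of best responses: each side switches iff the other
    side has already crossed its threshold."""
    next_buyers  = 1.0 if sellers_on_new >= BUYER_THRESHOLD else 0.0
    next_sellers = 1.0 if buyers_on_new >= SELLER_THRESHOLD else 0.0
    return next_buyers, next_sellers

# Starting from nobody on the new site, nobody ever moves:
state = (0.0, 0.0)
for _ in range(10):
    state = step(*state)
print(state)           # (0.0, 0.0) -- stuck on the incumbent

print(step(1.0, 1.0))  # (1.0, 1.0) -- the better equilibrium is stable too
```

Both all-on-Craigslist and all-on-Danslist are fixed points of `step`; the problem is purely that no individual side can profitably move first, which is what makes this a coordination failure rather than an incentive failure.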

This is really important, because when markets are stuck in this way, inefficiencies arise and people can profit off of the sub-optimality of the situation.

So Craigslist can charge more than Danslist, while offering a worse service, as long as this doesn’t provide sufficient incentive for enough people to switch over.

Yudkowsky also talks about Elsevier as an instance of this. Elsevier is a profiteer that captured several large and prestigious scientific journals and jacked up subscription prices. While researchers, universities, and readers could in principle just unanimously switch their publication patterns to non-Elsevier journals, this involves solving a fairly tough coordination problem. (It has happened a few times.)

One solution to coordination problems is an ability to credibly pre-commit. So if everybody in the makeup-ubiquitous world was able to sign a magical agreement that truly and completely credibly bound their future actions in a way that they couldn’t defect from, then they could end up in a better world.

When individuals cannot credibly pre-commit, coordination problems naturally result.

And finally, there are other weird reasons that are harder to categorize for why we end up stuck in bad Nash equilibria.

For instance, a system in which politicians respond to the wills of voters and are genuinely accountable to them seems like a system with a nicely aligned incentive structure.

But if for some reason the majority of the public resists policies that will actually improve their lives, or pushes policies that will hurt them, then this system will still end up in a failure mode. Perhaps this failure mode is not best expressed as a Nash equilibrium, as there is a sense in which voters do have the incentive to switch to a more sensible view, but I will express it as such regardless.

This looks to me like what is happening with popular opinion about minimum wage laws.

Huge numbers of people support minimum wage laws, including some who may actually lose their jobs as a result of those laws. While I'm aware that there isn't a strong consensus among economists as to the real effects of a moderate minimum-wage increase, it is striking to me that so many people are convinced it can only be a net positive for them, when there is plenty of evidence that it may not be.

Another instance of this is the idea of “wage stickiness”.

This is the idea that employers are more likely to fire their workers than to lower their wages, resulting in an artificial “stickiness” to the current wages. The proposed reason for why this is so is that worker morale is hurt more by decreased wages than by coworkers being fired.

Sticky wages are especially bad once you take inflation into account. If an economy has an inflation rate of 10%, then an employer who keeps her employees' wages constant is effectively cutting their real wages by roughly 10%. Even if she raises their wages by 5%, they're still losing purchasing power!

And if the economy enters a recession with, say, an inflation rate of -5%, then an employer will have to cut wages by 5% just to stay at the market equilibrium. But since wages are sticky and her workers won't realize that they aren't actually losing any purchasing power despite the wage cut, she will be more likely to fire workers instead.
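The arithmetic above can be sketched in a few lines (illustrative numbers only):

```python
# Real-wage arithmetic: what a nominal raise is worth after inflation.
def real_wage_change(nominal_raise: float, inflation: float) -> float:
    """Fractional change in purchasing power."""
    return (1 + nominal_raise) / (1 + inflation) - 1

print(real_wage_change(0.00, 0.10))    # flat wages, 10% inflation: ~ -9.1%
print(real_wage_change(0.05, 0.10))    # 5% raise, 10% inflation: ~ -4.5%
print(real_wage_change(-0.05, -0.05))  # 5% cut during 5% deflation: 0.0
```

The last line is the sticky-wage trap in miniature: a 5% nominal cut under 5% deflation leaves purchasing power exactly unchanged, but it doesn't feel that way to the worker.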

A friend described to me an interaction he had with a coworker at a manufacturing plant. My friend had recently been hired in the same position as this man, and was receiving the minimum wage of 5 dollars an hour.

His coworker was telling him about how he was being paid so much, because he had been working there so many years and was constantly getting pay raises. He was mortified when he compared wages with my friend, and found that they were receiving the exact same amount.

Status quo bias is another important effect to keep in mind here. Individuals are likely to favor the current status quo, for no reason besides that it is the status quo. This type of effect can add to political inertia and further entrench society in a suboptimal Nash equilibrium.

I’ll just lump all of these effects in as “Stupidity & cognitive biases.”


I want to close by adding a third category that I’ve been starting to suspect is more important than I previously realized. This is:

  • The better world is in a Nash equilibrium, and you can reach it, and you will reach it, just WAIT a little bit.

I add this because I sometimes forget that society is a massive complicated beast with enormous inertia behind its existing structure, and that just because some favored policy of yours has not yet been fully implemented everywhere, this does not mean that there is a deep underlying unsolvable problem.

So, for instance, one time I puzzled for a couple weeks about why, given the apparently low cost of ending global poverty forever, it still exists.

Aren’t there enough politicians who are aware of the low cost? And aren’t they sufficiently motivated to pick up the windfall of public support and goodwill that they would surely get? (To say nothing of massively improving the world.)

Then I watched Hans Rosling’s 2008 lecture “Don’t Panic” (which, by the way, should be required watching for everyone) and realized that global poverty is actually being ended, just slowly and gradually.

In 2000, the UN set the Millennium Development Goal of halving extreme poverty by 2015. That target was hit five years ahead of schedule, and the successor goal is to end extreme poverty entirely by 2030.

We’re on course to see the end of extreme poverty; it’ll just take a few more years. And after all, it should be expected that raising an entire segment of the world’s population above the poverty line will take some time.

So in this case, the answer to my question of “Why is this problem not being solved, if solutions exist?” was actually “Um, it is being solved, you’re just impatient.”

And earlier I wrote about overfishing and the ridiculously obvious solutions to the problem. I concluded by pessimistically noting that the fishing lobby has a significant influence over policy makers, which is why the problem cannot be solved.

While that premise is true, ITQ policies are in fact being adopted in more and more fisheries, the Northwest Atlantic cod stocks are being revived as a result of marine protection policies, and governments are making real improvements on this front.

This is a nice optimistic note to end on – the idea that not everything is a horrible unsolvable trap and that we can and do make real progress.


So we have:

  • The better world is not a Nash equilibrium
    • Positional goods
    • Bad incentive structures
  • The better world is a Nash equilibrium
    • You can’t reach it because you’re stuck in a lesser Nash equilibrium.
      • Lack of incentives in decision makers
        • Misalignment of incentives
          • Externalities
          • Perverse incentives
          • Lack of liquidity
      • Asymmetric information
      • Coordination problems
        • Multi-factor markets
        • Multi-step decision processes
        • Inability to pre-commit
      • Stupidity & cognitive biases
    • You can and will reach it, just be patient.

I don’t think that this overall layout is perfect, or completely encompasses all failure modes of society. But I suspect that it is along the right lines of how to think about these issues. I’ve had conversations where people will say things like “Society would be better if we just got rid of all money” or “If somebody could just remove all those darned Republicans from power, imagine how much everything would improve” or “If I was elected dictator-for-life, I could fix all the world’s problems.”

I think that people who think this way are often really missing the point. It’s dead easy to look at the world’s problems, find somebody or something to point at and blame, and proclaim that removing them will fix everything. But the majority of the work needed to actually improve society involves answering really hard questions like: “Am I sure that I haven’t overlooked some way in which my proposed policy degenerates into a suboptimal Nash equilibrium? What types of incentive structures naturally arise if I modify society in this way? How could somebody actually make this societal change from within the current system?”

That’s really the goal of this taxonomy: to give a sense of what the right questions to ask are.

(More & better reading along these same lines here and here.)

Opt-out organ donation

(Mostly interested in this for two reasons: (1) the research in cognitive science about default effects and other unintuitive cognitive biases and (2) the adequacy implications of the lack of implementation of this policy)

In the United States, around 95% of the population approves of organ donation, while only 54% have granted permission for their organs to be used after death. Surveys in the UK indicate that the percentage that approve organ donation is around 90%, but only 25% of the population is registered on the Organ Donation Registry. Many other countries have similar patterns.

When polled, the reasons people give for not explicitly registering for organ donation include laziness, confusion about the process, and unwillingness to think about death.

And it’s actually worse than this – many countries have ‘soft’ organ-donation policies, meaning that family members can override the wishes of the deceased. Families are more likely to veto the decision to donate than the decision to not donate, further decreasing the number of organs available for transplant.

And this number really, really matters. There are over 100,000 people in need of a life-saving organ transplant in the United States, and over seven thousand died last year while waiting – about 20 people every day. And in both the UK and the US, the gap between available organs and patients awaiting transplantation is only growing.


Psychologists have studied the effects of default options on expressed preferences. One experiment told subjects to imagine that they had just moved to a new state, and that they had to decide whether or not to be organ donors. Some subjects were told that the default was to be an organ donor, and that their choice was to confirm or change that status. Others were told the opposite – that the default was to not be an organ donor. The results were dramatic: roughly twice as many people ended up as donors when donation was the default as when it was not. The simple framing effect of “confirm the default or change?” had the power to cut organ donations in half.

The real-world equivalent of this is whether a country has an opt-in or opt-out organ donation system. The UK and the US have an opt-in system, which means that the default choice is to not be an organ donor. Other countries, like Austria, Belgium, Spain and Sweden, have an opt-out system.

This difference in policy produces huge differences in the percentage of the population that consents to organ donation. When Austria and Belgium changed from an opt-in to an opt-out system, donation rates more than doubled. When Singapore changed to opt-out, its donation rates more than sextupled. And comparisons between countries with different policies are similarly impressive: Germany and Austria, similar countries in many ways except for their donation policy, showed an almost 88-percentage-point difference in effective consent rates.
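A toy model makes the mechanism vivid (my own illustrative numbers, not the survey data above): if only a minority of people ever actively register a choice, whoever sets the default captures everyone else.

```python
# Toy model of default effects. A fraction of people actively choose;
# everyone else silently keeps whatever the default is.
def effective_consent(active_rate: float, pro_donation: float,
                      default_is_donor: bool) -> float:
    chosen = active_rate * pro_donation   # people who actively opt in
    passive = 1 - active_rate             # people who keep the default
    return chosen + (passive if default_is_donor else 0.0)

# Same population, same preferences -- only the default differs.
print(effective_consent(0.3, 0.9, default_is_donor=False))  # opt-in:  0.27
print(effective_consent(0.3, 0.9, default_is_donor=True))   # opt-out: 0.97
```

With identical underlying preferences, the gap between 27% and 97% effective consent comes entirely from which way the checkbox points.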

Consider for a moment how strange this is. In the United States, all it takes to become an organ donor is checking a box when registering for a driver’s license at the DMV. Can it really be that a simple difference in whether the box means “become an organ donor” or “stop being an organ donor” is preventing millions of people from becoming organ donors? Classical economics would certainly not predict this: if somebody has a preference about whether or not to be an organ donor, a tiny difference in framing shouldn’t have such huge effects on their behavior.

But apparently the answer is that yes, these tiny differences do matter. And our strange little human quirks can be hugely important in deciding on how to make effective policy.


Ultimately, we are left with an adequacy question. Opt-out organ donation policies seem to me like low-hanging policy fruit. If policy-makers care to eliminate thousands of needless deaths, and are aware of these policies, then why aren’t they already implemented in the US and the UK?

Low-hanging policy fruit

(Note: none of this is original, just a repackaging of others’ ideas)

A question of great practical importance is “What can I do to improve the world?”

In this post I want to talk about a different but related question: How confident should I be that my ideas for how to improve the world are actually good ideas?

The cynic says something like: “Don’t be naive. The real world is complicated, and there’s almost surely some complex reason that you don’t understand that would make your idea fail spectacularly upon attempted implementation. Besides, there are millions of people out there that are smarter and more knowledgeable than you, and some of them have most likely already thought of your idea. Maybe, if you’re really lucky or really really bright, you might have one or two truly original and not terrible ideas in your life, but I wouldn’t bet on it.”

I don’t want to straw man this perspective, because I think it is right in some really important ways. There is a sense in which perceived low-hanging policy fruit is similar to a perceived $100 bill lying in the middle of the sidewalk – if it weren’t some kind of trick, you’d better believe that somebody would have picked it up by now.

And yet…


Overfishing removes tens of billions of dollars from global GDP every year. It permanently destroys fish stocks, the livelihoods of fishermen, and seaside communities. And, well… we’ve known how to solve this problem for decades. It’s a classic tragedy of the commons. The standard solutions that you’ll find in an introductory economics textbook are: privatize the common resource, regulate the market through legally enforceable agreements, or tax/subsidize the market to incentivize sustainable fishing.

These are not just good in theory – they actually work. Catch share programs like an individual transferable quota (ITQ) are clever combinations of privatization and regulation – there is a legally enforced fishing quota and individual fishermen own percentages of this quota. These policies have been tried in about a hundred fisheries, and when they are tried they not only stop the trend of overfishing but even reverse it.
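Mechanically, an ITQ is simple enough to sketch in a few lines (boat names and numbers here are hypothetical): the regulator sets a total allowable catch for the season, and each owner's quota is their share of it.

```python
# Sketch of an individual transferable quota (ITQ) allocation.
# A total allowable catch (TAC) is set each season; each owner holds
# a fixed, tradeable percentage of it.
def individual_quotas(tac_tonnes: float, shares: dict) -> dict:
    assert abs(sum(shares.values()) - 1.0) < 1e-9, "shares must sum to 100%"
    return {owner: tac_tonnes * share for owner, share in shares.items()}

quotas = individual_quotas(10_000, {"boat_a": 0.5, "boat_b": 0.3, "boat_c": 0.2})
print(quotas)  # each boat's tonnage cap for the season
```

Because the shares are percentages of the TAC rather than fixed tonnages, the regulator can shrink or grow the whole fishery's catch in one decision while each owner's relative stake is preserved.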

Regulation through marine protection programs that temporarily halt activity in heavily fished areas to let them recover could save up to $920 billion of otherwise lost value by 2050. And in 2010, researchers studying subsidies to fisheries found that “the single action of eliminating fuel subsidies could potentially be the most influential factor in stemming the trend of overfishing”.

All of these solutions are perfectly obvious and commonsensical. Want to end overfishing? Stop subsidizing the overfishers and tax them instead, enforce sustainable fishing practices, and protect overfished areas. So if you thought that by applying some basic economics and common sense, you could do better than most of the world’s governments and fisheries over the last century, you would be completely correct.

But then we come back to our $100 bill thought experiment and the cynical argument. Surely the world consists of people that are plenty incentivized to save marine ecosystems, bring in billions of dollars to the country’s economy, and save the livelihoods of fishermen. And surely some of those people know about the policies I’ve just described and have the power to implement them. But overfishing continues as ever, destroying fish populations and draining money from the economy. So what gives?


It’s not that the cynic is wrong, it’s just that there is more to be said.

I want to develop the $100 bill analogy some more. When, in fact, should we expect that you could successfully discover a real $100 bill lying on the sidewalk? Here are some questions whose answers would be important to know:

  1. Are there other people that can see the $100 bill?
  2. Do they realize what it is (that is, money)?
  3. Do they want money?
  4. Could they take the $100 bill?

If the answer is “yes” to all four questions, then the bill is probably a realistic sidewalk painting, or a hallucination, or a prank bill on a fishing line held by some impish teenagers in the nearby bushes. If others can see the $100 bill, know what it is, want it, and are capable of taking it, then it’s probably going to be picked up very quickly.

On the other hand, if the answer is “no” to any of the questions, then the bill is likely real and soon to be yours. All you need is one break in the chain of conditions for the conclusion to not obtain. So, for instance, if everybody is blind, it’s not too surprising that the bill is lying there. Similarly for a society in which nobody has ever seen paper money, or the people are all ascetics, or they are all incapable of bending over to pick it up.


Let’s bring this back to our starting question. Say that you’ve thought of an apparently brilliant policy P that solves an important issue I. When should we expect that P is actually a solution to I? Here are the analogous four questions you should ask yourself:

  1. Could other people have thought of P?
  2. Would they be able to tell if it were a solution to I?
  3. Do they want to solve I?
  4. Could they implement P?

We can call this our taxonomy of inadequacy, if we feel fancy. If all of these questions are answered in the affirmative, then we should expect that the policy would have already been implemented if it were actually a solution to I.

At the risk of being redundant, here’s an image:

[Image: Venn diagram of the four conditions above]

The intersection of these four circles is the set of people that you’d expect to have implemented P, if it were actually a good solution to I. The larger this set is, the more suspicious you should be of your idea.
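The checklist can be written out as a tiny function (a sketch of the taxonomy, not a rigorous model): an affirmative answer to all four questions is what should make you suspicious, and the first "no" in the chain tells you where the inadequacy lives.

```python
# The four-question "taxonomy of inadequacy" as a function.
def assess_policy(could_think_of_it: bool, could_verify_it: bool,
                  want_to_solve: bool, could_implement: bool) -> str:
    answers = [could_think_of_it, could_verify_it, want_to_solve, could_implement]
    if all(answers):
        return "suspicious: someone should have implemented this already"
    # The first broken link in the chain explains the inadequacy.
    return "plausible: the chain breaks at question %d" % (answers.index(False) + 1)

print(assess_policy(True, True, True, False))
# -> "plausible: the chain breaks at question 4"
```

Run against the overfishing case discussed below (yes, yes, yes, no), it points at question 4: the people who want the fix can't implement it.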


So let’s apply this!

Why is overfishing not solved? Probably because of #4.

People in positions of influence know how to stop overfishing and some would even like to do so. But they have to worry about the influence of the fishing lobby, as well as their approval ratings among the coastal communities that would be temporarily disadvantaged by policies like fishing quotas, marine protected areas, and higher taxes. Sure it’d be better for everybody in the long run, but voters have a hard time accepting short-term losses for long-term gains. So although we can all see ridiculously low-hanging policy fruit, and enough of us care about solving the problem, nobody in power is actually able to implement these policies due to the nature of the system they exist in.

Another example! Everybody agrees that first-past-the-post (FPTP) voting is about the worst voting system out there. It encourages gerrymandering, dooms third parties, and forces smart voters to vote against their preferences. And we know of saner voting systems! So why are we still stuck with this horrible system?

Well, those that are currently in power are exactly those that have benefited from FPTP. And if a third party came into being that wanted to change the voting system… well FPTP dooms third parties. So we’re stuck. In terms of our taxonomy of inadequacy, this is a combination of #3 and #4 – those that are in power don’t want to change the voting system, and those that want to change the voting system are unable to get in power.
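The way FPTP dooms third parties can be shown with a toy election (made-up preference orders): a candidate that a majority ranks dead last can still win the plurality count when the majority's vote is split.

```python
# Minimal spoiler-effect demo under first-past-the-post (FPTP).
# 100 hypothetical voters with full preference rankings (best first).
from collections import Counter

ballots = (
    [["left", "centre", "right"]] * 35
    + [["centre", "left", "right"]] * 25
    + [["right", "centre", "left"]] * 40
)

# FPTP counts only each voter's first choice.
fptp_winner = Counter(b[0] for b in ballots).most_common(1)[0][0]
print(fptp_winner)  # "right" wins with 40%, though 60% of voters rank it last
```

In this toy electorate, "centre" would beat either rival head-to-head, but FPTP never looks past the first choice, which is exactly why smart voters end up abandoning their true favourite.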


What I like about this way of thinking is that you can start with “Hey, this sounds like a good way to solve problem X!” and end up understanding the deep structure of our society, seeing the way that this gigantic beast we call civilization functions and the inadequacies that result.

There’s a lot more to be said about this, but I will leave it for future posts.