In defense of collateralized debt obligations (CDOs)

If you’ve watched some of the popular movies out there about the 2008 financial crisis, chances are that you’ve been misled about one or two things. (I’m looking at you, Big Short.) For example:

Entertaining? Sure! Accurate? Not at all. The analogy is well off the mark, as you’ll see in a minute.

Here’s a quote from Inside Job, often described as the most rigorous and well-researched of the popular movies on the crisis:

In the early 2000s, there was a huge increase in the riskiest loans, called subprime. But when thousands of subprime loans were combined to create CDOs, many of them still received AAA ratings.

This is stated in a tone of disbelief at the idea that combining subprime loans could create extremely safe assets. And maybe this idea does sound pretty crazy if you haven’t studied much finance! But it’s actually correct. You can, by combining subprime loans, generate enormously safe investments, and thus the central conceit of a CDO is entirely feasible.

The overall attitude taken by many of these movies is that the financial industry in the early 2000s devoted itself to scamming investors for short-term profit through the creation of complicated financial instruments like CDOs. As these movies describe it, the premise of a CDO is that by combining a bunch of risky loans and slicing-and-dicing them a bit, you can produce a mixture of new investment opportunities, including many that are extremely safe. All of this is described in a tone meant to convey that the premise is self-evidently absurd.

I want to convince you that the premise of CDOs is not self-evidently absurd, and that in fact it is totally possible to pool risky mortgages to generate extremely safe investments.

So, why think that it should be possible to pool risky investments and decrease overall risk? Well first of all, that’s just what happens when you pool assets! Risk always decreases when you pool assets, with the only exception being the case where the assets are all perfectly correlated (which never happens in real life anyway).

As an example, imagine that we have two independent and identical bets, each giving a 90% chance of a $1000 return and a 10% chance of nothing.

[Figure: the two original bets and their payouts]

Now put these two together, and split the pool into two new bets, each an average of the original two:

[Figure: the pooled bets, each an average of the original two]

Take a look at what we’ve obtained. Now we have only a 1% chance of getting nothing (because both bets have to fail for this to happen). We do, however, have only an 81% chance of getting $1000, as opposed to the 90% we had earlier. But what about risk? Are we taking on more or less risk than before?

The usual way of measuring risk is to look at standard deviations. So what are the standard deviations of the original bet and the new one?

Initial Bet
Mean = 90% ($1000) + 10% ($0) = $900
Variance = 90% (100^2) + 10% (900^2) = 90,000
Standard deviation = $300

New Bet
Mean = 81% ($1000) + 18% ($500) + 1% ($0) = $900
Variance = 81% (100^2) + 18% (400^2) + 1% (900^2) = 45,000
Standard deviation = $212.13
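These figures are easy to double-check in a few lines of Python, using nothing but the probabilities and payouts above (note that √45000 ≈ 212.13):

```python
# Mean, variance, and standard deviation of the single bet
# versus the pooled (averaged) pair of bets.
from math import sqrt

def stats(outcomes):
    """outcomes: list of (probability, payout) pairs."""
    mean = sum(p * x for p, x in outcomes)
    var = sum(p * (x - mean) ** 2 for p, x in outcomes)
    return mean, var, sqrt(var)

# A single bet: 90% chance of $1000, 10% chance of $0.
single = [(0.90, 1000), (0.10, 0)]

# Average of two independent such bets:
# both pay (81%) -> $1000, one pays (18%) -> $500, neither (1%) -> $0.
pooled = [(0.81, 1000), (0.18, 500), (0.01, 0)]

for name, bet in [("single", single), ("pooled", pooled)]:
    mean, var, sd = stats(bet)
    print(f"{name}: mean=${mean:.0f}, variance={var:.0f}, sd=${sd:.2f}")
```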

And look at what we see: risk has dropped, and fairly dramatically so, just by pooling independent bets! This concept is one of the core lessons of financial theory, and it goes by the name of diversification. The more bets we pool, the further the risk goes down, and in the limit of infinite independent bets, the risk goes to zero.

[Figures: payout distributions for 5, 10, 20, 100, and 200 pooled bets]
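This scaling can be checked exactly: if N independent bets are pooled and split evenly, each share pays K·$1000/N where K is the binomial count of bets that pay out, and the standard deviation works out to $300/√N. A short script, using the same pool sizes as above:

```python
# Risk of the evenly-split pool shrinks like 1/sqrt(N) as more
# independent bets are added.
from math import comb, sqrt

def pooled_sd(n, p=0.9, payout=1000):
    """Exact sd of the per-share payout when n independent bets
    (each paying `payout` with probability p) are pooled and split evenly."""
    mean = p * payout
    var = sum(
        comb(n, k) * p**k * (1 - p)**(n - k) * (k * payout / n - mean) ** 2
        for k in range(n + 1)
    )
    return sqrt(var)

for n in [1, 2, 5, 10, 20, 100, 200]:
    print(f"N={n:4d}: sd=${pooled_sd(n):7.2f}  (300/sqrt(N) = ${300/sqrt(n):7.2f})")
```

The printed columns agree, confirming that in the limit of infinitely many independent bets the risk goes to zero.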

So if you’re watching a movie and somebody talks about combining risky assets to produce safe assets as if the idea were self-evidently absurd, you know that they don’t understand basic financial concepts, let alone a complex financial instrument like a CDO.

In fact, let’s move on to CDOs now. The setup I described above of simply pooling bets does decrease risk, but it’s not how CDOs work. At least, not entirely. CDOs still take advantage of diversification, but they also incorporate an additional clever trick to distribute risk.

The idea is that you take a pool of risky assets and create from it a new pool of non-identical assets with a spectrum of risk profiles. Previously, all of the assets we generated were identical to each other; with a CDO, we split the pool into non-identical assets that allocate the risk unevenly, so that some of the new assets carry very high risk (more of the risk is allocated to them) and some carry very little.

Alright, so that’s the idea: from a pool of equally risky assets, you can get a new pool of assets that have some variation in riskiness. Some of them are actually very safe, and some of them are very, very risky.  How do we do this? Well let’s go back to our starting example where we had two identical bets, each with 90% chance of paying out $1000, and put them together in a pool. But this time, instead of creating two new identical bets, we are going to differentiate the two bets by placing an order priority of payout on them. In other words, one bet will be called the “senior tranche”, and will be paid first. And the other bet will be called the “junior tranche”, and will be paid only if there is still money left over after the senior tranche has been paid. What do the payouts for these two new bets look like?

[Figure: payouts of the senior and junior tranches]

The senior tranche gets paid as long as at least one of the two bets pays out, which happens with 99% probability. Remember, we started with only a 90% probability of paying out. This is a dramatic change! In terms of standard deviation, this is $99.49, less than a third of what we started with!

And what about the junior tranche? Its probability of getting paid is just the probability that neither of the two bets fails, which is 81%. And its risk has gone up, with a standard deviation of $392.30. So essentially, all we’ve done is split up our risk. We originally had 90%/90%, and now we have 99%/81%. In the process, we’ve created a very, very safe bet and a very, very risky bet.

Standard Deviations
Original: $300
Simple pooling: $212.13
CDO senior tranche: $99.49
CDO junior tranche: $392.30
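The tranche figures in this table can be re-derived from the two payout distributions (senior: $1000 with 99% probability; junior: $1000 with 81%), matching the numbers above to within rounding:

```python
# Standard deviations of the senior and junior tranches of the
# two-bet pool.
from math import sqrt

def sd(outcomes):
    """Standard deviation of a bet given (probability, payout) pairs."""
    mean = sum(p * x for p, x in outcomes)
    return sqrt(sum(p * (x - mean) ** 2 for p, x in outcomes))

senior = [(0.99, 1000), (0.01, 0)]   # paid unless both bets fail
junior = [(0.81, 1000), (0.19, 0)]   # paid only if both bets pay

print(f"senior tranche sd: ${sd(senior):.2f}")
print(f"junior tranche sd: ${sd(junior):.2f}")
```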

The important thing is that both of these bets have to be sold. You can’t just sell the senior tranche to people who want safe assets (pension funds); you have to also sell the junior tranche. So how do you do that? Well, you just lower its price! A higher rate of return awaits the taker of the junior tranche in exchange for taking on more risk.

Now if you think about it, this new lower level of risk we’ve obtained (a 1% chance of default, built out of two bets that each had a 10% chance of default) is a real thing! If the bets are independent, there really is only a 1% chance that both default, and so the senior tranche really can expect to get paid 99% of the time. There isn’t a lie or a con here: a pension fund that gets sold the senior tranche of such a CDO is actually getting a safe bet. It’s just a clever way of divvying up risk between two assets.

I think the idea of a CDO is cool enough by itself, but the especially cool thing about CDOs is that they open up the market to new customers. Previously, if you wanted to get a mortgage, you basically had to find a bank that was willing to accept your level of risk, whatever it happened to be. If you were too high-risk, nobody would give you a mortgage, and you’d just be out of luck. Even prior to CDOs, when there was mortgage pooling but no payment priority, you had to find investors interested in the risk level of your pool as a whole. The novelty of CDOs is in allowing you to alter the risk profile of your pool of mortgages at will.

Let’s say that you have 100 risky loans, and there’s only enough demand for you to sell 50 of them. What you can do is create a CDO that repackages them into 50 safe securities and 50 risky ones. Now you get to not only sell your risky securities, but also sell the safe ones to interested customers like pension funds! This is the primary benefit of the new financial technology of CDOs: it allows banks to generate tailor-made risk levels for the investors that are interested in buying, so that they can sell more mortgage-backed securities and get more people homes. And if everything is done exactly as I described it, then everything should work out fine.
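As a rough sketch of why the safe half of such a pool can be so safe, suppose (purely for illustration; the 90% repayment probability and the independence of the loans are assumptions, not data) that each of the 100 loans pays $1 with 90% probability, and that the 50 safe securities are collectively owed the first $50 of the pool’s payments. The chance they are paid in full is then a binomial tail probability:

```python
# Assumed setup: 100 independent loans, each paying $1 with
# probability 0.9. The "safe half" is paid in full whenever at
# least 50 of the 100 loans pay.
from math import comb

def prob_at_least(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p_safe_half = prob_at_least(100, 50, 0.9)
print(f"P(safe half paid in full) = {p_safe_half:.12f}")
```

Under these assumed numbers the safe half is essentially certain to be paid in full, which is exactly the sense in which tranching manufactures safety out of a risky pool.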

But of course, things weren’t done exactly as I described them. The risk levels of individual mortgages were given increasingly optimistic ratings with stated-income loans, no-down-payment loans, and no-income no-asset loans. CDOs were complex and their risk level was often difficult to assess, resulting in less transparency and more ability for banks to over-report their safety. And crucially, the different tranches of any given CDO are highly dependent on each other, even after they’ve been sold to investors that have nothing to do with each other.

Let’s go back to our simple example of the two $1000 bets for illustration. Suppose that one of the two bets doesn’t pay out (which could correspond to one home-owner defaulting on their monthly payment). Now the senior tranche owner’s payment is entirely dependent on how the other bet performs. The senior tranche owner will get $1000 only if that remaining bet pays out, which happens with just 90% probability. So his chance of getting $1000 has dropped from 99% to 90%.

[Figure: the senior tranche’s payout after one of the two bets has failed]

That’s a simple example of a more general point: that in a CDO, once the riskier tranches fail, the originally safe tranches suddenly become a lot riskier (so that what was originally AA is now maybe BBB). This helps to explain why once the housing bubble had popped, all levels of CDOs began losing value, not just the junior levels. Ordinary mortgage-backed securities don’t behave this way! A AA-rated mortgage is rated that way because of some actual underlying fact about the reliability of the homeowner, which doesn’t necessarily change when less reliable homeowners start defaulting. A AA-rated CDO tranche might be rated that way entirely because it has payment priority, even though all the mortgages in its pool are risky.

Another way to say this: an ordinary mortgage-backed security decreases risk just because of diversification (many mortgages pooled together make for a less risky bet than a single mortgage). But a CDO gets decreased risk from both diversification and, in the upper tranches, order priority (getting paid first). In both cases, as some of the mortgages in the pool fail, you lose some of the diversification benefit. But in the CDO case, you also lose the order-priority benefit in the upper tranches (because, for example, if it takes 75 defaults in your pool for you to lose your money and 50 have already happened, then you are at much higher risk of losing your money than if none had failed). Thus there is more loss of value in safe CDO tranches than in safe MBSs as default rates rise.
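The parenthetical example (75 defaults wipe out the senior tranche, 50 already realized) can be checked directly under an assumed model of 100 independent loans with a 10% default probability each; these numbers are for illustration, not a claim about real pools:

```python
# Assumed setup: 100 independent loans, each defaulting with
# probability 0.1; the senior tranche loses money only if 75 or
# more loans default.
from math import comb

def prob_at_least(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Unconditional: need >= 75 defaults out of 100.
p_fresh = prob_at_least(100, 75, 0.1)

# After 50 defaults have already happened: need >= 25 more
# defaults among the remaining 50 loans.
p_after_50 = prob_at_least(50, 25, 0.1)

print(f"P(senior wiped out), no defaults yet : {p_fresh:.3e}")
print(f"P(senior wiped out), 50 defaults in  : {p_after_50:.3e}")
```

Both probabilities are tiny under independence, but the conditional one is larger by many orders of magnitude, which is the sense in which a senior tranche’s risk balloons as defaults accumulate.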

Six Case Studies in Consequentialist Reasoning

Consequentialism is a family of moral theories that say that an act is moral or immoral based on its consequences. If an act has overall good consequences then it is moral, and if it has bad consequences then it is immoral. What precisely counts as a “good” or “bad” consequence is what distinguishes one consequentialist theory from another. For instance, act utilitarians say that the only morally relevant feature of the consequences of our actions is the aggregate happiness and suffering produced, while preference utilitarians say that the relevant feature of the consequences is the number and strength of desires satisfied. Another form of consequentialism might strike a balance between aggregate happiness and social equality.

What all these different consequentialist theories have in common is that the ultimate criterion used to evaluate the moral status of an action is a function only of the consequences of that action, as opposed to, say, the intentions behind the action, or whether the action is an instance of a universalizable Kantian rule.

In this essay, we’ll explore some puzzles in consequentialist theories that force us to take a more nuanced and subtle view of consequentialism. These puzzles are all adapted from Derek Parfit’s Reasons and Persons, with very minor changes.

First, we’ll consider a simple puzzle regarding how exactly to evaluate the consequences of one’s actions, when one is part of a collective that jointly accomplishes some good.

Case 1: There are 100 miners stuck in a mineshaft with flood waters rising. These men can be brought to the surface in a lift raised by weights on long levers. The leverage is such that just four people can stand on a platform and provide sufficient weight to raise the lift and save the lives of the hundred men. But if any fewer than four people stand on the platform, it will not be enough to raise the lift. As it happens, you and three other people are standing there. The four of you stand on the platform, raising the lift and saving the lives of the hundred men.

The question for us to consider is, how many lives did you save by standing on the platform? The answer to this question matters, because to be a good consequentialist, each individual needs to be able to compare their contribution here to the contribution they might make by going elsewhere. As a first thought, we might say that you saved 100 lives by standing on the platform. But the other three people were in the same position as you, and it seems a little strange to say that all four of you saved 100 lives each (since there weren’t 400 lives saved total). So perhaps we want to say that each of you saved one quarter of the total: 25 lives each.

Parfit calls this the Share-of-the-Total View. We can characterize this view as saying that in general, if you are part of a collective of N people who jointly save M lives, then your share of lives saved is M/N.

There are some big problems with this view. To see this, let’s amend Case 1 slightly by adding an opportunity cost.

Case 2: Just as before, there are 100 miners stuck in a mineshaft with flood waters rising, and they can be saved by four or more people standing on a platform. This time though, you and four other people happen to be standing there. The other four are going to stand on the platform no matter what you do. Your choice is either to stand on the platform, or to go elsewhere to save 10 lives. What should you do?

The correct answer here is obviously that you should leave to save the 10 lives. The 100 miners will be saved whether you stay or leave, and the 10 lives will be lost if you stick around. But let’s consider what the Share-of-the-Total View says. According to this view, if you stand on the platform, your share of the lives saved is 100/5 = 20. And if you leave to go elsewhere, you only save 10 lives. So you save more lives by staying and standing on the platform!

This is a reductio of the Share-of-the-Total View, and we must revise it to get a sensible consequentialist theory. Parfit’s suggestion is that when you join others who are doing good, the good that you do is not just your own share of the total benefit; you should also add the change that your joining causes in everybody else’s shares. On their own, the four would each have a share of 25 lives. By joining, you get a share of 20 lives, but you also reduce each of the other four shares by 5 lives. So by joining, you have saved 20 – 5(4) = 0 lives. And of course, this is the right answer, because you have accomplished nothing at all by stepping onto the platform!

Applying our revised view to Case 1, we see that if you hadn’t stepped onto the platform, zero lives would have been saved; by stepping on, 100 lives are saved. So your contribution is your own share of 25, plus the 25 lives you added to each of the other three shares (which would have been zero without you). Your share is actually 100 lives! The same applies to the others, so on our revised view, each of the four is responsible for saving all 100 lives. Perhaps on reflection this is not so unintuitive; after all, it’s true of each of them that had they acted differently, 100 lives would have been lost.

Case 3: Just as in Case 2, there are 100 miners stuck in a mineshaft. You and four others are standing on the platform while the miners are slowly being raised up. Each of you knows of an opportunity to save 10 lives elsewhere (a different 10 lives for each of you), but to successfully save the lives you have to leave immediately, before the miners are rescued. The five of you have to make your decision right away, without communicating with each other.

We might think that if each of the five reasons as before, all of them will go off and save the other 10 lives (since by staying, they see themselves as saving zero lives). In the end, 50 lives will be saved and 100 lost. This is not good! But in fact, it’s not totally clear that this is the fault of our revised view. The problem here is lack of information. If each of the five knew what the other four planned to do, they would make the best decision (if the other four all planned to stay, the fifth would leave; if any of the other four planned to leave, the fifth would stay). As things stand, perhaps the best outcome is that all five stay on the platform (losing the opportunity to save 10 extra lives, but ensuring the safety of the 100). If they can use a randomized strategy, the optimal strategy is for each to stay on the platform with probability 97.2848%, saving an expected 100.66 lives.
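These last two numbers can be verified numerically. If each of the five stays with probability q, the expected number of lives saved is 100·P(at least four stay) + 10·(expected number who leave), and we can maximize over q:

```python
# Optimal symmetric randomized strategy for Case 3: five people each
# stay with probability q; the 100 miners are saved if at least four
# stay, and each person who leaves saves 10 lives elsewhere.
from math import comb

def expected_lives(q, n=5, need=4, miners=100, side=10):
    p_rescue = sum(comb(n, k) * q**k * (1 - q)**(n - k)
                   for k in range(need, n + 1))
    return miners * p_rescue + side * n * (1 - q)

# The derivative of expected_lives simplifies to 2000*q^3*(1-q) - 50,
# which is positive at q=0.9 and negative near q=1, so bisect for its root.
def deriv(q):
    return 2000 * q**3 * (1 - q) - 50

lo, hi = 0.9, 0.999
for _ in range(60):
    mid = (lo + hi) / 2
    if deriv(mid) > 0:
        lo = mid
    else:
        hi = mid

q_opt = (lo + hi) / 2
print(f"optimal stay probability: {q_opt:.4%}")
print(f"expected lives saved:     {expected_lives(q_opt):.2f}")
```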

[Figure: Miners Consequentialism]

Let’s move on to another type of scenario.

Case 4: X and Y simultaneously shoot and kill me. Either shot, by itself, would have killed.

The consequence of X’s action is not that I die, because if X had not shot, I would have died by Y’s bullet. And the same goes for Y. So if we’re evaluating the morality of X or Y’s action based on its consequences, it seems that we have to say that neither one did anything immoral. But of course, the two of them collectively did do something immoral by killing me. What this tells us is that the consequentialist’s creed cannot be “an act is immoral if its consequences are bad”, as an act can also be immoral if it is part of a set of acts whose collective consequences are bad.

Inheriting immorality from group membership has some problems, though. X and Y collectively did something immoral. But what about the group X, Y, and Barack Obama, who was napping at home when this happened? The collective consequences of their actions were bad as well. So did Obama do something immoral too? No. We need to restrict our claim to the following:

“When some group together harm or benefit other people, this group is the smallest group of whom it is true that, if they had all acted differently, the other people would not have been harmed, or benefited.” -Parfit

A final pair of cases concerns the morality of actions that produce imperceptible consequences.

Case 5: One million torturers stand in front of one million buttons. Each button, if pushed, induces a tiny stretch in each of a million racks, each of which has a victim on it. The stretch induced by a single press of the button is so minuscule that it is imperceptible. But the stretch induced by a million button presses produces terrible pain in all the victims.

Clearly we want to say that each torturer is acting immorally. But the problem is that the consequences of each individual torturer’s action are imperceptible! It’s only when enough of the torturers press the button that the consequence becomes perceptible. So what we seem to be saying is that it’s possible to act immorally, even though your action produces no perceptible change in anybody’s conscious experience, if your action is part of a collection of actions that together produce negative changes in conscious experiences.

This is already unintuitive. But we can make it even worse.

Case 6: Consider the final torturer of the million. At the time that he pushes his button, the victims are all in terrible agony, and his press doesn’t make their pain any perceptibly worse. Now, imagine that instead of there being 999,999 other torturers, there are zero. There is just the one torturer, and the victims have all awoken this morning in immense pain, caused by nobody in particular. The torturer presses the button, causing no perceptible change in the victims’ conditions. Has the torturer done something wrong?

It seems like we have to say the same thing about the torturer in Case 6 as we did in Case 5. The only change is that Nature has done the rest of the harm instead of other human beings, but this can’t matter for the morality of the torturer’s action. But if we believe this, then the scope of our moral concerns is greatly expanded, to a point that seems nonsensical. My temptation here is to say “all the worse for consequentialism, then!” and move to a theory that inherently values intentions, but I am curious if there is a way to make a consequentialist theory workable in light of these problems.