How does securitization work?

How can I transform a portfolio of BBB securities into a AAA security?

How does fractional reserve banking work?

How does the insurance industry work?

Before doing so, let me reprise my usual complaint against our shoddy liberal arts education system that leaves so many (including journalists, pundits, politicians and even most public intellectuals) ignorant of basic mathematical and scientific results -- in this case, probability and statistics. Many primitive peoples lack crucial, but simple, cognitive tools that are useful to understand the world around us. For example, the Amazonian Piraha have no word for the number ten. Similarly, the mathematical concepts related to the current financial crisis leave over 95 percent of our population completely baffled. If your Ivy League education didn't prepare you to understand the following, please ask for your money back.

Now on to our discussion...

Suppose you loan $1 to someone who has a probability p of default (not paying back the loan). For simplicity, assume that in event of default you lose the entire $1 (no collateral). Then, the expected loss on the loan is p dollars, and you should charge a fee (interest rate) r > p.
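The single-loan arithmetic above can be sketched in a few lines (the values of p and r here are hypothetical, chosen only to satisfy r > p):

```python
# Sketch of the single-loan payoff: collect fee r up front, get repaid with
# probability 1-p (profit r), or default with probability p (loss 1-r).
# The values of p and r are illustrative assumptions.
p = 0.05   # default probability (assumed)
r = 0.08   # interest rate / fee (assumed, r > p)

expected_profit = (1 - p) * r + p * (-(1 - r))
print(round(expected_profit, 4))   # equals r - p = 0.03 per dollar loaned
```

Note that the expectation works out to exactly r - p per dollar loaned, which is why charging r > p is the break-even condition.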

Will you make a profit? Well, with only a single loan you will either make a profit of r or a loss of (1-r) with probabilities (1-p) and p, respectively. There is no *guarantee* of profit, particularly if p is non-negligible.

But we can improve our situation by making N identical loans, assuming that the probabilities p of default in each case are *uncorrelated* -- i.e., truly independent of each other. The central limit theorem tells us that, as N becomes large, the probability distribution of total losses approaches the normal or Gaussian distribution. The expected return is (r - p) times the total amount loaned, and, importantly, the *variance* of returns goes to zero as 1/N. The probability of a rate of return which is substantially different from (r - p) goes to zero exponentially fast.

There is a simple analogy with coin flips. If you flip a coin a few times, the fraction of heads might be far from half. However, as the number of flips goes to infinity, the fraction of heads will approach half with certainty. The probability that the heads fraction deviates from half is governed by a Gaussian distribution whose width goes to zero as the number of flips goes to infinity. The figure below shows the narrowing of the distribution as the number of trials grows -- eventually the *uncertainty* in the fraction of heads goes to zero.
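The narrowing can be reproduced directly (flip counts and trial counts here are arbitrary choices for illustration):

```python
import random
import statistics

# Coin-flip sketch of the narrowing distribution: the spread of the heads
# fraction falls like 1/sqrt(number of flips). Flip and trial counts are
# illustrative.
random.seed(1)

def heads_fraction(flips):
    """Fraction of heads in a run of fair-coin flips."""
    return sum(random.random() < 0.5 for _ in range(flips)) / flips

width = {}
for flips in (10, 100, 1000):
    fractions = [heads_fraction(flips) for _ in range(2000)]
    width[flips] = statistics.stdev(fractions)
    print(flips, round(width[flips], 4))
```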

We see that aggregating many independent risks into a portfolio allows a reduction in uncertainty in the total outcome. An insurance company can forecast its claims payments much more accurately when the pool of insured is large. A bank has less uncertainty in the expected losses on its loan portfolio as the number of (uncorrelated) loans increases. Charging a sufficiently high interest rate r almost guarantees a profit. Banks with a large number of depositors can also forecast what fraction of deposits will be necessary to cover withdrawals each day.

Now to the magic of tranching, slicing and dicing (financial engineering). Suppose BBB loans have a large probability of default: e.g., p = .1 = 1/10. How can we assemble a less risky security from a pool of BBB loans? An aggregation of many BBB loans will still have an expected loss rate of .1, but the *uncertainty in this loss rate* can be made quite small if the individual default probabilities are independent of each other. The CDO repackager can create AAA tranches by artificially separating the first chunk of losses from the rest -- i.e., pay someone to take the expected loss (p times the total value of the loan pool) plus some additional cushion. Holders of the remaining AAA tranches are only responsible for losses beyond this first chunk. It is very improbable that fractional losses will significantly exceed p, so the chance of any AAA security suffering a loss is very low.
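The tranching arithmetic can be made concrete with an exact binomial calculation (the pool size N and cushion size are illustrative assumptions):

```python
from math import comb

# N independent loans with default probability p; a first-loss tranche
# absorbs the expected number of defaults plus a cushion, and the senior
# tranche is hit only beyond that point. N and the cushion are assumptions.
N, p = 1000, 0.1
cushion = 30
attachment = int(N * p) + cushion   # defaults absorbed by the first-loss tranche

def binom_tail(n, q, k):
    """P(X > k) for X ~ Binomial(n, q), computed exactly."""
    return sum(comb(n, j) * q**j * (1 - q)**(n - j) for j in range(k + 1, n + 1))

senior_loss_prob = binom_tail(N, p, attachment)
print(senior_loss_prob)   # well under 1% when defaults are truly independent
```

With independent defaults, exceeding the attachment point requires a multi-sigma fluctuation above the mean of N*p = 100 defaults, so the senior tranche looks extremely safe -- which is exactly the property the ratings relied on.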

Problems: did we estimate p properly, or did we use recent bubble data? (Increasing home prices masked high default rates in subprime mortgages.) Are the default probabilities actually uncorrelated? (Not if there was a nationwide housing bubble!) See my talk on the financial crisis for related discussion.
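The damage done by correlation can be sketched with a toy two-state model (the bust probability and stressed default rate are hypothetical, chosen so the blended default rate stays near 0.1):

```python
import random

# Introduce a common "housing bust" state that raises every loan's default
# probability at once. The blended default rate is still ~0.10, but the
# senior tranche is now hit in essentially every bust. All parameter
# values are hypothetical.
random.seed(2)
N = 1000
attachment = 130   # senior tranche hit only past 130 defaults out of 1000
p_normal, p_bust, bust_prob = 0.067, 0.4, 0.1   # blended average p ~ 0.10

def senior_hit():
    """One simulated pool: draw the common state, then N conditionally independent defaults."""
    p = p_bust if random.random() < bust_prob else p_normal
    defaults = sum(random.random() < p for _ in range(N))
    return defaults > attachment

trials = 5000
hit_rate = sum(senior_hit() for _ in range(trials)) / trials
print(round(hit_rate, 3))   # close to bust_prob, i.e. roughly 10%
```

The marginal default probability is essentially unchanged, but the senior tranche's loss probability is now set by the probability of the common shock -- diversification across the pool buys almost nothing.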

Deeper question: why could Wall Street banks generate such large profits merely by slicing and dicing pools of loans? Is it exactly analogous to the ordinary insurance or banking business, which makes its money by taking r to be a bit higher than p? (Hint: regulation often requires entities like pension funds and banks to hold AAA securities... was there a regulatory premium on AAA securities above and beyond the risk premium?)

## 12 comments:

"The central limit theorem tells us that, as N becomes large, the probability distribution of expected losses is governed by the normal or Gaussian distribution. The expected return is (r - p) times the total amount loaned, and, importantly, the variance of returns goes to zero as 1/N." You say this in a very weird way for the point that you are trying to make.

Hmmm... I tried to tweak it but it makes sense to me... perhaps I am missing some confusing aspect of the way I am expressing it?

Thanks for explaining the fundamental structure of CDOs.

I do have one question. You mentioned selling the first tranche (= p times the value of the loan pool) to somebody else.

Now, buying something like this first tranche sounds extremely risky IF I knew what tranche I was getting. As a buyer, what information do I get about this tranche before buying? Thanks for the answer in advance.

"But we can improve our situation by making N identical loans, assuming that the probabilities p of default in each case are uncorrelated -- i.e., truly independent of each other...."

why would you do that?

the easiest thing in the world to replicate is a portfolio of a large number N of loans: buy t-bills.

Hmm, the more I think about it, your CLT descriptions sounds more like a Law of Large Numbers description.

The CLT says to me, if I sample N things, and average them, I get a single average. If I execute this sampling M times, then the distribution of these averages is normally distributed.

In the case of the securities and tranches: securitizing a set of mortgages is taking a sample of size N. Building a tranche is putting together M securities. The CLT then tells us how the tranche will perform, and lets us price it accurately [heheh].

That's the thing I think is not clear, that the application of the CLT is layered [and synthetic; we get around thinking about the underlying distribution by focusing on average effects; so cue some Taleb]

The principle of insurance is that many hands make light work, but that fails when carrying aquariums under waterfalls. The critical assumption is that mortgage loan defaults are independent. If that assumption is faulty, then there is a problem.

Okay, now for some behavioral economics: Spell out how the probability of default becomes correlated.

Some answers: Borrower notices that other people are getting these no-money-down ARMs, and figures it must be a great deal. (See "bandwagon" and "sheep".) Or, large mortgage lender makes a habit of lending to people whose credit-worthiness is unknown or known to be poor. Or, economy falls off a cliff, and lots of people lose their jobs, and housing prices plummet...

..and for 3-4 years lots of people say, "so, is that a problem, with housing prices going up and up?"

The diversification argument based on the CLT is valid under the assumption of finite variance. In the crisis, the underlying BBBs have fat-tailed, more like power-law, distributions.

In this situation diversification makes it worse.

I don't think you need anything as strong as the CLT. The weaker Chebyshev inequality should be enough.

The problem is clearly to find the joint distribution. (akin to the correlation, if those are gaussian)

Actually, the underlying idea that the problem is to find the distribution of the defaults is also wrong, as it might be completely (path?) dependent on other variables, like the equity market. So the joint distribution to find is even bigger.

In the end, we are completely out of the "many sample, same distribution" that prevails in the insurance business.

Mostly, it is one sample, many distributions that we are talking about...

Finally, there is clearly no stationarity, so calibration is just impossible, even if the model is minimal (which it is not)

Hi -- great comments. Unfortunately I am traveling right now and can't make any substantive replies...

My treatment of CDO pricing is of course only the simplest of caricatures.

Post a Comment