Saturday, November 29, 2008

Human genetic variation, Fst and Lewontin's fallacy in pictures

In an earlier post European genetic substructure, I displayed the following graphic, illustrating the genetic clustering of human populations.




Figure: The three clusters shown above are European (top, green + red), Nigerian (light blue) and E. Asian (purple + blue).

The figure seems to contradict an often stated observation about human genetic diversity: genetic variation between two random individuals within a given population accounts for 80% or more of the total variation across the entire human population. The inference drawn from this observation, namely that any classification of humans into groups ("races") based on genetic information is therefore impossible, has become known among experts as Lewontin's fallacy. ("More variation within groups than between groups.")

To understand this statement better, consider the F statistic of population genetics, introduced by Sewall Wright:

Fst = 1 - Dw / Db

Db and Dw represent the average number of pairwise differences between two individuals sampled from different populations (Db = "difference between") or from the same population (Dw = "difference within"). Even for the most widely separated human populations, Fst < .2, so Dw / Db > .8 (roughly). This may not sound like much genetic diversity, but it is more than in many other animal species. See here for recent high-statistics Fst values by nationality.

Dw / Db > .8 means that the average genetic distance measured in number of base pair differences between two members of a group (e.g., two randomly selected Europeans) is at least 80 percent of the average distance between distant groups (e.g., Europeans and Asians or Africans). In other words, if two individuals from very distant groups (e.g., a Japanese and a Nigerian) have on average N base pair differences, then two from the same group (e.g., two Nigerians or two Japanese) will on average have roughly .8 N base pair differences.
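The relation above can be turned into a few lines of arithmetic as a sanity check. The numbers here are purely illustrative; N is a made-up count, not a measured value:

```python
def fst(d_within, d_between):
    # Wright's Fst written as in the text: Fst = 1 - Dw / Db
    return 1.0 - d_within / d_between

# If a Japanese-Nigerian pair differs at N sites on average, and two
# members of the same group differ at roughly 0.8 * N sites, then:
N = 3_000_000            # hypothetical between-group difference count
print(fst(0.8 * N, N))   # roughly 0.2, i.e. Fst at the human upper end
```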

How can the Fst result ("more variation within groups than between groups") be consistent with the clusters shown in the figure? I've had to explain this on numerous occasions, always with great difficulty because the explanation requires a little mathematics. In order to make the point more accessible, I've created the figures below, which show two population clusters, each represented by an ellipsoid (blob). The different figures depict the same pair of objects, just viewed from different angles.

The blobs are constructed and arranged so that the average distance between two points (individuals) within the same cluster is almost as big as the average distance between two points (individuals) in different clusters. This is easy to achieve if the ellipsoids are big and flat (like pancakes) and placed close to each other along the flat directions. The figure is meant to show how one can have small Fst, as in humans, yet easily resolved clusters. The direction in which the gap between the clusters appears is one of the principal components in the space of human genetic variation, as recently found by bioinformaticists. The figure at the top of this post plots individuals as points in the space generated by the two largest principal components extracted from the combination of data from HapMap and from large-scale sampling of Europeans. Exhibited this way, isolated clusters ("races") are readily apparent.

The real space of genetic variation has many more than 3 dimensions, so it can't be easily visualized. But some aspects of the figures below still apply: there will be particular directions of variation over which different populations are more or less identical (orthogonal to the principal component; i.e. along the flat directions of each pancake), and there will be directions in which different populations differ radically and have little or no overlap. Note, however, that we are specifically referring to genetic variation, which may or may not translate into phenotypic variation.
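The pancake picture can also be checked numerically. The sketch below is a toy model with made-up dimensions and spreads, not real genotype data, and the distance ratio is only an analogy to Fst (which is properly defined via allele frequencies). It builds two flat Gaussian clouds separated along their thin direction: the within/between distance ratio stays close to 1, yet the clusters are nearly disjoint along the separating coordinate.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 50, 200                       # toy dimensionality and cluster size
spread = np.ones(d)
spread[0] = 0.1                      # one "thin" direction per pancake
a = rng.normal(0.0, spread, (n, d))  # cluster A centered at the origin
b = rng.normal(0.0, spread, (n, d))
b[:, 0] += 1.0                       # cluster B shifted along the thin axis

def mean_dist(x, y):
    # average Euclidean distance between all pairs of rows of x and y
    diff = x[:, None, :] - y[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1)).mean()

dw = 0.5 * (mean_dist(a, a) + mean_dist(b, b))   # within-cluster
db = mean_dist(a, b)                             # between-cluster
print("Dw / Db:", dw / db)    # close to 1, in the spirit of small Fst
# yet almost no points land on the wrong side of the gap:
print("misassigned:", np.sum(a[:, 0] > 0.5) + np.sum(b[:, 0] < 0.5))
```

The trick is exactly the one in the figures: the large spread in the 49 "flat" directions dominates all pairwise distances, while the separation lives entirely in the one thin direction.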







Related posts: "no scientific basis for race", metric on the space of genomes.

The existence of this clustering has been known for 40 years.

The best and the brightest: McGeorge Bundy


Bundy was the first National Security Advisor under Kennedy and Johnson (at the time the position was special assistant to the president for national security affairs), and perhaps the most infamous of Harvard Junior Fellows. Bundy, with McNamara, played a key role in shaping America's war in Vietnam.

Below, Richard Holbrooke reviews Gordon Goldstein's new book on McGeorge Bundy in the Times. (Interview with Goldstein.)

How long will it be before the architects of our war in Iraq can admit their mistake? I suspect they are surpassed by Bundy not just in intelligence but, ultimately, integrity.
NYTimes: ...Bundy was the quintessential Eastern Establishment Republican, a member of a family that traced its Boston roots back to 1639. His ties to Groton (where he graduated first in his class), Yale and then Harvard were deep. At the age of 27, he wrote, to national acclaim, the “memoirs” of former Secretary of War Henry L. Stimson. In 1953, Bundy became dean of the faculty at Harvard — an astonishing responsibility for someone still only 34. Even David Halberstam, who would play so important a role in the public demolition of Bundy’s reputation in his classic, “The Best and the Brightest,” admitted that “Bundy was a magnificent dean” who played with the faculty “like a cat with mice.”

As he chose his team, Kennedy was untroubled by Bundy’s Republican roots —the style, the cool and analytical mind, and the Harvard credentials were more important. “I don’t care if the man is a Democrat or an Igorot,” he told the head of his transition team, Clark Clifford. “I just want the best fellow I can get for the particular job.” And so McGeorge Bundy entered into history — the man with the glittering résumé for whom nothing seemed impossible.

Everyone knows how this story ends: Kennedy assassinated, Lyndon B. Johnson trapped in a war he chose to escalate, Nixon and Kissinger negotiating a peace agreement and, finally, the disastrous end on April 30, 1975, as American helicopters lifted the last Americans off the roof of the embassy.

...Bundy spoke only occasionally about Vietnam after he left government, but when he did, he supported the war. Yet it haunted him. He knew his own performance in the White House had fallen far short of his own exacting standards, and Halberstam’s devastating portrait of him disturbed him far more deeply than most people realized. After remaining largely silent — except for an occasional defense of the two presidents he had served — for 30 years, Bundy finally began, in 1995, to write about Vietnam. He chose as his collaborator Gordon Goldstein, a young scholar of international affairs. Together they began mining the archives, and Goldstein conducted a series of probing interviews. Bundy began writing tortured notes to himself, often in the margins of his old memos — a sort of private dialogue with the man he had been 30 years earlier — something out of a Pirandello play. Bundy would scribble notes: “the doves were right”; “a war we should not have fought”; “I had a part in a great failure. I made mistakes of perception, recommendation and execution.” “What are my worst mistakes?” For those of us who had known the self-confident, arrogant Brahmin from Harvard, these astonishing, even touching, efforts to understand his own mistakes are far more persuasive than the shallow analysis McNamara offers in his own memoir, “In Retrospect.”

...As it happens, I was part of a small group that dined with Bundy the night before Pleiku at the home of Deputy Ambassador William J. Porter, for whom I then worked. Bundy quizzed us in his quick, detached style for several hours, not once betraying emotion. I do not remember the details of that evening — how I wish I had kept a diary! — but by then I no longer regarded Bundy as a role model for public service. There was no question he was brilliant, but his detachment from the realities of Vietnam disturbed me. In Ambassador Porter’s dining room that night were people far less intelligent than Bundy, but they lived in Vietnam, and they knew things he did not. Yet if they could not present their views in quick and clever ways, Bundy either cut them off or ignored them. A decade later, after I had left the government, I wrote a short essay for Harper’s Magazine titled “The Smartest Man in the Room Is Not Always Right.” I had Bundy — and that evening — in mind.
See also A Memory of McGeorge Bundy:
...In February, he and I overlapped briefly in Saigon, and we had one quiet talk. On my return to Washington, I learned that Mac had told the NSC staff he was optimistic about the war, but, much to my astonishment, that they should wait to hear my very different views.

In 1968, after I wrote a critique of Vietnam policy in The Atlantic, Mac chastised me for betraying LBJ's trust. We didn't make up for eight years. By then I was running Harvard's Nieman Fellowships for journalists, and Mac came to talk to the fellows.

He was crisply articulate, but there was one persistent young man, who resembled Trotsky, needling Mac with questions about the war. Mac finally cut him off saying, "Your problem, young man, is not your intellect but your ideology."

Later, as we were clinking highballs, the Trotsky look-alike cornered Mac: "What about Vietnam?"

Bundy: "I don't understand your question."

Trotsky: "Mac, what about *you* and Vietnam?"

Bundy: "I still don't understand."

Trotsky: "But Mac, you screwed it up, didn't you?"

Glacial silence. Then Bundy suddenly smiled and replied: "Yes, I did. But I'm not going to waste the rest of my life feeling guilty about it."

When he died, McGeorge Bundy was working on a book about the war whose main message was that Vietnam was a terrible mistake.

It's a loss that he did not live to write in full what he had learned from the Vietnam calamity.
I recommend The Color of Truth, Kai Bird's biography of the Bundy brothers. Bird wrote the recent biography of Oppenheimer, American Prometheus.

Thursday, November 27, 2008

Atlas Shrugged, updated



Read the whole thing at McSweeney's! The original is discussed here. Via Naked Capitalism.

Anyone willing to own up to their Objectivist philosophical leanings? :-)

"I heard the thugs in Washington were trying to take your Rearden metal at the point of a gun," she said. "Don't let them, Hank. With your advanced alloy and my high-tech railroad, we'll revitalize our country's failing infrastructure and make big, virtuous profits."

"Oh, no, I got out of that suckers' game. I now run my own hedge-fund firm, Rearden Capital Management."

"What?"

He stood and adjusted his suit jacket so that his body didn't betray his shameful weakness. He walked toward her and sat informally on the edge of her desk. "Why make a product when you can make dollars? Right this second, I'm earning millions in interest off money I don't even have."

He gestured to his floor-to-ceiling windows, a symbol of his productive ability and goodness.

"There's a whole world out there of byzantine financial products just waiting to be invented, Dagny. Let the leeches run my factories into the ground! I hope they do! I've taken out more insurance on a single Rearden Steel bond than the entire company is even worth! When my old company finally tanks, I'll make a cool $877 million."

...Dagny and Hank searched through the ruins of the 21st Century Investment Bank. As they stepped through the crumbling cubicles, a trampled legal pad with a complex column of computations captured Dagny's attention. She fell to her hands and knees and raced through the pages and pages of complex math written in a steady hand. Her fingers bled from the paper cuts, and she did not care.

"What is it, Dagny?"

"Read this."

"Good God!"

"Yes, it's an experimental formula for a financial strategy that could convert static securities into kinetic profits that would increase at an almost exponential rate."

Hank studied the numbers. "The amount of debt you would need to make this work would be at least 30-to-1, but a daring, rational man who lives by his mind would be willing to take that risk!"

"Yes, and it's so complex the government could never regulate it."

Wednesday, November 26, 2008

Tuesday, November 25, 2008

East Asian genetic substructure

Below are some results on East and Southeast Asian genetic substructure. As you can see, Koreans are (sort of) midway between Chinese and Japanese. It will almost certainly be possible to differentiate between different regional origins based on DNA, once larger statistics studies become available. (See European results.)

Figures: Each point is an individual, and the axes are two principal components in the space of genetic variation. Colors correspond to individuals of different Asian ancestry.






Thanks to Chao Tian of UC Davis for sending me an early draft of the paper.

Analysis of East Asia Genetic Substructure: Population Differentiation and PCA Clusters Correlate with Geographic Distribution.

C. Tian1, R. Kosoy1, A. Lee2, P. Gregersen2, J. Belmont2, M. Seldin1

1) Rowe Program Human Genetics, Univ California Sch Medicine, Davis, CA; 2) North Shore-LIJ Res Inst, Manhasset, NY, Baylor Col Med., Houston TX.

Accounting for genetic substructure within European populations has been important in reducing type 1 errors in genetic studies of complex disease. As efforts to understand complex genetic disease are expanded to other continental populations an understanding of genetic substructure within these continents will be useful in design and execution of association tests. In this study, population differentiation(Fst) and Principal Components Analyses(PCA) are examined using >200K genotypes from multiple populations of East Asian ancestry(total 298 subjects). The population groups included those from the Human Genome Diversity Panel[Cambodian(CAMB), Yi, Daur, Mongolian(MGL), Lahu, Dai, Hezhen, Miaozu, Naxi, Oroqen, She, Tu, Tujia, and Xibo], HapMap(CHB and JPT), and East Asian or East Asian American subjects of Vietnamese(VIET), Korean(KOR), Filipino(FIL) and Chinese ancestry. Paired Fst(Weir and Cockerham) showed close relationships between CHB and several large East Asian population groups(CHB/KOR, 0.0019; CHB/JPT, 0.00651; CHB/VIET, 0.0065) with larger separation with FIL(CHB/FIL, 0.014). Low levels of differentiation were also observed between DAI and VIET(0.0045) and between VIET and CAMB(0.0062). Similarly, small Fst's were observed among different presumed Han Chinese populations originating in different regions of mainland China and Taiwan(Fst < 0.0025 with CHB). For PCA, the first two PC's showed a pattern of relationships that closely followed the geographic distribution of the different East Asian populations. For example, the four "corner" groups were JPT, FIL, CAMB and MGL with the CHB forming the center group, and KOR was between CHB and JPT. Other small ethnic groups were also in rough geographic correlation with their putative origins.
These studies have also enabled the selection of a subset of East Asian substructure ancestry informative markers(EASTASAIMS) that may be useful for future genetic association studies in reducing type 1 errors and in identifying homogeneous groups.


Related posts: "no scientific basis for race", metric on the space of genomes

The value of trust

In no-arb efficient market fairy tale land, investors are assumed to be able to value a company by simply looking at its balance sheet, researching its market and business model and projecting into the future. Sound difficult? Why, yes, it's almost impossible to do, and even after a lengthy research project executed by a team of brilliant analysts there is a huge remaining uncertainty.

So what happens in the real world? Well, we apes with limited cognitive power and limited information rely on simple heuristics -- rules of thumb -- to guess what will happen in the future. That is, we say "Robert Rubin seems like a smart, careful guy, and top management at Citi must know what they are doing, and surely the market knows what it's doing, so, yeah, $40 a share seems ok with me..."

Of course, after a while we might notice some data suggesting that the leadership at Citi has been dishonest ("we are adequately capitalized" -- CEO Vikram Pandit) and ignorant of their own business operations ("what's a SIV?" -- Chairman Robert Rubin, November, 2007), and suddenly decide that NO, they DON'T KNOW WHAT THEY ARE DOING.

Yikes!: as reported on this blog, Rubin comments from November 2007.

[SIV = Structured Investment Vehicle = (roughly, see link) CDO]

"I think the problem with this SIV issue is that it's been substantially misunderstood in the press," said Rubin, who has a considerable personal stake in the fate of Citigroup. The banking firm paid him $17.3 million last year.

"The banks appear to be in fine shape," he said. "That's not a problem."

The SIV issue isn't critical for the economy, he insisted.

"It's massively less important than it's been presented," Rubin said. "It's been presented as a sort of centerpiece of what's going on. I just don't think that's right."

The cost of the evaporation of trust? $200 billion in lost market capitalization in the last year. The real reason Citi melted down is that people no longer trust their senior management to meet future obligations. This senior management was left in place in Treasury's latest sweetheart deal.

Tell me an efficient market story that explains Citi's recent history and I'll sell you a slightly devalued "Nobel Prize" in financial economics along with the Brooklyn bridge. (Click below for larger image.)



Thanks to Mark Thoma for links to the articles quoted below.

Bronte Capital:

...due to the losses and the lack of risk control people stopped believing in Citigroup – and hence Citigroup dies without a bailout. It was however pretty easy to stop believing in Citigroup because nobody (at least nobody normal) can understand their accounts. I can not understand them and I am a pretty sophisticated bank analyst. I know people I think are better than me – and they can’t understand Citigroup either. So Citigroup was always a “trust us” thing and now we do not trust.

The cause of the crisis

This is a wholesale funding crisis and the cause of the crisis is plain. It is lies told by financial institutions. Financial institutions sold AAA rated paper which they almost certainly – deep in their bowels – knew was crap. They sold it to people who provide wholesale funding.

Now they need to roll their own debt. The people who would normally wholesale fund them are the same people who have had a large dose of defaulting AAAs. They no longer believe. It is "fool me once, shame on you; fool me twice, shame on me". As I have put it the lies that destroyed Bear Stearns were not told by short sellers. They were told by Bear Stearns.

Now the problem is that no matter how many times Pandit says that Citigroup is well capitalised nobody will believe him. In answer to the Brad DeLong question – the company told lies about its mortgage book – which compounded the lies about the dodgy CDO product they sold. The lies about the mortgage book totalled $20 billion on say $43 billion of optimistically valued assets – and those lies reduced the value of Citigroup by $200 billion because they removed the trust in Citigroup.

It is one of those ironic things that when financial institutions lied in 2006 the market seemed to believe them. When they tell the truth now, nobody will listen.

Robert Rubin racks his brain about how he would have done things differently. Well one thing he would have done differently is get Citigroup to remove the culture of obfuscation – the culture that allowed it to be perceived as if it were lying even when it was telling the truth. The problem is that even Robert Rubin doesn’t have enough uncashed integrity to save Citigroup. Even Robert Rubin.

The US government is now selling systemic risk insurance.

Finally, System-Risk Insurance, by Laurence Kotlikoff and Perry Mehrling, FT

As we advocated two months back (Bagehot plus RFC: The Right Financial Fix), Uncle Sam is finally starting to sell systematic risk insurance on high-grade securities in exchange for preferred stock. This is a critical function for the U.S. government; Uncle Sam is the only player capable of hedging systemic risk because he’s the only player capable of taking actions that keep the overall economic system on the right course.

The real question now is whether the U.S. government will begin selling system-risk insurance on a routine basis and, thereby, help refloat trillions of dollars in high-grade mortgage-related securities owned by banks and other financial institutions - institutions that are in desperate need of more capital to support new lending.

Writing one-off insurance deals with a few large players, like Citigroup, is not the same as standing ready to write system-risk insurance to all players that issue conforming high-grade paper - something that’s needed to support ongoing securitization of such obligations. We stress the word “conforming,” because it’s vital for the government to begin stipulating which securities are “safe” under normal conditions and which are “toxic” and, thus, no longer to be held by financial intermediaries.

Like any insurance underwriter, Uncle Sam needs not only to know and approve what he’s insuring; he also needs to make sure there are appropriate deductibles and co-insurance provisions to limit moral hazard on the part of the insured. The moral hazard in this case is that financial institutions try to pass off low-grade loans as high-grade.

The weekend deal with Citigroup is instructive in clarifying the nature of the insurance the government should sell on an ongoing basis. The deal to support $306bn of Citigroup’s mortgage-related securities puts a floor under the value of the best such securities at about 90 cents on the dollar. This deal represents the first use of the insurance capability authorized by Section 102 of the TARP.

[90 cents on the dollar? WTF!?! Incompetent management is left in place while the taxpayer foots the bill. Why not bail out Detroit while we're at it? Note: this is clarified by a commenter -- it's 90% of current market value, not face value.]

The structure of the deal is convoluted, so it takes some probing to see precisely what insurance is being sold and for what price. We are told that Citigroup itself is on the hook for the first loss of $29bn (plus whatever loss reserves are already on its books) on the cash flows due on the $306bn in mortgages. This amounts to roughly a 10 percent deductible.

Any losses beyond $29bn will be shared by the government (90 per cent) and Citigroup (10 per cent). This is the co-insurance (co-pay) element. This insurance runs for the next 10 years, and Citigroup is paying a one-time $7bn premium for it, using preferred stock.
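Putting the deal's quoted figures together clarifies the deductible and co-pay structure. All the parameters below come from the excerpt above; the 100bn loss scenario itself is hypothetical:

```python
pool = 306e9          # insured mortgage-related securities
deductible = 29e9     # Citigroup's first-loss piece
gov_share = 0.90      # government's co-insurance share above the deductible

def citi_share(total_loss):
    # Citigroup's portion of a given total loss on the insured pool
    first_loss = min(total_loss, deductible)
    excess = max(total_loss - deductible, 0.0)
    return first_loss + (1.0 - gov_share) * excess

print(deductible / pool)        # about 0.095, the "roughly 10 percent deductible"
print(citi_share(100e9) / 1e9)  # of a hypothetical 100bn loss, Citi bears about 36bn
```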

Sunday, November 23, 2008

European genetic substructure



Figure: Each point is an individual, and the axes are two principal components in the space of genetic variation. Colors correspond to individuals of different European ancestry.

The figure above is from the Nature paper: European Journal of Human Genetics (2008) 16, 1413–1429; doi:10.1038/ejhg.2008.210

Abstract: An investigation into fine-scale European population structure was carried out using high-density genetic variation on nearly 6000 individuals originating from across Europe. The individuals were collected as control samples and were genotyped with more than 300 000 SNPs in genome-wide association studies using the Illumina Infinium platform. A major East–West gradient from Russian (Moscow) samples to Spanish samples was identified as the first principal component (PC) of the genetic diversity. The second PC identified a North–South gradient from Norway and Sweden to Romania and Spain. ...

Some interesting points:

1) Significant East-West and North-South substructure is apparent already from the figure. The resolution of the study is sufficiently high that Swedes and Norwegians can be distinguished with 90 percent accuracy (Table 4). Crime scene forensics will never be the same -- "the Swede did it!" ;-)

In conclusion, we have shown that using PCA techniques it is possible to detect fine-level genetic variation in European samples. The genetic and geographic distances between samples are highly correlated, resulting in a striking concordance between the scatter plot of the first two components from a PCA of European samples and a geographic map of sample origins. We have shown how this information can be used to predict the origin of unknown samples in a rapid, precise and robust manner, and that this prediction can be performed without requiring access to the individual genotype data on the original samples of known origin. ...


2) Genetic distances between population clusters are roughly as follows: the distance between two neighboring western European populations is of order one in units of standard deviations and the distance to the Russian cluster is several times larger than that -- say, 3 or 4. From HapMap data, the distance from Russian to Chinese and Japanese clusters is about 18, and the distance of southern Europeans to the Nigerian cluster is about 19. The chance of mis-identifying a European as an African or E. Asian is exponentially small! (Table 5)

...The distance measure is a measure of the distance in standard deviations from a sample to the center of the closest matching population.

...For the other HapMap populations, the classification procedure assigned 100% of the YRI [Yoruban = Nigerian] samples to France, and almost 100% of the CHB and JPT [Chinese and Japanese] samples to Russia. However, the distribution of the distance measure for the four populations was quite different. For the CEU [HapMap European] samples, the median and 95% CI of the distance measure were 0.41 (0.11–1.01), whereas for the YRI, CHB and JPT populations, the median and 95% CIs were 19.3 (18.0–20.6), 17.7 (15.9–19.3) and 18.0 (15.4–19.6), respectively.

...The Yoruban [Nigerian] and Asian samples were identified as belonging to the countries on the south and east edges, respectively, of the European cluster, and the distance measure clearly indicates that they do not fit well into any of the proposed populations. ...
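To see why a separation of ~18 standard deviations makes misclassification "exponentially small", compare Gaussian tail probabilities at a few separations. This is a one-dimensional caricature (the actual classifier works in the multi-dimensional PC space, and the distance labels are my rough readings of the numbers above):

```python
from math import erfc, sqrt

def tail(k):
    # P(|Z| > k) for a standard normal Z: the two-sided Gaussian tail
    return erfc(k / sqrt(2.0))

# ~1 sd: neighboring western European populations
# ~4 sd: western Europe vs. the Russian cluster
# ~18 sd: Europe vs. the E. Asian or Nigerian clusters
for k in (1, 4, 18):
    print(k, tail(k))
```

The tail at 18 sd is smaller than 1e-60, which is the precise sense in which confusing a European sample with an E. Asian or African one essentially never happens.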



Figure: The three clusters shown above are European (top, green + red), Nigerian (light blue) and E. Asian (purple + blue).


See additional discussion at gnxp (the modified figure is from Razib), Dienekes

Related posts: "no scientific basis for race", metric on the space of genomes

Friday, November 21, 2008

Physics, complex systems and economics

It's a weird shock (but all too common in theoretical physics) to find other people who have been thinking along exactly the same lines as I have. Yesterday after my seminar relativist Kristin Schleich told me that Wheeler had given a similar talk 20 years ago at Maryland on the problem posed for interpretations of black hole entropy by configurations he called "bags of gold" -- large curved spaces glued to an asymptotically flat universe, which appear from the outside to be black holes.

Earlier in the day I had a nice (2 hour!) meeting with a group (including Lee Smolin and Sabine Hossenfelder) at Perimeter who are thinking about complex systems, agent simulations and economics. They referred me to the following paper, which is excellent, and again expresses many thoughts I've had over the years in thinking about markets, financial economics, etc.

Geanakoplos is a "real" economist (James Tobin Professor at Yale) and Farmer is a Santa Fe guy who ran a hedge fund called the Prediction Company. If you are a physicist trying to understand the thinking of traditional economists, or an economist who wants to understand why physicists are often dubious about neoclassical economics, read this paper.

The virtues and vices of equilibrium and the future of financial economics

J. Doyne Farmer, John Geanakoplos

http://arxiv.org/abs/0803.2996

The use of equilibrium models in economics springs from the desire for parsimonious models of economic phenomena that take human reasoning into account. This approach has been the cornerstone of modern economic theory. We explain why this is so, extolling the virtues of equilibrium theory; then we present a critique and describe why this approach is inherently limited, and why economics needs to move in new directions if it is to continue to make progress. We stress that this shouldn't be a question of dogma, but should be resolved empirically. There are situations where equilibrium models provide useful predictions and there are situations where they can never provide useful predictions. There are also many situations where the jury is still out, i.e., where so far they fail to provide a good description of the world, but where proper extensions might change this. Our goal is to convince the skeptics that equilibrium models can be useful, but also to make traditional economists more aware of the limitations of equilibrium models. We sketch some alternative approaches and discuss why they should play an important role in future research in economics.

Wednesday, November 19, 2008

Perimeter photos

Having a great time here -- the Perimeter Institute is its own little world of theoretical physics!

Thanks to BlackBerry founder Mike Lazaridis, whose generous donations were largely responsible for creating this place.











Deflation

Global bond markets are forecasting deflation. The spread between inflation protected and ordinary bonds is negative. (I'm not sure how the graphs below are calculated -- the negative values seem too big.)
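For reference, the spread being plotted is just the nominal yield minus the inflation-protected (real) yield, i.e. the market's implied ("breakeven") inflation rate. A minimal sketch with made-up yields loosely resembling late-2008 conditions, not the actual data behind the charts:

```python
def breakeven_inflation(nominal_yield, real_yield):
    # market-implied inflation = nominal yield minus inflation-protected yield
    return nominal_yield - real_yield

# Hypothetical late-2008 situation: short-dated TIPS yields above nominals,
# so the spread -- and hence implied inflation -- goes negative.
print(breakeven_inflation(0.010, 0.035))   # negative => deflation forecast
```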



Via Paul Kedrosky.

Tuesday, November 18, 2008

Bill Janeway interview

Via The Big Picture, this wonderful interview with Bill Janeway. Janeway was trained as an academic economist (PhD Cambridge), but spent his career on Wall Street, most recently in private equity. I first met Bill at O'Reilly's foo camp; we've had several long conversations about finance and the markets. The interview is long, but read the whole thing! Topics covered include: physicists and quants in finance, mark to market, risk, regulatory and accounting regimes, market efficiency.

The IRA: How did we get into this mess?

Janeway: It took two generations of the best and the brightest who were mathematically quick and decided to address themselves to the issues of capital markets. They made it possible to create the greatest mountain of leverage that the world has ever seen. In my own way, I do track it back to the construction of the architecture of modern finance theory, all the way back to Harry Markowitz writing a thesis at the University of Chicago which Milton Friedman didn’t think was economics. He was later convinced to allow Markowitz to get his doctorate at the University of Chicago in 1950. Then we go on through the evolution of modern finance and the work that led to the Nobel prizes, Miller, Modigliani, Scholes and Merton. The core of this grand project was to reconstruct financial economics as a branch of physics. If we could treat the agents, the atoms of the markets, people buying and selling, as if they were molecules, we could apply the same differential equations to finance that describe the behavior of molecules. What that entails is to take as the raw material, time series data, prices and returns, and look at them as the observables generated by processes which are stationary. By this I mean that the distribution of observables, the distribution of prices, is stable over time. So you can look at the statistical attributes like volatility and correlation amongst them, above all liquidity, as stable and mathematically describable. So consequently, you could construct ways to hedge any position by means of a “replicating portfolio” whose statistics would offset the securities you started with. There is a really important book written by a professor at the University of Edinburgh named Donald MacKenzie. He is a sociologist of economics and he went into the field, onto the floor in Chicago and the trading rooms, to do his research. He interviewed everybody and wrote a great book called An Engine Not a Camera. 
It is an analytical history of the evolution of modern finance theory. Where the title comes from is that modern finance theory was not a camera to capture how the markets worked, but rather an engine to transform them.

...

Janeway: Yes, but here the agents were principals! I think something else was going on. It was my son, who worked for Bear, Stearns in the equity department in 2007, who pointed out to me that Bear, Stearns and Lehman Brothers had the highest proportion of employee stock ownership on Wall Street. Many people believed, by no means only the folks at Bear and Lehman, that the emergence of Basel II and the transfer to the banks themselves of responsibility for determining the amount of required regulatory capital based upon internal ratings actually reduced risk and allowed higher leverage. The move by the SEC in 2004 to give regulatory discretion to the dealers regarding leverage was the same thing again.

The IRA: And both regimes falsely assume that banks and dealers can actually construct a viable ratings methodology, even relying heavily on vendors and ratings firms. There are still some people at the BIS and the other central banks who believe that Basel II is viable and effective, but none of the risk practitioners with whom we work has anything but contempt for the whole framework. It reminds us of other utopian initiatives such as fair value accounting or affordable housing, everyone sells the vision but misses the pesky details that make it real! And the same religious fervor behind the application of physics to finance was behind the Basel II framework and complex structured assets.

Janeway: That’s my point. It was a kind of religious movement, a willed suspension of disbelief. If we say that the assumptions necessary to produce the mathematical models hold in the real world, namely that markets are efficient and complete, that agents are rational, that agents have access to all of the available data, and that they all share the same model for transforming that data into actionable information, and finally that this entire model is true, then at the end of the day, leverage should be infinite. Market efficiency should rise to the point where there isn’t any spread left to be captured. The fact that a half a percent unhedged swing in your balance sheet can render you insolvent, well it doesn’t fit with this entire constructed intellectual universe that goes back 50 years.

...

Janeway: There are a couple of steps along the way here that got us to the present circumstance, such as the issue of regulatory capture. When you talk about regulatory capture and risk, the capture here of the regulators by the financial industry was not the usual situation of corrupt capture. The critical moment came in the early 1980s, which is very well documented in MacKenzie’s book, when the Chicago Board appealed to academia because it was then the case that in numerous states, cash settlement futures were considered gambling and were banned by law.

...

Janeway: The point here is that the regulators were captured intellectually, not monetarily. And the last to be converted, to have the religious conversion experience, were the accountants, leading to fair value accounting rules. I happen to be the beneficiary of a friendship with a wonderful man, Geoff Whittington, who is a professor emeritus of accounting at Cambridge, who was chief accountant of the British Accounting Standards Board and was a founder of the International Accounting Standards Board. He is from the inside an appropriately knowledgeable, balanced skeptic, who has done a wonderful job of parsing out what is involved in this discussion in a paper called “Two World Views.” Basically, he says that if you really do believe that we live in a world of complete and efficient markets, then you have no choice but to be an advocate of fair value, mark-to-market accounting. If, on the other hand, you see us living in a world of incomplete, but reasonably efficient markets, in which the utility of the numbers you are trying to generate have to do with stewardship of a business through real, historical time rather than a snapshot of “truth,” then you are in a different world. And that is a world where the concept of fair value is necessarily contingent.

Previous posts on Donald MacKenzie's work. MacKenzie is perhaps the most insightful of academics working on the history and development of modern finance.

Kakutani on Gladwell

Michiko Kakutani of the Times reviews Malcolm Gladwell's new book Outliers. She finds it poorly reasoned -- my usual complaint about Gladwell's work.

Much of what Mr. Gladwell has to say about superstars is little more than common sense: that talent alone is not enough to ensure success, that opportunity, hard work, timing and luck play important roles as well. The problem is that he then tries to extrapolate these observations into broader hypotheses about success. These hypotheses not only rely heavily on suggestion and innuendo, but they also pivot deceptively around various anecdotes and studies that are selective in the extreme: the reader has no idea how representative such examples are, or how reliable — or dated — any particular study might be.

Gladwell highlights the claim of psychologist Anders Ericsson that effort dominates ability (the 10,000 hours of practice thesis). My opinion on this can be found here, deep in the comments. The evidence is pretty strong in the case of science that native cognitive ability is a prerequisite for success. Practice (effort) is also necessary, but neither alone is sufficient.

...that quote sounds like it could be from Anders Ericsson's research on expertise. I disagree with his conclusions. His studies only show that effortful practice (about 10 years worth) is typically required to reach the highest level of capability. But he then confuses the logic and asserts that practice alone is *sufficient*, when in fact it is only necessary. You need raw ability *and* lengthy practice to reach expertise.

Of course it is appealing for most people to think that Ericsson's model is correct and that effort is all that is required to produce capability, but this claim is very controversial in the psychology community, and I think implausible to anyone who has been around gifted kids/adults.

The Roe study, combined with other studies showing the age stability of IQ (certainly once adulthood is reached), also serves to refute Ericsson. There's clearly some measurable quality, usually present already at an early age, that is advantageous for intellectual achievement. Most people don't have it.

Anders is refuted quite well in papers by leading psychologists like Sternberg (Yale) and in Eysenck's book Genius.

By the way, also contra Ericsson, there are many credible examples of supreme raw talent that didn't require development through 10 years of practice (e.g., Mozart).

Perimeter talk: monsters




I'm traveling today to the Perimeter Institute in icy Canada. I love modern architecture -- can't wait to see their funky building.

[Video and audio of seminar available here -- I'm always afraid to listen to or watch myself giving a talk, but I should probably do so at some point to improve my presentations...]

Curved space, monsters and black hole entropy

slides

Abstract: I discuss a class of compact objects ("monsters") with more entropy than a black hole of the same ADM mass. Such objects are problematic for AdS/CFT duality and the conventional interpretation of black hole entropy as counting of microstates. Nevertheless, monster initial data can be constructed in semi-classical general relativity without requiring large curvatures or energy densities.

Sunday, November 16, 2008

Central limit theorem and securitization: how to build a CDO

I thought I would address the following related questions, on topics which are integral to the current financial crisis.

How does securitization work?

How can I transform a portfolio of BBB securities into a AAA security?

How does fractional reserve banking work?

How does the insurance industry work?

Before doing so, let me reprise my usual complaint against our shoddy liberal arts education system that leaves so many (including journalists, pundits, politicians and even most public intellectuals) ignorant of basic mathematical and scientific results -- in this case, probability and statistics. Many primitive peoples lack crucial, but simple, cognitive tools that are useful to understand the world around us. For example, the Amazonian Piraha have no word for the number ten. Similarly, the mathematical concepts related to the current financial crisis leave over 95 percent of our population completely baffled. If your Ivy League education didn't prepare you to understand the following, please ask for your money back.

Now on to our discussion...

Suppose you loan $1 to someone who has a probability p of default (not paying back the loan). For simplicity, assume that in event of default you lose the entire $1 (no collateral). Then, the expected loss on the loan is p dollars, and you should charge a fee (interest rate) r > p.

Will you make a profit? Well, with only a single loan you will either make a profit of r or a loss of (1-r) with probabilities (1-p) and p, respectively. There is no guarantee of profit, particularly if p is non-negligible.

But we can improve our situation by making N identical loans, assuming that the defaults are uncorrelated -- i.e., truly independent events. The central limit theorem tells us that, as N becomes large, the probability distribution of total losses approaches the normal or Gaussian distribution. The expected return is (r - p) times the total amount loaned, and, importantly, the variance of the rate of return goes to zero as 1/N. The probability of a rate of return substantially different from (r - p) goes to zero exponentially fast.
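Here is a minimal Monte Carlo sketch of this point. The parameters (p = 0.10, r = 0.12) and function names are illustrative choices, not taken from any actual bank's book:

```python
import random
import statistics

def portfolio_rate_of_return(n_loans, p=0.10, r=0.12):
    """Average return per dollar on a portfolio of n_loans independent
    $1 loans: each pays the fee r, but with probability p the entire
    $1 of principal is lost."""
    defaults = sum(random.random() < p for _ in range(n_loans))
    return r - defaults / n_loans  # fee minus realized loss rate

random.seed(0)
for n in (10, 100, 10_000):
    sims = [portfolio_rate_of_return(n) for _ in range(2000)]
    # mean stays near r - p = 0.02; the spread shrinks like 1/sqrt(n)
    print(n, round(statistics.mean(sims), 3), round(statistics.stdev(sims), 4))
```

Running this, the average return hovers around r - p = .02 for any n, but the standard deviation across simulations drops by roughly a factor of 10 for each factor of 100 in portfolio size, as the 1/N variance scaling predicts.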

There is a simple analogy with coin flips. If you flip a coin only a few times, the fraction of heads might be far from half. But as the number of flips grows, the fraction of heads approaches one half with certainty: the deviation from one half is governed by a Gaussian distribution whose width shrinks as one over the square root of the number of flips. The figure below shows the narrowing of the distribution as the number of trials grows -- eventually the uncertainty in the fraction of heads goes to zero.





We see that aggregating many independent risks into a portfolio allows a reduction in uncertainty in the total outcome. An insurance company can forecast its claims payments much more accurately when the pool of insured is large. A bank has less uncertainty in the expected losses on its loan portfolio as the number of (uncorrelated) loans increases. Charging a sufficiently high interest rate r almost guarantees a profit. Banks with a large number of depositors can also forecast what fraction of deposits will be necessary to cover withdrawals each day.

Now to the magic of tranching, slicing and dicing (financial engineering). Suppose BBB loans have a large probability of default: e.g., p = .1 = 1/10. How can we assemble a less risky security from a pool of BBB loans? An aggregation of many BBB loans will still have an expected loss rate of .1, but the uncertainty in this loss rate can be made quite small if the individual defaults are independent of each other. The CDO repackager creates AAA tranches by artificially separating out the first chunk of losses: someone is paid to absorb the expected loss (p times the total value of the loan pool) plus some additional cushion. Holders of the remaining AAA tranches are only responsible for losses beyond this first chunk. Since it is very improbable that the fractional loss will significantly exceed p, the chance of any AAA security suffering a loss is very low.
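A toy simulation makes the tranching arithmetic concrete. I'll assume a hypothetical pool of 1,000 loans with p = .1, an attachment point of 15% (the expected loss plus a 5% cushion), and a crude "systemic shock" knob to model correlated defaults -- all parameter values here are made up for illustration:

```python
import random

def pool_loss_fraction(n, p, systemic=0.0):
    """Fraction of principal lost on a pool of n $1 loans.  With
    probability `systemic` a common shock (say, a housing bust) triples
    every loan's default probability, correlating the defaults."""
    p_eff = min(1.0, 3 * p) if random.random() < systemic else p
    return sum(random.random() < p_eff for _ in range(n)) / n

def senior_hit_prob(attachment=0.15, n=1000, p=0.10, systemic=0.0, trials=5000):
    """Estimated probability that pool losses exceed the attachment
    point, i.e. that the 'AAA' senior tranche takes any loss at all."""
    return sum(pool_loss_fraction(n, p, systemic) > attachment
               for _ in range(trials)) / trials

random.seed(1)
print(senior_hit_prob(systemic=0.0))  # independent defaults: essentially 0
print(senior_hit_prob(systemic=0.1))  # correlated defaults: roughly 0.1
```

With independent defaults the loss fraction is binomial with mean .1 and standard deviation about .01, so exceeding .15 is a five-sigma event and the senior tranche looks bulletproof. Turn on even a modest common shock and the senior tranche is hit nearly every time the shock occurs -- the AAA rating was only as good as the independence assumption.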

Problems: did we estimate p properly, or did we use recent bubble data? (Increasing home prices masked high default rates in subprime mortgages.) Are the default probabilities actually uncorrelated? (Not if there was a nationwide housing bubble!) See my talk on the financial crisis for related discussion.

Deeper question: why could Wall Street banks generate such large profits merely by slicing and dicing pools of loans? Is it exactly analogous to the ordinary insurance or banking business, which makes its money by taking r to be a bit higher than p? (Hint: regulation often requires entities like pension funds and banks to hold AAA securities... was there a regulatory premium on AAA securities above and beyond the risk premium?)

Saturday, November 15, 2008

More Soros

Soros' central observation is that markets do not necessarily function properly, even when left to themselves. He is not referring to the usual causes cited for market failure, such as imperfect competition, externalities, information asymmetry, etc. Instead, he is attacking the fundamental assumption that markets are reliable processors of information, that they can be depended on to generate price signals which indicate how resources should be allocated within society.

New York Review of Books: ...This remarkable sequence of events can be understood only if we abandon the prevailing theory of market behavior. As a way of explaining financial markets, I propose an alternative paradigm that differs from the current one in two respects. First, financial markets do not reflect prevailing conditions accurately; they provide a picture that is always biased or distorted in one way or another. Second, the distorted views held by market participants and expressed in market prices can, under certain circumstances, affect the so-called fundamentals that market prices are supposed to reflect. This two-way circular connection between market prices and the underlying reality I call reflexivity.

More excerpts below.

...the current crisis differs from the various financial crises that preceded it. I base that assertion on the hypothesis that the explosion of the US housing bubble acted as the detonator for a much larger "super-bubble" that has been developing since the 1980s. The underlying trend in the super-bubble has been the ever-increasing use of credit and leverage. Credit—whether extended to consumers or speculators or banks—has been growing at a much faster rate than the GDP ever since the end of World War II. But the rate of growth accelerated and took on the characteristics of a bubble when it was reinforced by a misconception that became dominant in 1980 when Ronald Reagan became president and Margaret Thatcher was prime minister in the United Kingdom.

The misconception is derived from the prevailing theory of financial markets, which, as mentioned earlier, holds that financial markets tend toward equilibrium and that deviations are random and can be attributed to external causes. This theory has been used to justify the belief that the pursuit of self-interest should be given free rein and markets should be deregulated. I call that belief market fundamentalism and claim that it employs false logic. Just because regulations and all other forms of governmental interventions have proven to be faulty, it does not follow that markets are perfect.

Although market fundamentalism is based on false premises, it has served well the interests of the owners and managers of financial capital. The globalization of financial markets allowed financial capital to move around freely and made it difficult for individual states to tax it or regulate it. Deregulation of financial transactions also served the interests of the managers of financial capital; and the freedom to innovate enhanced the profitability of financial enterprises. The financial industry grew to a point where it represented 25 percent of the stock market capitalization in the United States and an even higher percentage in some other countries.

Since market fundamentalism is built on false assumptions, its adoption in the 1980s as the guiding principle of economic policy was bound to have negative consequences. Indeed, we have experienced a series of financial crises since then, but the adverse consequences were suffered principally by the countries that lie on the periphery of the global financial system, not by those at the center. The system is under the control of the developed countries, especially the United States, which enjoys veto rights in the International Monetary Fund.

Whenever a crisis endangered the prosperity of the United States—as for example the savings and loan crisis in the late 1980s, or the collapse of the hedge fund Long Term Capital Management in 1998—the authorities intervened, finding ways for the failing institutions to merge with others and providing monetary and fiscal stimulus when the pace of economic activity was endangered. Thus the periodic crises served, in effect, as successful tests that reinforced both the underlying trend of ever-greater credit expansion and the prevailing misconception that financial markets should be left to their own devices.

It was of course the intervention of the financial authorities that made the tests successful, not the ability of financial markets to correct their own excesses. But it was convenient for investors and governments to deceive themselves. The relative safety and stability of the United States, compared to the countries at the periphery, allowed the United States to suck up the savings of the rest of the world and run a current account deficit that reached nearly 7 percent of GNP at its peak in the first quarter of 2006. Eventually even the Federal Reserve and other regulators succumbed to the market fundamentalist ideology and abdicated their responsibility to regulate. ...

Financial engineering involved the creation of increasingly sophisticated instruments, or derivatives, for leveraging credit and "managing" risk in order to increase potential profit. An alphabet soup of synthetic financial instruments was concocted: CDOs, CDO squareds, CDSs, ABXs, CMBXs, etc. This engineering reached such heights of complexity that the regulators could no longer calculate the risks and came to rely on the risk management models of the financial institutions themselves. The rating companies followed a similar path in rating synthetic financial instruments, deriving considerable additional revenues from their proliferation. The esoteric financial instruments and techniques for risk management were based on the false premise that, in the behavior of the market, deviations from the mean occur in a random fashion. But the increased use of financial engineering set in motion a process of boom and bust. ...

It should be emphasized that this interpretation of the current situation does not necessarily follow from my model of boom and bust. Had the financial authorities succeeded in containing the subprime crisis—as they thought at the time they would be able to do—this would have been seen as just another successful test instead of the reversal point.

...Sophisticated financial engineering of the kind I have mentioned can render the calculation of margin and capital requirements extremely difficult if not impossible. In order to activate such requirements, financial engineering must also be regulated and new products must be registered and approved by the appropriate authorities before they can be used. Such regulation should be a high priority of the new Obama administration. It is all the more necessary because financial engineering often aims at circumventing regulations.

Take for example credit default swaps (CDSs), instruments intended to insure against the possibility of bonds and other forms of debt going into default, and whose price captures the perceived risk of such a possibility occurring. These instruments grew like Topsy because they required much less capital than owning or shorting the underlying bonds. Eventually they grew to more than $50 trillion in nominal size, which is a many-fold multiple of the underlying bonds and five times the entire US national debt. Yet the market in credit default swaps has remained entirely unregulated. AIG, the insurance company, lost a fortune selling credit default swaps as a form of insurance and had to be bailed out, costing the Treasury $126 billion so far. Although the CDS market may be eventually saved from the meltdown that has occurred in many other markets, the sheer existence of an unregulated market of this size has been a major factor in increasing risk throughout the entire financial system.

Since the risk management models used until now ignored the uncertainties inherent in reflexivity, limits on credit and leverage will have to be set substantially lower than those that were tolerated in the recent past. This means that financial institutions in the aggregate will be less profitable than they have been during the super-bubble and some business models that depended on excessive leverage will become uneconomical. The financial industry has already dropped from 25 percent of total market capitalization to 16 percent. This ratio is unlikely to recover to anywhere near its previous high; indeed, it is likely to end lower. This may be considered a healthy adjustment, but not by those who are losing their jobs.

In view of the tremendous losses suffered by the general public, there is a real danger that excessive deregulation will be succeeded by punitive reregulation. That would be unfortunate because regulations are liable to be even more deficient than the market mechanism. As I have suggested, regulators are not only human but also bureaucratic and susceptible to lobbying and corruption. It is to be hoped that the reforms outlined here will preempt a regulatory overkill.

Friday, November 14, 2008

IQ and longevity

IQ predicts longevity, even after controlling for social class.

A 12-minute test administered to an 11-year-old predicts longevity better than (adult) body mass index, total cholesterol, blood pressure or blood glucose, and at a similar level to smoking.

If you don't like the 12 minute Wonderlic test ("culturally biased!"), you can try the entirely abstract Raven's Progressive Matrices, which are even more g-loaded than the Wonderlic.



In the studies described below they most likely didn't use either the Wonderlic or RPM, but the results of all well-designed IQ tests are highly correlated.

Nature: Ten years ago, on 16 October 1998, I presented findings that people from Aberdeen with higher childhood IQs — measured at age 11 in the Scottish Mental Survey of 1932 — were significantly more likely to survive to age 76. It was at a psychology seminar at Glasgow Caledonian University, UK. For one audience member, the finding did not go down well. "So, you're saying that the thick die quick?" It was not a point of clarification; it was an accusation. The temperature in the room rose as the questioner railed against a result he found insulting and wanted to invalidate. Hadn't intelligence tests been discredited?

Actually, no. Scores from cognitive-ability tests (also known as intelligence tests or IQ tests) have validity that is almost unequalled in psychology [1]. A general cognitive-ability factor emerges from measures of diverse mental tasks, something that hundreds of data sets since 1904 have replicated. People's rankings on intelligence tests show high stability across almost the whole lifespan, are substantially heritable and are associated with important life outcomes — including educational achievements, occupational success and morbidity and mortality. More thumping confirmatory studies of the link between intelligence and mortality have appeared since our first work. One of these contains nearly a million Swedish men tested at around age 19 during military induction and followed for almost 20 years [2]. It shows a clear association: as intelligence test scores go up the scale, so too does the likelihood of survival over those two decades.

When we attempted to publish our original study, we came across a different complaint: the journals to which we submitted our initial findings said they found the link obvious. It was already well known that health inequalities are associated with different social backgrounds. That was deemed the likely explanation for the finding. But it has since been shown that childhood social class does not account for the association between childhood intelligence and later mortality [3].

Intelligence can predict mortality more strongly than body mass index, total cholesterol, blood pressure or blood glucose, and at a similar level to smoking [4]. But the reasons for this are still mysterious. That needs to change. Reducing health inequalities is a priority, and to do that we need to determine their causes.

It's plausible that intelligence might have positive benefits for longevity, but it could also have turned out the other way around -- controlled experiments in species such as C. elegans (worms) and E. coli (bacteria) have shown that increased learning ability comes with fitness costs such as decreased disease resistance or physical capability. I wonder if the positive IQ--longevity correlation in humans persists even in the high tail.


(Via GNXP.)

Bay area housing market has cracked

To all my friends in the bay area: I told you so, I told you so, I told you so... :-/

SF Chronicle:

“Twenty percent of Bay Area homeowners owe more on their mortgages than their homes are worth, according to a study being released today. This dubious distinction has entered the American lexicon as an all-too-familiar term - being underwater.

As home values continue to plunge, the real estate valuation service Zillow.com said that 20.76 percent of all homes in the nine-county Bay Area are underwater. The rate is much higher than the national average of 1 in 7 homes, or 14.3 percent. That’s because the Bay Area - like most of California - was a classic bubble market, where buyers in recent years paid overinflated prices for homes that now are rapidly losing value in the market downturn.”








(Via Barry Ritholtz.)

Thursday, November 13, 2008

Venn diagram for economics



A = set of people who can do math and are, perhaps, sympathetic to the idea of optimizing rational agents.

B = set of people with psychological insight into human decision making and group behavior.


(A and B) = the intersection of A and B = the people who can understand markets.

A - (A and B) = autistic people who like math - they make crazy assumptions about efficient markets.

B - (A and B) = average (non-economist) social scientist who is intimidated by math but can see economists are making crazy assumptions.


Most physicists start in A - (A and B) but have been trained to recognize when models fail to reflect reality, so they eventually migrate into (A and B). Most successful traders and hedge fund managers are in (A and B). Alan Greenspan only recently migrated there after 40 years ;-)


Related links: heterodox economics, Mirowski's Machine Dreams, intellectual honesty, confessions of an economist

Wednesday, November 12, 2008

Money men congressional testimony

Five big hedgies testified today before Congress, including Paulson, Griffin, Soros and Simons. Simons gave a very succinct and accurate summary of the causes of the current crisis.

Mr. Simons, the founder of Renaissance Technologies and a former mathematics professor who devises complex computer models to predict market moves, says there is plenty of blame to go around for the current financial crisis.

“In my view, the crisis has many causes: The regulators who took a hands-off position on investment bank leverage and credit default swaps; everyone along the mortgage-backed securities chain who should have blown a whistle rather than passing the problem on; and, in my opinion the most culpable, the rating agencies, which allowed sows’ ears to be sold as silk purses,” Mr. Simons says.

See my talk for related comments.

Before the money men testified, four finance professors had their say. From Andy Lo's (MIT Sloan School) testimony:

...[Government funding for training quants!]

6. All technology-focused industries run the risk of technological innovations temporarily exceeding our ability to use those technologies wisely. In the same way that government grants currently support the majority of Ph.D. programs in science and engineering, new funding should be allocated to major universities to greatly expand degree programs in financial technology.

...[Behavioral finance!]

Economists do not naturally gravitate toward behavioral explanations of economic phenomena, preferring, instead, the framework of rational deliberation by optimizing agents in a free-market context. And the ineluctable logic of neoclassical economics is difficult to challenge. However, recent research in the cognitive neurosciences has provided equally compelling experimental evidence that human decisionmaking consists of a complex blend of logical calculation and emotional response (see, for example, Damasio, 1994, Lo and Repin, 2002, and Lo, Repin, and Steenbarger, 2005). Under normal circumstances, that blend typically leads to decisions that work well in free markets. However, under extreme conditions, the balance between logic and emotion can shift, leading to extreme behavior such as the recent gyrations in stock markets around the world in September and October 2008.

This new perspective implies that preferences may not be stable through time or over circumstances, but are likely to be shaped by a number of factors, both internal and external to the individual, i.e., factors related to the individual's personality, and factors related to specific environmental conditions in which the individual is currently situated. When environmental conditions shift, we should expect behavior to change in response, both through learning and, over time, through changes in preferences via the forces of natural selection. These evolutionary underpinnings are more than simple speculation in the context of financial market participants. The extraordinary degree of competitiveness of global financial markets and the outsize rewards that accrue to the “fittest” traders suggest that Darwinian selection is at work in determining the typical profile of the successful investor. After all, unsuccessful market participants are eventually eliminated from the population after suffering a certain level of losses. For this reason, the hedge-fund industry is the Galapagos Islands of the financial system in that the forces of competition, innovation, natural selection are so clearly discernible in that industry.

This new perspective also yields a broader interpretation of free-market economics (see, for example, Lo, 2004, 2005), and presents a new rationale for regulatory oversight. Left to their own devices, market forces generally yield economically efficient outcomes under normal market conditions, and regulatory intervention is not only unnecessary but often counter-productive. However, under atypical market conditions—prolonged periods of prosperity, or episodes of great uncertainty—market forces cannot be trusted to yield the most desirable outcomes, which motivates the need for regulation. Of course, the traditional motivation for regulation—market failures due to externalities, natural monopolies, and public-goods characteristics—is no less compelling, and the desire to prevent sub-optimal behavior under these conditions provides yet another role for government intervention.

A simple example of this dynamic is the existence of fire codes enacted by federal, state, and local governments requiring all public buildings to have a minimum number of exits, well-lit exit signs, a maximum occupancy, and certain types of sprinklers, smoke detectors, and fire alarms. Why are fire codes necessary? In particular, given the costs associated with compliance, why not let markets determine the appropriate level of fire protection demanded by the public? Those seeking safer buildings should be willing to pay more to occupy them, and those willing to take the risk need not pay for what they deem to be unnecessary fire protection. A perfectly satisfactory outcome of this free-market approach should be a world with two types of buildings, one with fire protection and another without, leaving the public free to choose between the two according to their risk preferences.

But this is not the outcome that society has chosen. Instead, we require all new buildings to have extensive fire protection, and the simplest explanation for this state of affairs is the recognition—after years of experience and many lost lives—that we systematically underestimate the likelihood of a fire. In fact, assuming that improbable events are impossible is a universal human trait (see, for example, Plous, 1993, and Slovic, 2000), hence the typical builder will not voluntarily spend significant sums to prepare for an event that most individuals will not value because they judge the likelihood of such an event to be nil. Of course, experience has shown that fires do occur, and when they do, it is too late to add fire protection. What free-market economists interpret as interference with Adam Smith’s invisible hand may, instead, be a mechanism for protecting ourselves from our own behavioral blind spots.

Quants, speak!

An AP science reporter would like to interview people trained in science now working as quants. Let me know if you are willing to talk to her.

Tuesday, November 11, 2008

Michael Lewis on the subprime bubble



Michael Lewis nails it in this long Portfolio article. It's the single best piece I've read on the subject.

...In the two decades since then, I had been waiting for the end of Wall Street. The outrageous bonuses, the slender returns to shareholders, the never-ending scandals, the bursting of the internet bubble, the crisis following the collapse of Long-Term Capital Management: Over and over again, the big Wall Street investment banks would be, in some narrow way, discredited. Yet they just kept on growing, along with the sums of money that they doled out to 26-year-olds to perform tasks of no obvious social utility. The rebellion by American youth against the money culture never happened. Why bother to overturn your parents’ world when you can buy it, slice it up into tranches, and sell off the pieces?

At some point, I gave up waiting for the end. There was no scandal or reversal, I assumed, that could sink the system.

...Enter Greg Lippman, a mortgage-bond trader at Deutsche Bank. He arrived at FrontPoint bearing a 66-page presentation that described a better way for the fund to put its view of both Wall Street and the U.S. housing market into action. The smart trade, Lippman argued, was to sell short not New Century’s stock but its bonds that were backed by the subprime loans it had made. Eisman hadn’t known this was even possible—because until recently, it hadn’t been. But Lippman, along with traders at other Wall Street investment banks, had created a way to short the subprime bond market with precision.

Here’s where financial technology became suddenly, urgently relevant. The typical mortgage bond was still structured in much the same way it had been when I worked at Salomon Brothers. The loans went into a trust that was designed to pay off its investors not all at once but according to their rankings. The investors in the top tranche, rated AAA, received the first payment from the trust and, because their investment was the least risky, received the lowest interest rate on their money. The investors who held the trusts’ BBB tranche got the last payments—and bore the brunt of the first defaults. Because they were taking the most risk, they received the highest return. Eisman wanted to bet that some subprime borrowers would default, causing the trust to suffer losses. The way to express this view was to short the BBB tranche. The trouble was that the BBB tranche was only a tiny slice of the deal.

But the scarcity of truly crappy subprime-mortgage bonds no longer mattered. The big Wall Street firms had just made it possible to short even the tiniest and most obscure subprime-mortgage-backed bond by creating, in effect, a market of side bets. Instead of shorting the actual BBB bond, you could now enter into an agreement for a credit-default swap with Deutsche Bank or Goldman Sachs. It cost money to make this side bet, but nothing like what it cost to short the stocks, and the upside was far greater.

...But he couldn’t figure out exactly how the rating agencies justified turning BBB loans into AAA-rated bonds. “I didn’t understand how they were turning all this garbage into gold,” he says. He brought some of the bond people from Goldman Sachs, Lehman Brothers, and UBS over for a visit. “We always asked the same question,” says Eisman. “Where are the rating agencies in all of this? And I’d always get the same reaction. It was a smirk.” He called Standard & Poor’s and asked what would happen to default rates if real estate prices fell. The man at S&P couldn’t say; its model for home prices had no ability to accept a negative number. “They were just assuming home prices would keep going up,” Eisman says.

As an investor, Eisman was allowed on the quarterly conference calls held by Moody’s but not allowed to ask questions. The people at Moody’s were polite about their brush-off, however. The C.E.O. even invited Eisman and his team to his office for a visit in June 2007. By then, Eisman was so certain that the world had been turned upside down that he just assumed this guy must know it too. “But we’re sitting there,” Daniel recalls, “and he says to us, like he actually means it, ‘I truly believe that our rating will prove accurate.’ And Steve shoots up in his chair and asks, ‘What did you just say?’ as if the guy had just uttered the most preposterous statement in the history of finance. He repeated it. And Eisman just laughed at him.”

“With all due respect, sir,” Daniel told the C.E.O. deferentially as they left the meeting, “you’re delusional.”

This wasn’t Fitch or even S&P. This was Moody’s, the aristocrats of the rating business, 20 percent owned by Warren Buffett. And the company’s C.E.O. was being told he was either a fool or a crook by one Vincent Daniel, from Queens.

...That’s when Eisman finally got it. Here he’d been making these side bets with Goldman Sachs and Deutsche Bank on the fate of the BBB tranche without fully understanding why those firms were so eager to make the bets. Now he saw. There weren’t enough Americans with shitty credit taking out loans to satisfy investors’ appetite for the end product. The firms used Eisman’s bet to synthesize more of them. Here, then, was the difference between fantasy finance and fantasy football: When a fantasy player drafts Peyton Manning, he doesn’t create a second Peyton Manning to inflate the league’s stats. But when Eisman bought a credit-default swap, he enabled Deutsche Bank to create another bond identical in every respect but one to the original. The only difference was that there was no actual homebuyer or borrower. The only assets backing the bonds were the side bets Eisman and others made with firms like Goldman Sachs. Eisman, in effect, was paying to Goldman the interest on a subprime mortgage. In fact, there was no mortgage at all.

Fear and loathing of the plutocracy

Did Paulson and Treasury deliberately overpay for shares in Goldman and eight other banks? What did taxpayers get for their $125 billion, versus what Buffett got for his $5 billion Goldman investment just a few weeks earlier? See Black-Scholes analysis here.

Will banks like Citi pay out bonuses using bailout funds? Citi plans to distribute $26 billion after receiving $25 billion from you and me. Will the public let them get away with it? Given the tremendous value destruction they've caused, on Wall Street and beyond, how can top executives at these companies justify any bonus compensation for themselves?

After leaving the Clinton administration, how did Rahm Emanuel make $16 million in two years (at Wasserstein Perella) with no prior business experience? Did you really think Obama was going to be a radical left president?

Michael Lewis, Bloomberg commentary:

It may still take awhile before Wall Street finally accepts that it won't get paid.

At the moment, as their bony fingers fondle the new taxpayer loot, the firms appear to believe that they might still fool the public into thinking that bonus money isn't taxpayer money.

``We've responded appropriately to the attorney general's request for information about 2008 bonus pools,'' a Citigroup Inc. spokeswoman told Bloomberg News recently, ``and confirmed that we will not use TARP funds for compensation.'' But as the Bloomberg report noted, ``she declined to elaborate.''

As well she might! For if the Citigroup spokeswoman had elaborated she would have needed to say something like this: ``We're still trying to figure out how the $25 billion we've already taken of taxpayers' money has nothing to do with the $26 billion we're planning to hand out to our highly paid employees in 2008 (up 4 percent from 2007!). But it's a tricky problem because, when you think about it, it's all the same money.''

...If you are one of those people currently sitting inside a big Wall Street firm praying for some kind of bonus it may already have dawned on you that you need to rethink your approach. It's no longer any use to hint darkly that they had better fork over serious sticks or you'll bolt for Morgan Stanley. There's no point even in thinking up clever ways to make profits for your firm: who cares how much money you bring into Goldman Sachs if the U.S. Congress doesn't allow Goldman Sachs to pay bonuses?

The moment your firm accepted taxpayer money, you lost control of your money machine. ...

More from Michael Lewis at Portfolio Magazine (via Paul Kedrosky).


Figure from earlier post: Is the finance boom over?

Monday, November 10, 2008

AIG watch

Treasury is setting up a special vehicle to buy up (face value) $70 billion in "troubled" CDOs insured by AIG CDS. At 50 cents on the dollar they intend to spend $30 billion in taxpayer dollars and $5 billion of AIG's money. This should reduce the collateral calls on AIG, although it's not clear that all the entities holding AIG CDS actually own the referenced CDO. Who at Treasury did the calculation to confirm that the ultimate value of the CDOs in question is over 50 percent of face value? If I'm holding the CDO and corresponding CDS contract, and I'm confident that the government is behind AIG, why should I sell at a 50 percent loss?
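
The arithmetic behind the plan is easy to check; a minimal sketch, using only the round figures quoted above:

```python
# Rough arithmetic for the proposed AIG vehicle (figures from the post above).
face_value = 70e9        # face value of the "troubled" CDOs insured by AIG CDS
price = 0.50             # intended purchase price: 50 cents on the dollar
treasury_capital = 30e9  # taxpayer contribution
aig_capital = 5e9        # AIG's contribution

purchase_cost = face_value * price
total_capital = treasury_capital + aig_capital

print(purchase_cost / 1e9)   # 35.0 -- billions needed to buy the CDOs
print(total_capital / 1e9)   # 35.0 -- capital just covers the purchase
```

The implicit bet is that the ultimate recovery on these CDOs exceeds 50 percent of face value; if recovery falls short, the vehicle's $35 billion in capital takes the loss.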

Kashkari remarks on TARP.

WSJ: ...The government's initial intervention was driven by concern that AIG's failure to meet its obligations in the credit default swap market would create a global financial meltdown. ...

Under the revised deal, AIG is expected to transfer the troubled holdings into two separate entities.

The first such vehicle is to be capitalized with $30 billion from the government and $5 billion from AIG. That money will be used to acquire the underlying securities with a face value of $70 billion that AIG agreed to insure with the credit default swaps. These securities, known as collateralized debt obligations, are thinly traded investments that include pools of loans. The vehicle will seek to acquire the securities from their trading partners on the CDS contracts for about 50 cents on the dollar.

The securities in question don't account for all of AIG's credit default swap exposure but are connected to the most troubled assets. The government may be betting that its involvement will encourage AIG's trading partners to sell the securities tied to the CDS contracts to the new entity.

Once it holds the securities, AIG could cancel the credit default swaps and take possession of the collateral it had posted to back the contracts. The total collateral at stake is about $30 billion.

It may also have some unintended consequences across the markets. For the plan to work, AIG's trading partners -- the banks and financial institutions that are on the other side of its credit-default-swap contracts -- may have to agree to any changes in the terms of their agreements with AIG.

Sunday, November 09, 2008

Wealth effect, consumer spending and recession

How much is consumer spending likely to fall as a consequence of stock and home price declines? If we assume a $10 trillion decline in housing and equity wealth, and a wealth effect of .04, we arrive at a decline in consumer spending of $400 billion. So, roughly 2-3 percent of GDP.
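
The back-of-the-envelope calculation can be written out explicitly; the GDP figure below is my own round number for 2008, not from the post:

```python
# Wealth-effect estimate of the decline in consumer spending.
wealth_decline = 10e12  # assumed drop in housing + equity wealth (dollars)
wealth_effect = 0.04    # marginal propensity to consume out of wealth
gdp = 14e12             # rough 2008 U.S. GDP (my assumption)

spending_drop = wealth_effect * wealth_decline
print(round(spending_drop / 1e9))           # 400 (billion dollars)
print(round(100 * spending_drop / gdp, 1))  # 2.9 (percent of GDP)
```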

Even if we get the financial crisis fixed, we can expect a serious recession.



The figure (click for larger version) is from this 2005 paper by Case, Quigley and Shiller. Their analysis suggests a (housing) wealth effect of about .04.

Friday, November 07, 2008

Catastrophe bonds and the investor's choice problem

Consider the following proposition. You put up an amount of capital X for one year. There is a small probability p (e.g., p = .01) that you will lose the entire amount. With probability (1-p) you get the entire amount back. What interest rate (fee) should you charge to participate?

What I've just described is a catastrophe bond. A catastrophe bond allows an insurer to transfer the tail risk from a natural disaster (hurricane, earthquake, fire, etc.) to an investor who is paid appropriately. How can we decide the appropriate fee for taking on this risk? It's an example of the fundamental investor's choice problem. That is, what is the value of a gamble specified by a given probability distribution over a set of payoffs? (Which of two distributions do you prefer?) One would think that the answer depends on individual risk preferences or utility functions.
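
As a baseline before bringing in risk preferences, consider the fee a purely risk-neutral investor would require just to break even in expectation. This is a sketch under my own simplifying assumptions (fee paid at maturity, no time discounting), not John Seo's method:

```python
def break_even_fee(p):
    """Fee rate f on capital X such that a risk-neutral investor breaks even:
    with probability (1 - p) the investor receives X * (1 + f), and with
    probability p loses everything, so (1 - p) * (1 + f) = 1."""
    return p / (1 - p)

# For the p = .01 example in the post:
print(round(break_even_fee(0.01), 4))  # 0.0101 -- essentially the expected loss
```

Any risk-averse investor will demand more than this; how much more is exactly the investor's choice problem.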

Our colloquium speaker last week was John Seo of Fermat Capital, a hedge fund that trades catastrophe bonds. Actually, John pioneered the business at Lehman Brothers before starting Fermat. He's yet another deep-thinking physicist who ended up in finance. Indeed, he claims to have made some fundamental progress on the investor's choice problem. His approach involves a kind of discounting in probability space, as opposed to the now familiar discounting of cash flows in time. I won't discuss the details further, since they are slightly proprietary.

I can discuss aspects of the cat bond market. Apparently the global insurance industry cannot self-insure against 1-in-100-year risks. That is, disasters that have occurred historically with that frequency are capable of taking down the whole industry (e.g., huge earthquakes in Japan or California). Therefore, it is sensible for insurers to sell some of that risk. Who wants to buy a cat bond? Well, pension funds, which manage the largest pools of capital on the planet, are always on the lookout for sources of return whose risks are uncorrelated with those of stocks, bonds and other existing financial instruments. Portfolio theory suggests that a pension fund should put a few percent of its capital into cat bonds, and that's how John has raised the $2 billion he currently has under management. The market answer to the question I posed in the first paragraph is roughly LIBOR plus (4-6) times the expected loss. For a once-in-a-century disaster, this return is LIBOR plus (4-6) percent or so. Sounds like a good trade for the pension fund as long as the event risk is realistically evaluated.
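
That market rule of thumb can be expressed in a couple of lines; the LIBOR value below is an illustrative assumption, not a figure from the talk:

```python
def cat_bond_coupon(libor, annual_loss_prob, multiple):
    """Market pricing rule quoted in the post: LIBOR plus a multiple
    (roughly 4-6) of the expected loss. With the full principal lost on a
    trigger event, expected annual loss equals the trigger probability."""
    return libor + multiple * annual_loss_prob

# A 1-in-100-year event (p = .01), with LIBOR assumed at 3%:
for m in (4, 6):
    print(round(cat_bond_coupon(0.03, 0.01, m), 4))  # 0.07 then 0.09
```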

Note there is no leverage or counterparty risk in these transactions. An independent vehicle is created which holds the capital X, invested in AAA securities (no CDOs, please :-). If the conditions of the contract are triggered, this entity turns the capital over to the insurance company. Otherwise, the assets are returned at the end of the term.

In the colloquium, John reviewed the origins of present value analysis, going back to Fibonacci, Fermat and Pascal. See Mark Thoma, who also attended, for more discussion.