Wednesday, December 31, 2008

Paul Graham on cities

While I broadly agree with Paul's points, he is thinking too much like a VC when he talks about Silicon Valley. For many entrepreneurs and geeks in the valley, the message is "build something cool" -- not "you should be more powerful." It's true that after some ups and downs in the valley most people would want to be more powerful -- would want to be able to ensure funding for a startup or technology that really deserved it -- but I think that power is more a means than an end for the people who are the real heart of the place.

I'd still rather live in Berkeley :-) Via Educating Silicon.

Cities and Ambition

Great cities attract ambitious people. You can sense it when you walk around one. In a hundred subtle ways, the city sends you a message: you could do more; you should try harder.

The surprising thing is how different these messages can be. New York tells you, above all: you should make more money. There are other messages too, of course. You should be hipper. You should be better looking. But the clearest message is that you should be richer.

What I like about Boston (or rather Cambridge) is that the message there is: you should be smarter. You really should get around to reading all those books you've been meaning to.

When you ask what message a city sends, you sometimes get surprising answers. As much as they respect brains in Silicon Valley, the message the Valley sends is: you should be more powerful.

That's not quite the same message New York sends. Power matters in New York too of course, but New York is pretty impressed by a billion dollars even if you merely inherited it. In Silicon Valley no one would care except a few real estate agents. What matters in Silicon Valley is how much effect you have on the world. The reason people there care about Larry and Sergey is not their wealth but the fact that they control Google, which affects practically everyone.


How much does it matter what message a city sends? Empirically, the answer seems to be: a lot. You might think that if you had enough strength of mind to do great things, you'd be able to transcend your environment. Where you live should make at most a couple percent difference. But if you look at the historical evidence, it seems to matter more than that. Most people who did great things were clumped together in a few places where that sort of thing was done at the time.

...I'd always imagined Berkeley would be the ideal place—that it would basically be Cambridge with good weather. But when I finally tried living there a couple years ago, it turned out not to be. The message Berkeley sends is: you should live better. Life in Berkeley is very civilized. It's probably the place in America where someone from Northern Europe would feel most at home. But it's not humming with ambition.

In retrospect it shouldn't have been surprising that a place so pleasant would attract people interested above all in quality of life. Cambridge with good weather, it turns out, is not Cambridge. The people you find in Cambridge are not there by accident. You have to make sacrifices to live there. It's expensive and somewhat grubby, and the weather's often bad. So the kind of people you find in Cambridge are the kind of people who want to live where the smartest people are, even if that means living in an expensive, grubby place with bad weather.

As of this writing, Cambridge seems to be the intellectual capital of the world. I realize that seems a preposterous claim. What makes it true is that it's more preposterous to claim about anywhere else. American universities currently seem to be the best, judging from the flow of ambitious students. And what US city has a stronger claim? New York? A fair number of smart people, but diluted by a much larger number of neanderthals in suits. The Bay Area has a lot of smart people too, but again, diluted; there are two great universities, but they're far apart. Harvard and MIT are practically adjacent by West Coast standards, and they're surrounded by about 20 other colleges and universities. [1]

Cambridge as a result feels like a town whose main industry is ideas, while New York's is finance and Silicon Valley's is startups.


When you talk about cities in the sense we are, what you're really talking about is collections of people. For a long time cities were the only large collections of people, so you could use the two ideas interchangeably. But we can see how much things are changing from the examples I've mentioned. New York is a classic great city. But Cambridge is just part of a city, and Silicon Valley is not even that. (San Jose is not, as it sometimes claims, the capital of Silicon Valley. It's just 178 square miles at one end of it.)

Maybe the Internet will change things further. Maybe one day the most important community you belong to will be a virtual one, and it won't matter where you live physically. But I wouldn't bet on it. The physical world is very high bandwidth, and some of the ways cities send you messages are quite subtle.

One of the exhilarating things about coming back to Cambridge every spring is walking through the streets at dusk, when you can see into the houses. When you walk through Palo Alto in the evening, you see nothing but the blue glow of TVs. In Cambridge you see shelves full of promising-looking books. Palo Alto was probably much like Cambridge in 1960, but you'd never guess now that there was a university nearby. Now it's just one of the richer neighborhoods in Silicon Valley. [2]

A city speaks to you mostly by accident—in things you see through windows, in conversations you overhear. It's not something you have to seek out, but something you can't turn off. One of the occupational hazards of living in Cambridge is overhearing the conversations of people who use interrogative intonation in declarative sentences. But on average I'll take Cambridge conversations over New York or Silicon Valley ones.

A friend who moved to Silicon Valley in the late 90s said the worst thing about living there was the low quality of the eavesdropping. At the time I thought she was being deliberately eccentric. Sure, it can be interesting to eavesdrop on people, but is good quality eavesdropping so important that it would affect where you chose to live? Now I understand what she meant. The conversations you overhear tell you what sort of people you're among.

...What cities provide is an audience, and a funnel for peers. These aren't so critical in something like math or physics, where no audience matters except your peers, and judging ability is sufficiently straightforward that hiring and admissions committees can do it reliably. In a field like math or physics all you need is a department with the right colleagues in it. It could be anywhere—in Los Alamos, New Mexico, for example.

It's in fields like the arts or writing or technology that the larger environment matters. In these the best practitioners aren't conveniently collected in a few top university departments and research labs—partly because talent is harder to judge, and partly because people pay for these things, so one doesn't need to rely on teaching or research funding to support oneself. It's in these more chaotic fields that it helps most to be in a great city: you need the encouragement of feeling that people around you care about the kind of work you do, and since you have to find peers for yourself, you need the much larger intake mechanism of a great city.

Tuesday, December 30, 2008

LSE public lectures

A treasure trove of interesting podcasts from the London School of Economics here. Recent lectures are also available at iTunes UChannel. Some specific recommendations below. See also talks by Robert Shiller, Mohamed El Erian, Martin Wolf, Paul Kennedy, Mearsheimer-Walt, Craig Venter and many others.

I'm always on the hunt for high quality data feeds ;-)

The China Challenge as Myth and Reality
Speaker: Professor Chen Jian

Few countries have experienced changes as dramatic as did China in the past century - and the past quarter century in particular. From a "revolutionary country" to a "status quo power," and from an "outsider" to an "insider" of the existing international system, the realities of the grand transformation in China's state, society and international outlook have often been obscured by all kinds of myths. For the purpose of highlighting the realities and deconstructing the myths, Professor Chen discusses the origins, processes and implications of China's rise from the perspective of a historian of China's international relations. Chen Jian is the Philippe Roman Chair in History and International Affairs at LSE.

The War for Wealth: The true story of globalization and how Western society can survive
Speaker: Gabor Steingart

Globalization is the defining force of our lifetime, but most politicians have not understood the complexity of the process. Thus argues Gabor Steingart, in his controversial and thought-provoking new book The War for Wealth: The True Story of Globalization.

Skills, Rights and Resources in the East Asian Path to Development
Speaker: Professor Kenneth Pomeranz

This lecture traces evolving relationships among skills, bargaining power, and East Asian economic development. Kenneth Pomeranz is UCI Chancellor's Professor of History at the University of California-Irvine.

Monday, December 29, 2008

Globalization and supercomputing

From the NYTimes, 100 fastest supercomputers by location, and historical timeline of computing power. Click for larger images.

Friday, December 26, 2008

Friday Night Lights

Over the break I had some time to catch up on the television series Friday Night Lights (FNL). The show is loosely based on the book by H.G. Bissinger:

Friday Night Lights: A Town, a Team, and a Dream is a 1990 non-fiction book written by H. G. Bissinger. The book follows the story of the 1988 Permian High School Panthers football team from Odessa as they made a run towards the Texas state championship. While originally intended to be a Hoosiers-type chronicle of high school sports holding a small town together, the final book ended up being critical about life in the town of Odessa, Texas, complete with portraits of what Bissinger called "the ugliest racism" he has ever witnessed, as well as misplaced priorities, where football conquered most aspects of the town and academics were ignored for the sake of championships.

Bissinger was a sports writer for The Philadelphia Inquirer when he decided to write a book about high school sports. After a search, he settled on Odessa, TX, and its famous Permian Panthers. The Panthers had a long, rich history of winning in Texas's AAAA and AAAAA divisions, having won championships in 1965, 1972, 1980 and 1984 by the time Bissinger and his family moved from Philadelphia to Odessa. He spent the entire football season with the Permian Panther players, their families, the coaches, and even many of the townspeople, in an effort to understand the town, its football culture, and what created such madness for the team.

In 2002, Sports Illustrated named Friday Night Lights the fourth-greatest book ever written about sports.

Bissinger's book should be of interest to anyone who wants to understand small town American life and its microcosm, the local high school. If you like FNL, I also recommend The Courting of Marcus Dupree, by Willie Morris, about the recruiting of a superstar running back from Mississippi. (Link goes to Google Books version.) FNL didn't feature any real football talents -- although the team was very successful, none of the players went on to big time college careers. Dupree, on the other hand, was one of the top high school backs of all time, breaking Herschel Walker's touchdown record. Read pages 34-44, which will teach you much more about outliers than anything written by Malcolm Gladwell. (Dupree, despite his small school background, had something that the Permian players, with their expensive facilities, highly paid coaching staff and unrivaled football mania, just couldn't match -- raw, god-given talent ;-)

NYTimes review: IN 1964, the town of Philadelphia, Miss., became the symbol of much that was wrong with America - if only by virtue of its having provided the setting for the murders of three young civil-rights workers, Michael Schwerner, Andrew Goodman and James Earl Chaney. Just 17 years later, in the autumn of 1981, Philadelphia became the focus of more benign attention. It was the site of a competition among the nation's leading college football powers to recruit the most highly touted high-school player in the country. ...

The FNL television series is fantastic, and has a devoted following despite mediocre ratings. To get a sense of it, have a look at the following clips. (Note, I am finding on Safari that the hulu embedded players don't work well, which is why I also link directly to the hulu pages where you can view the clips.)

The hard nosed side of big time high school sports. view

Family life in America today. view

Hardscrabble in Texas. view

If you liked those clips, watch this full episode and this one.

Wednesday, December 24, 2008

Peace on Earth, good will to men 2008

For years, when asked what I wanted for Christmas, I've been replying peace on earth, good will toward men :-)

No one ever seems to recognize that this comes from the Bible -- Luke 2:14, to be precise!

Linus said it best in A Charlie Brown Christmas:

And there were in the same country shepherds abiding in the field, keeping watch over their flock by night.

And, lo, the angel of the Lord came upon them, and the glory of the Lord shone round about them: and they were sore afraid.

And the angel said unto them, Fear not: for, behold, I bring you good tidings of great joy, which shall be to all people.

For unto you is born this day in the city of David a Saviour, which is Christ the Lord.

And this shall be a sign unto you; Ye shall find the babe wrapped in swaddling clothes, lying in a manger.

And suddenly there was with the angel a multitude of the heavenly host praising God, and saying,

Glory to God in the highest, and on earth peace, good will toward men.

Merry Christmas!

Monday, December 22, 2008

More than this

I discovered today that one of my colleagues also loves this song. Bryan Ferry of Roxy Music had real artistic insight into chance and determinism. I like the 10,000 Maniacs version better, which is in the lower screen.

More Than This

I could feel at the time
There was no way of knowing
Fallen leaves in the night
Who can say where they're blowing

As free as the wind
And hopefully learning
Why the sea on the tide
Has no way of turning

More than this - there is nothing
More than this - tell me one thing
More than this - there is nothing

It was fun for a while
There was no way of knowing
Like a dream in the night
Who can say where we're going

No care in the world
Maybe I'm learning
Why the sea on the tide
Has no way of turning

More than this - there is nothing
More than this - tell me one thing
More than this - there is nothing

Sunday, December 21, 2008

Gladwell on Outliers on Charlie Rose

The interview was better than I had expected, but then my expectations were not high. At least Gladwell stops short of completely embracing the "there is no talent, it's all effort" line.

However, at about 13 minutes in we get a huge dose of politically correct pseudoscience and poor logic: Asians are good at math because -- get this -- rice farming was labor intensive. Tolerance for hard work was transmitted culturally and had no impact on genes -- "we know this" says Gladwell :-) I don't suppose Gladwell has looked at any transnational adoption studies, which remove the cultural component...

At 23 and a half minutes we get the "IQ above 120 doesn't matter" claim -- see here for some pretty strong evidence against that.

See here for earlier comments on Outliers, and here for a discussion of success and talent (the meaning of correlation).

Saturday, December 20, 2008

Mickey Rourke returns

He was one of my favorite actors in the 80s. His new movie, The Wrestler, directed by Darren Aronofsky (with a theme song contributed by Bruce Springsteen, co-starring Marisa Tomei), is getting rave reviews.

Trailer for Diner:

Trailer for The Pope of Greenwich Village:

"It's all in da gene!"

"Dey took my tumb!" -- Eric Roberts steals the scene:

A great interview in which Rourke pulls no punches:

Wednesday, December 17, 2008

Recent evolution in humans

Did evolution stop once modern humans emerged in Africa? Or, to the contrary, did it speed up?

This question is addressed in the forthcoming book by Greg Cochran and Henry Harpending: The 10,000 Year Explosion. Harpending is an anthropologist and Cochran a physicist. Together they have produced a number of interesting research ideas in the area of human evolution (see below). I've read a pre-release draft of the book and recommend it highly. If you enjoyed Guns, Germs and Steel by Jared Diamond, then you owe it to yourself to read this book, which directly engages Diamond's thesis that geography (not DNA) is destiny.

I discussed research supporting accelerated recent human evolution by Cochran, Harpending and collaborators in an earlier post: We are all mutants now. The figure below is from a Times article by Nicholas Wade.

We are all mutants now: Some interesting new science suggests that human evolution has accelerated in the last tens of thousands of years. The study by Hawks, Wang, Cochran, Harpending and Moyzis (of UW-Madison, Affymetrix, U Utah and UC Irvine) uses linkage disequilibrium tests on HapMap SNP data to determine that roughly 7% of all genes have undergone strong selection recently. The method looks for regions of DNA with similar SNP patterns. If an advantageous gene swept through a population in a relatively short time, replacing other variants, then the pattern of nucleotide polymorphisms in that area of the chromosome will be particularly uniform throughout the group. The results imply that we are all descended from mutants who, relatively recently, out-competed and replaced their contemporaries. The distribution of mutations is not uniform in different geographical populations (i.e., races). Recent evolution is causing genetic divergence, not convergence.

There is a good theoretical argument for why evolution may speed up due to population growth. Given a particular probability distribution for producing beneficial mutations, a large population implies a faster rate of incidence of such mutations. Because reproductive dynamics leads to exponential solutions (i.e., a slight increase in expected number of offspring compounds rapidly), the time required for an advantageous allele to sweep through a population only grows logarithmically with the population, while the rate of incidence grows linearly.

To elaborate on the last point, consider the set of mutations that are sufficiently advantageous that they would sweep through a population of N humans (i.e. reach fixation) in some specified period of time, such as 5000 years. If the probability of such a mutation is p, the rate of occurrence in the population is proportional to pN. Now imagine the population of the group increases to 100N. The rate of mutations is then much higher -- 100pN -- but the time necessary for fixation has only increased by the logarithm of 100 since selective advantage works exponentially: the population fraction with the mutant gene grows as exp( r t ), where r is the reproductive advantage and t is time. This rather obvious point -- that linear beats log -- suggests that the rate of evolution will speed up as population size increases. (A possible loophole is if the probability of mutations as a function of relative advantage is itself an exponential function, and falls off rapidly with increasing advantage.) If the Hawks et al. results are any guide, as many as 7% of all genes have been under intense selection in the last 10-50,000 years. (See here for another summary of the research with a nice illustration of how linkage disequilibrium arises due to favorable mutations.) Importantly, the variants that reached fixation over this period are different in different geographical regions.
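The "linear beats log" point can be checked numerically. Here is a minimal sketch under simplified assumptions (deterministic logistic growth of the allele frequency, a fixed 2% reproductive advantage; the specific numbers are illustrative, not from the post):

```python
import math

def fixation_time(N, r, threshold=0.99):
    """Generations for an allele to grow from one copy (frequency 1/N)
    to `threshold`, using the logistic solution of exp(r t) growth:
    x(t) = x0 * e^(rt) / (1 - x0 + x0 * e^(rt)).  Solving for t gives
    t = ln( threshold*(1-x0) / (x0*(1-threshold)) ) / r."""
    x0 = 1.0 / N
    return math.log(threshold * (1 - x0) / (x0 * (1 - threshold))) / r

r = 0.02  # assumed 2% reproductive advantage per generation
t_small = fixation_time(10_000, r)      # sweep time in a small population
t_big = fixation_time(1_000_000, r)     # sweep time in a 100x larger one

# Mutation supply scales linearly (100x the people, 100x the mutations),
# while the sweep takes only ~ln(100)/r ~ 230 extra generations:
extra = t_big - t_small
```

Running this, `extra` comes out to about 230 generations, while the 100-fold larger population produces beneficial mutations at 100 times the rate -- which is the whole argument in two numbers.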

Thus civilization, with its consequently larger populations supported by agriculture, enhanced rather than suppressed the rate of human evolution.

A related question is whether selection pressure remained strong after the development of civilization. Perhaps reproductive success became largely decoupled from genetic influences once humans became civilized? Not only is this implausible, but it seems to be directly contradicted by evidence. The graph below, based on English inheritance records, shows that the rich gradually out-reproduced the poor: the wealthy had more than twice as many surviving children as the poor. (Note the range of inheritances in the graph covers the middle class to moderately wealthy; the poor and very rich are not shown.) Thus, in this period of history wealth was a good proxy for reproductive success. Genes which were beneficial for the accrual of wealth (e.g., for intelligence, self-discipline, delay of gratification, etc.) would have become more prevalent over time. In a simple population model, any lineage that remained consistently poor over a few hundred year period would contribute almost zero to today's population of Britons.

The graph is taken from this paper:

Survival of the Richest: The Malthusian Mechanism in Pre-Industrial England


Fundamental to the Malthusian model of pre-industrial society is the assumption that higher income increased reproductive success. Despite the seemingly inescapable logic of this model, its empirical support is weak. We examine the link between income and net fertility using data from wills on reproductive success, social status and income for England 1585–1638. We find that for this society, close to a Malthusian equilibrium, wealth robustly predicted reproductive success. The richest testators left twice as many children as the poorest. Consequently, in this static economy, social mobility was predominantly downwards. The result extends back to at least 1250 in England.

See also my review of Clark's A Farewell to Alms, and this video of a talk by Clark. When Clark wrote the book he wasn't sure whether it was genetic change or cultural change that led to the industrial revolution in England. In the video lecture he comments that he has since become convinced it was largely genetic. That doesn't jibe with the back of the envelope calculation I give below -- even in the optimistic case (largest effect) it would seem to take a thousand years to have a big shift in overall population characteristics.

Here's a very crude back of the envelope calculation: if, in a brutal Malthusian setting, the top 10% in wealth were to out-reproduce the average by 20% per generation, then after only 10 generations or so (say 200-300 years), essentially everyone in the population would trace their heritage in some way to this group. In our population the average IQ of the high income group is about +.5 SD relative to the average. If the heritability of IQ is .5, then in an ideal case we could see a selection-driven increase of +.25 SD every 200-300 years, or +1 SD per millennium. This is highly speculative, of course, and oversimplified, but it shows that there is (plausibly) no shortage of selection pressure to drive noticeable, even dramatic, change. If the estimate is too high by an order of magnitude (the rich group doesn't directly replace the others; there is inevitably a lot of intermarriage between descendants of the rich and non-rich), a change of +1 SD per 10,000 years would still be possible. There's clearly no shortage in genetic variation affecting intelligence: we see 1 SD variations not just within populations but commonly in individual families!
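The same back-of-the-envelope arithmetic, written out. The formula is the standard breeder's equation (response = heritability x selection differential); all the inputs are the speculative figures from the paragraph above, so this is a sanity check of the arithmetic, nothing more:

```python
# Breeder's equation: response to selection = h^2 * S, where h^2 is the
# (assumed) heritability and S the selection differential in SD units.
h2 = 0.5                    # assumed heritability of IQ
S = 0.5                     # rich group averages +0.5 SD (assumed)
response_per_turnover = h2 * S          # +0.25 SD per population turnover

years_per_turnover = 250                # "200-300 years" from the text
per_millennium = response_per_turnover * (1000 / years_per_turnover)
# -> +1.0 SD per millennium in the optimistic case, matching the text
```

If the turnover estimate is off by an order of magnitude, dividing `per_millennium` by ten still gives +1 SD per 10,000 years, as the paragraph says.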

So where does this leave us?

1) The rate of positive mutations went up due to population growth. More importantly, the rate of mutations that were likely to sweep the entire population in a fixed period of time probably went up.

2) Natural selection did not abate: there is evidence for differential reproductive rates that are impacted by genes.

3) Humans living today are possibly quite different from our ancestors of 50,000 years ago. I would guess we are smarter and better suited to living in a complex society that requires cooperation and planning. We are also probably more likely to be lactose tolerant, nearsighted and bad at hunting ;-)

Cochran and Harpending's new book deserves wide attention and serious discussion.

More genetic substructure

Via Genetic Future, two more principal components analyses of SNP data. Click the genetics label for related posts.

European, Nigerian and East Asian samples from HapMap and 100 African-American samples clustered based on data from ~600,000 genetic markers. The bend toward the East Asian cluster is probably due to Native American admixture.

Alkes L. Price, Nick Patterson, et al. (2008). Effects of cis and trans Genetic Ancestry on Gene Expression in African Americans, PLoS Genetics, 4 (12) DOI: 10.1371/journal.pgen.1000294.

Resolving Finns and Swedes. Jakkula et al. (2008) The Genome-wide Patterns of Variation Expose Significant Substructure in a Founder Population, The American Journal of Human Genetics, 83 (6), 787-794 DOI: 10.1016/j.ajhg.2008.11.005.

Tuesday, December 16, 2008

Great MMA photos

If you like these photos, you might consider buying this book. (Or, even if you hate these pictures, you can buy the book for me -- shipping address here :-)

Genki Sudo after knocking out Royler Gracie.

Mark Coleman embraces his daughters after a loss to Fedor Emelianenko.

Wanderlei Silva after knocking out Rampage Jackson.

Rampage Jackson after knocking out Chuck Liddell.

Enson Inoue armbars Randy Couture.

Poster for Pride 10 featuring Kazushi Sakuraba.

Teaching effectiveness

The two figures below (click for larger versions) are taken from the Brookings report by Gordon, Kane and Staiger: Identifying Effective Teachers Using Performance on the Job. The report has received a lot of attention recently thanks to Malcolm Gladwell's New Yorker article. Both are worth a look if you are interested in education. The top figure shows that certification has no impact on teaching effectiveness. The second shows that effectiveness measured in years 1 and 2 is predictive of effectiveness in the subsequent year. In this case effectiveness is defined as the average change in percentile ranking of students in the teacher's class. Good teachers help their students to improve their mastery, hence percentile ranking, relative to the average student studying the same material.
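The "average change in percentile ranking" measure is simple enough to write down explicitly. A minimal sketch -- the function name and the class data are invented for illustration, not taken from the report:

```python
def value_added(start_pcts, end_pcts):
    """Mean change in student percentile rank over the school year
    (end-of-year percentile minus start-of-year percentile)."""
    assert len(start_pcts) == len(end_pcts)
    return sum(e - s for s, e in zip(start_pcts, end_pcts)) / len(start_pcts)

# Toy version of the New Yorker's example below: Mrs. Brown's class rises
# from about the 50th to the 70th percentile, Mr. Smith's falls to the 40th.
brown = value_added([50, 50, 50], [72, 68, 70])   # -> +20.0
smith = value_added([50, 50, 50], [42, 38, 40])   # -> -10.0
```

A positive score means the class gained ground relative to the average student on the same material; tracked over several years, these scores are what the report uses to separate effective from ineffective teachers.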

It's obvious to me that there is gigantic variation in effectiveness among teachers. Gladwell emphasizes how difficult it is to evaluate teaching capability in initial hiring, and how the single most important impact on overall school effectiveness is due to individual teachers (he also makes the analogy to scouting college QBs for pro football -- it's very hard to predict NFL performance based on college performance). The Brookings paper has many policy suggestions, but the basic idea is that if we were disciplined and data-driven we could easily determine which teachers are good and which ones are not.

New Yorker: ...One of the most important tools in contemporary educational research is “value added” analysis. It uses standardized test scores to look at how much the academic performance of students in a given teacher’s classroom changes between the beginning and the end of the school year. Suppose that Mrs. Brown and Mr. Smith both teach a classroom of third graders who score at the fiftieth percentile on math and reading tests on the first day of school, in September. When the students are retested, in June, Mrs. Brown’s class scores at the seventieth percentile, while Mr. Smith’s students have fallen to the fortieth percentile. That change in the students’ rankings, value-added theory says, is a meaningful indicator of how much more effective Mrs. Brown is as a teacher than Mr. Smith.

It’s only a crude measure, of course. A teacher is not solely responsible for how much is learned in a classroom, and not everything of value that a teacher imparts to his or her students can be captured on a standardized test.

Nonetheless, if you follow Brown and Smith for three or four years, their effect on their students’ test scores starts to become predictable: with enough data, it is possible to identify who the very good teachers are and who the very poor teachers are. What’s more—and this is the finding that has galvanized the educational world—the difference between good teachers and poor teachers turns out to be vast.

Eric Hanushek, an economist at Stanford, estimates that the students of a very bad teacher will learn, on average, half a year’s worth of material in one school year. The students in the class of a very good teacher will learn a year and a half’s worth of material. That difference amounts to a year’s worth of learning in a single year. Teacher effects dwarf school effects: your child is actually better off in a “bad” school with an excellent teacher than in an excellent school with a bad teacher. Teacher effects are also much stronger than class-size effects. You’d have to cut the average class almost in half to get the same boost that you’d get if you switched from an average teacher to a teacher in the eighty-fifth percentile. And remember that a good teacher costs as much as an average one, whereas halving class size would require that you build twice as many classrooms and hire twice as many teachers.

Hanushek recently did a back-of-the-envelope calculation about what even a rudimentary focus on teacher quality could mean for the United States. If you rank the countries of the world in terms of the academic performance of their schoolchildren, the U.S. is just below average, half a standard deviation below a clump of relatively high-performing countries like Canada and Belgium. According to Hanushek, the U.S. could close that gap simply by replacing the bottom six per cent to ten per cent of public-school teachers with teachers of average quality. After years of worrying about issues like school funding levels, class size, and curriculum design, many reformers have come to the conclusion that nothing matters more than finding people with the potential to be great teachers. But there’s a hitch: no one knows what a person with the potential to be a great teacher looks like. The school system has a quarterback problem.

In my experience as a university professor I find that most colleagues think of themselves as above-average teachers, even when they are not. Essentially no "value-added" analysis is ever done, so people can have a 30 year teaching career without ever realizing that they aren't effective in the classroom. I've done many dozens of business presentations, to venture capitalists, technology partners, customers, analysts and even potential M&A acquirers, which has helped me improve my own teaching and communication skills. Despite the business setting such meetings are 90 percent teaching -- trying to convey key points to the audience in a limited time. I'm usually there with a team and my team isn't shy about telling me afterwards what worked and what didn't work, so I've had a lot of honest feedback that most professors never get.

The New Yorker cartoon and article capture some essential aspects of teaching and communication that are not widely understood. The teacher has to be simultaneously on top of the material itself and aware of what the class is doing / thinking / confused about. The big neglected factors in teaching are the ability to be a kind of air traffic controller (or symphony conductor) for the class, and the ability to empathize with (read the mind of) an individual student, to see what, exactly, is confusing them.

Saturday, December 13, 2008


Robert Skidelsky, Keynes' biographer, writes in the Times magazine (excerpted below). Keynes had lived through the greatest of all bubbles and crashes, and saw through the convenient but deeply flawed idea of efficient markets.

As someone with a mathematical bent I was not initially drawn to Keynes' brand of economics -- my interests were in areas of modern finance like option pricing theory, volatility, stochastic models. But like Keynes I have seen a bubble up close -- first in Silicon Valley, and now, from a greater distance, the current credit crisis. What seemed to be reasonable rough approximations: efficient markets, no arbitrage conditions, stochastic processes, etc., have been revealed as terribly naive and dangerous. And so over time my views have come to resemble those described below. (See my talk on the financial crisis, and this Venn diagram.)

Although he is best known as an economist, Keynes' Treatise on Probability, written relatively early in his career, is quite good, and also stresses the idea of probability as a form of logic which goes beyond binary truth values. (See related post on E.T. Jaynes and Bayesian thinking.)

Note to commenters: I am not endorsing all "Keynesian" policy measures. I am endorsing Keynes' opinions on efficient markets, risk and the importance of psychological and sociological factors in economics -- i.e., what is discussed in the excerpt below.

NYTimes: Among the most astonishing statements to be made by any policymaker in recent years was Alan Greenspan’s admission this autumn that the regime of deregulation he oversaw as chairman of the Federal Reserve was based on a “flaw”: he had overestimated the ability of a free market to self-correct and had missed the self-destructive power of deregulated mortgage lending. The “whole intellectual edifice,” he said, “collapsed in the summer of last year.”


What was this “intellectual edifice”? As so often with policymakers, you need to tease out their beliefs from their policies. Greenspan must have believed something like the “efficient-market hypothesis,” which holds that financial markets always price assets correctly.

...By contrast, Keynes created an economics whose starting point was that not all future events could be reduced to measurable risk. There was a residue of genuine uncertainty, and this made disaster an ever-present possibility, not a once-in-a-lifetime “shock.” Investment was more an act of faith than a scientific calculation of probabilities. And in this fact lay the possibility of huge systemic mistakes.

The basic question Keynes asked was: How do rational people behave under conditions of uncertainty? The answer he gave was profound and extends far beyond economics. People fall back on “conventions,” which give them the assurance that they are doing the right thing. The chief of these are the assumptions that the future will be like the past (witness all the financial models that assumed housing prices wouldn’t fall) and that current prices correctly sum up “future prospects.” Above all, we run with the crowd. A master of aphorism, Keynes wrote that a “sound banker” is one who, “when he is ruined, is ruined in a conventional and orthodox way.” (Today, you might add a further convention — the belief that mathematics can conjure certainty out of uncertainty.)

But any view of the future based on what Keynes called “so flimsy a foundation” is liable to “sudden and violent changes” when the news changes. Investors do not process new information efficiently because they don’t know which information is relevant. Conventional behavior easily turns into herd behavior. Financial markets are punctuated by alternating currents of euphoria and panic.

Keynes’s prescriptions were guided by his conception of money, which plays a disturbing role in his economics. Most economists have seen money simply as a means of payment, an improvement on barter. Keynes emphasized its role as a “store of value.” Why, he asked, should anyone outside a lunatic asylum wish to “hold” money? The answer he gave was that “holding” money was a way of postponing transactions. The “desire to hold money as a store of wealth is a barometer of the degree of our distrust of our own calculations and conventions concerning the future. . . . The possession of actual money lulls our disquietude; and the premium we require to make us part with money is a measure of the degree of our disquietude.” The same reliance on “conventional” thinking that leads investors to spend profligately at certain times leads them to be highly cautious at others. Even a relatively weak dollar may, at moments of high uncertainty, seem more “secure” than any other asset, as we are currently seeing.

It is this flight into cash that makes interest-rate policy such an uncertain agent of recovery. If the managers of banks and companies hold pessimistic views about the future, they will raise the price they charge for “giving up liquidity,” even though the central bank might be flooding the economy with cash. That is why Keynes did not think that cutting the central bank’s interest rate would necessarily — and certainly not quickly — lower the interest rates charged on different types of loans. This was his main argument for the use of government stimulus to fight a depression. There was only one sure way to get an increase in spending in the face of an extreme private-sector reluctance to spend, and that was for the government to spend the money itself. Spend on pyramids, spend on hospitals, but spend it must.

This, in a nutshell, was Keynes’s economics. His purpose, as he saw it, was not to destroy capitalism but to save it from itself. He thought that the work of rescue had to start with economic theory itself. Now that Greenspan’s intellectual edifice has collapsed, the moment has come to build a new structure on the foundations that Keynes laid.


The Times recalls the 1970s cult novel Ecotopia, by Ernest Callenbach.

The Novel That Predicted Portland

SOMETIMES a book, or an idea, can be obscure and widely influential at the same time. That’s the case with “Ecotopia,” a 1970s cult novel, originally self-published by its author, Ernest Callenbach, that has seeped into the American groundwater without becoming well known.

The novel, now being rediscovered, speaks to our ecological present: in the flush of a financial crisis, the Pacific Northwest secedes from the United States, and its citizens establish a sustainable economy, a cross between Scandinavian socialism and Northern California back-to-the-landism, with the custom — years before the environmental writer Michael Pollan began his campaign — to eat local.

White bicycles sit in public places, to be borrowed at will. A creek runs down Market Street in San Francisco. Strange receptacles called “recycle bins” sit on trains, along with “hanging ferns and small plants.” A female president, more Hillary Clinton than Sarah Palin, rules this nation, from Northern California up through Oregon and Washington.

Note that Callenbach actually lives in Berkeley, where the climate is better :-(

On the other hand, Brad DeLong was impressed by our six kinds of recycling at U Oregon.

It's easy to forget that today's widely accepted environmentalism started as a crazy fringe social movement only 35 years ago. I can clearly remember during my childhood when it suddenly became not OK to just throw trash out the window of your moving car. (Remember the crying Indian chief TV spot? See below!) This development is captured nicely in an episode of the AMC TV series Mad Men (about 1960s Madison Ave. ad men), in which Don Draper and his lovely WASP upper class family have a nice picnic in the woods, and in the final shot leave behind a pile of rubbish and beer cans sitting in the grass. I think this means that there is hope for humanity -- we'll eventually figure out that preserving the environment is in our best interest as a species.

Incidentally, Mad Men is the only thing on TV I watch regularly, aside from ultimate fighting. At a holiday party earlier in the week the show came up in conversation and I found that randomly selected literature and film professors also love it :-) Sadly, I don't know anyone on the faculty who is excited about BJ Penn versus Georges St. Pierre in January.

Snow in Eugene

A rare snowfall -- it's already starting to melt away. These were all taken from inside :-)

Friday, December 12, 2008

Confidence matters

More on confidence via Barry Ritholtz:

Which raises the question: Why [no] runs on semis or software companies? The short answer is their business model does not depend upon a belief system — of solvency, liquidity, profitability or risk management.

It wasn’t a crisis of confidence that did the iBanks in, it was a crisis of competence.

That was the element CEOs like Dick Fuld, Hank Paulson, Stan O’Neal and Jimmy Cayne failed to consider: When you are a bank, your existence depends upon the confidence of your clients, investors and counter-parties. Anything you do that puts that at risk is extremely dangerous. If you want to run lots of leverage, push the envelope, well, then, you better hope nothing else goes wrong. At 35X, you do not leave any room for error.

It is inexcusable that the investment CEOs did not seem to realize this. It was unconscionable that the firms had been purposefully put into a risk taking position in extremis. That the CEOs blamed short sellers and rumors, but exonerated themselves, only serves to emphasize their own failures, their lack of comprehension of what they had done to themselves. It was their own incompetent stewardship that purposefully and unknowingly placed these firms at such grave danger of destruction.

Macro modelers take note: no realistic results without accounting for ape psychology.

Here's a nice video featuring the confidence men (financial CEOs) and their recent payouts:

And this (both via Barry Ritholtz):

Related: Central limit theorem and securitization.

Thursday, December 11, 2008

Jaynes and Bayes

E.T. Jaynes, although a physicist, was one of the great 20th century proponents of Bayesian thinking. See here for a wealth of information, including some autobiographical essays. I recommend this article on probability, maximum entropy and Bayesian thinking, and this, which includes his recollections of Dyson, Feynman, Schwinger and Oppenheimer.

Here are the first three chapters of his book Probability Theory: the Logic of Science. The historical material in the preface is fascinating.

Jaynes started as an Oppenheimer student, following his advisor from Berkeley to Princeton. But Oppenheimer's mystical adherence to the logically incomplete Copenhagen interpretation (Everett's "philosophic monstrosity") led Jaynes to switch advisors, becoming a student of Wigner.

Edwin T. Jaynes was one of the first people to realize that probability theory, as originated by Laplace, is a generalization of Aristotelian logic that reduces to deductive logic in the special case that our hypotheses are either true or false. This web site has been established to help promote this interpretation of probability theory by distributing articles, books and related material. As Ed Jaynes originated this interpretation of probability theory we have a large selection of his articles, as well as articles by a number of other people who use probability theory in this way.

See Carson Chow for a nice discussion of how Bayesian inference is more like human reasoning than formal logic.

The seeds of the modern era could arguably be traced to the Enlightenment and the invention of rationality. I say invention because although we may be universal computers and we are certainly capable of applying the rules of logic, it is not what we naturally do. What we actually use, as coined by E.T. Jaynes in his iconic book Probability Theory: The Logic of Science, is plausible reasoning. Jaynes is famous for being a major proponent of Bayesian inference during most of the second half of the last century. However, to call Jaynes's book a book about Bayesian statistics is to wholly miss Jaynes's point, which is that probability theory is not about measures on sample spaces but a generalization of logical inference. In the Jaynes view, probabilities measure a degree of plausibility.

I think a perfect example of how unnatural the rules of formal logic are is to consider the simple implication A -> B, which means: if A is true then B is true. By the rules of formal logic, if A is false then B can be true or false (i.e. a false premise can prove anything). Conversely, if B is true, then A can be true or false. The only valid conclusion you can deduce from A -> B is that if B is false then A is false. ...

However, people don’t always (seldom?) reason this way. Jaynes points out that the way we naturally reason also includes what he calls weak syllogisms: 1) If A is false then B is less plausible and 2) If B is true then A is more plausible. In fact, more likely we mostly use weak syllogisms and that interferes with formal logic. Jaynes showed that weak syllogisms as well as formal logic arise naturally from Bayesian inference.
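The weak syllogisms fall out of Bayes' theorem directly, which we can check numerically. A minimal sketch, with made-up illustrative probabilities: we encode A -> B as P(B|A) = 1 and pick a prior for A and a probability of B occurring without A.

```python
# Jaynes's weak and strong syllogisms as Bayesian updates.
# All numbers here are illustrative, not from Jaynes or Chow.
def posterior_A_given_B(p_A, p_B_given_A, p_B_given_notA):
    """P(A|B) by Bayes' theorem."""
    p_B = p_B_given_A * p_A + p_B_given_notA * (1 - p_A)
    return p_B_given_A * p_A / p_B

p_A = 0.3          # prior plausibility of A
p_B_given_A = 1.0  # A -> B: given A, B is certain
p_B_given_notA = 0.5  # B can still occur without A

# Weak syllogism: observing B makes A more plausible.
p_A_given_B = posterior_A_given_B(p_A, p_B_given_A, p_B_given_notA)
assert p_A_given_B > p_A  # 0.3 / 0.65 ≈ 0.46 > 0.3

# Weak syllogism: if A is false, B is less plausible than its prior.
p_B = p_B_given_A * p_A + p_B_given_notA * (1 - p_A)  # 0.65
assert p_B_given_notA < p_B

# Strong (deductive) syllogism: observing not-B refutes A outright,
# since P(B|A) = 1 leaves no probability on (A, not-B).
p_A_given_notB = (1 - p_B_given_A) * p_A / (1 - p_B)
assert p_A_given_notB == 0.0
```

The deductive rule appears as the limiting case where the posterior hits exactly zero; the weak syllogisms are the same update with probabilities strictly between 0 and 1.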

[Carson gives a nice example here -- see the original.]

...I think this strongly implies that the brain is doing Bayesian inference. The problem is that depending on your priors you can deduce different things. This explains why two perfectly intelligent people can easily come to different conclusions. This also implies that reasoning logically is something that must be learned and practiced. I think it is important to know when you draw a conclusion, whether you are using deductive logic or if you are depending on some prior. Even if it is hard to distinguish between the two for yourself, at least you should recognize that it could be an issue.
While I think the brain is doing something like Bayesian inference (perhaps with some kinds of heuristic shortcuts), there are probably laboratory experiments showing that we make a lot of mistakes and often do not properly apply Bayes' theorem. A quick look through the old Kahneman and Tversky literature would probably confirm this :-)
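One standard illustration of the kind of mistake Kahneman and Tversky documented is base-rate neglect. The numbers below are illustrative (not from any particular study): a "90% accurate" test for a rare condition feels like it should make a positive result about 90% reliable, but Bayes' theorem says otherwise.

```python
# Base-rate neglect: P(condition | positive test) for a rare condition.
# Illustrative numbers only.
prevalence = 0.01   # P(condition)
sensitivity = 0.90  # P(positive | condition)
false_pos = 0.09    # P(positive | no condition)

p_positive = sensitivity * prevalence + false_pos * (1 - prevalence)
p_cond_given_positive = sensitivity * prevalence / p_positive
print(round(p_cond_given_positive, 3))  # 0.092 -- far below the intuitive 0.9
```

The low prior (base rate) dominates, which is exactly the prior-dependence of inference discussed in the excerpt above.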

Map of science

This citation map of science (click for larger version) shows the importance of basic sciences like physics and molecular biology. Applied fields still depend on advances in basic science.

The Eigenfactor score is computed by a PageRank-like algorithm in which citations are links. See here for more.

Orange circles represent fields, with larger, darker circles indicating larger field size as measured by Eigenfactor score™. Blue arrows represent citation flow between fields. An arrow from field A to field B indicates citation traffic from A to B, with larger, darker arrows indicating higher citation volume.

The map was created using our information flow method for mapping large networks. Using data from Thomson Scientific's 2004 Journal Citation Reports (JCR), we partitioned 6,128 journals connected by 6,434,916 citations into 88 modules. For visual simplicity, we show only the most important links, namely those that a random surfer traverses at least once in 5000 steps, and the modules that are connected by these links.
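The "PageRank-like" idea can be sketched in a few lines of power iteration on a toy citation matrix. The field names and citation counts below are made up for illustration, and the real Eigenfactor algorithm differs in details (e.g., its treatment of self-citations and dangling nodes); this is just the core random-surfer computation.

```python
# Damped power iteration on a tiny hypothetical citation network.
fields = ["physics", "molbio", "medicine", "engineering"]
# cites[i][j] = citation volume from field i to field j (toy numbers)
cites = [
    [0, 2, 1, 3],
    [1, 0, 4, 0],
    [1, 3, 0, 0],
    [4, 0, 1, 0],
]

n = len(fields)
damping = 0.85
score = [1.0 / n] * n  # start uniform; scores stay normalized to 1
for _ in range(100):
    new = []
    for j in range(n):
        # probability mass flowing into field j from each citing field i
        inflow = sum(score[i] * cites[i][j] / sum(cites[i]) for i in range(n))
        new.append((1 - damping) / n + damping * inflow)
    score = new

for name, s in sorted(zip(fields, score), key=lambda t: -t[1]):
    print(f"{name}: {s:.3f}")
```

Fields that receive heavy citation traffic from already-important fields end up with high scores, which is why the basic sciences sit at the center of the map.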

Wednesday, December 10, 2008

Steve Chu, Energy Secretary

Steve Chu, currently director of LBNL, is Obama's pick for Energy Secretary. Thank goodness! Finally a big brain will run the agency that funds our national labs and basic energy research.

Chu won the Nobel prize for his work on laser cooling of trapped atoms. This technique is now a fundamental tool in atomic physics. Chu did his PhD at Berkeley under brilliant experimentalist Eugene Commins (who was still around when I was a grad student). When Chu won the Nobel my mother received several phone calls from well wishers -- "I heard your son the Berkeley PhD won the Nobel Prize in physics!" :-/ (Hsu, Chu, what's the difference?) Sorry ma, don't get your hopes up!

Chu: "I told my boss .... `Guess what? I just trapped an atom.' He said, `Great. What are you going to do with it?' I said, `I don't know, but it's great!'"

DeLong on the $20 trillion dollar mystery

Brad can't understand how a $2 trillion mortgage loss can destroy $20 trillion in value in the world's capital markets. He does a good job of laying out the mystery here:

[First list 5 factors that affect market value of capital stock:]

(1) Savings and Investment
(2) News
(3) Default Discount
(4) Liquidity Discount
(5) Risk Discount

...In the past two years the wealth that is the global capital stock has fallen in value from $80 trillion to $60 trillion. Savings has not fallen through the floor. We have had little or no bad news about resource constraints, technological opportunities, or political arrangements. Thus (1) and (2) have not been operating. The action has all been in (3), (4), and (5).

As far as (3) is concerned, the recognition that a lot of people are not going to pay their mortgages and thus that a lot of holders of CDOs, MBSs, and counterparties, creditors, and shareholders of financial institutions with mortgage-related assets has increased the default discount by $2 trillion. And the fact that the financial crisis has brought on a recession has further increased the default discount — bond coupons that won’t be paid and stock dividends that won’t live up to firm promises — by a further $4 trillion. So we have a $6 trillion increase in the magnitude of (3) the default discount. The problem is that we have a $20 trillion decline in market values.

The problem is made bigger by the fact that for (4), the Federal Reserve, the European Central Bank, and the Bank of England have flooded the market with massive amounts of high-quality liquid claims on governments’ treasuries, and so have reduced the liquidity discount — not increased it — by an amount that I estimate to be roughly $3 trillion. Thus (3) and (4) together can only account for a $3 trillion decrease in market value. The rest of that decline in the value of global capital — all $17 trillion of it — thus comes by arithmetic from (5): a rise in the risk discount. There has been a massive crash in the risk tolerance of the globe’s investors.

Thus we have an impulse — a $2 trillion increase in the default discount from the problems in the mortgage market — but the thing deserving attention is the extraordinary financial accelerator that amplified $2 trillion in actual on-the-ground losses in terms of mortgage payments that will not be made into an extra $17 trillion of lost value because global investors now want to hold less risky portfolios than they wanted two years ago.

From my standpoint, the puzzle is multiplied by the fact that we economists have what we regard as pretty good theories about (4) and (5), and yet those theories do not seem to work at all....

...Things are even worse as far as the risk discount is concerned. Our models predict that in normal times, with the ability to diversify portfolios that exists today, the risk discount on assets like corporate equities should be around 1% per year. It is more like 5% per year in normal times — and more like 10% per year today. And our models for why the risk discount has taken such a huge upward leap in the past year and a half are little better than simple handwaving and just-so stories. Our current financial crisis remains largely a mystery: a $2 trillion impulse in lost value of securitized mortgages has set in motion a financial accelerator that we do not understand at any deep level but that has led to ten times the total losses in financial wealth of the impulse.
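The leverage of the risk discount on valuations can be seen with a simple perpetuity calculation. The riskless rate and cash flow below are my own illustrative assumptions, not DeLong's figures: value a $1/year perpetual cash flow at a riskless rate plus a risk premium, then bump the premium from its "normal" 5% to a crisis-level 10%.

```python
# How a jump in the risk discount moves asset values (illustrative numbers).
def perpetuity_value(cash_flow, riskless, risk_premium):
    """Present value of a perpetual cash flow: CF / (r + premium)."""
    return cash_flow / (riskless + risk_premium)

riskless = 0.02  # assumed riskless real rate

normal = perpetuity_value(1.0, riskless, 0.05)  # ~14.3
crisis = perpetuity_value(1.0, riskless, 0.10)  # ~8.3
print(f"decline: {100 * (1 - crisis / normal):.0f}%")  # ~42%
```

A five-point rise in the risk premium wipes out roughly 40% of value, which is the right order of magnitude for an $80 trillion capital stock falling to $60 trillion.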

Short answer for physicists: phase transition in investor sentiment. People woke up one day and realized that the black box utility called "finance" (on which society relies so heavily) may not actually work properly. So they lost confidence, which is hard to regain. The importance of confidence is clear once we admit that most of the workings of financial markets are indeed a black box -- few people understand what is really going on. The same is true for individual companies -- we assume management knows what it is doing, until we realize otherwise. We might have a similar sudden shift in societal risk attitudes if, for instance, it were suddenly revealed (e.g., by the explosion of a submarine taking San Diego with it) that nuclear reactor and weapon designs were faulty and that random megaton explosions should be expected every decade or so.***

(Oh, and there's also the matter of CDS markets, a big amplifier of uncertainty and systemic risk which Brad doesn't mention at all.)

Some of this is explained in my talk. See also this paper for references to agent-based simulations which exhibit phase transitions in sentiment (bubbles and crashes). The intellectual toolkit of neoclassicals like Brad tends to focus on equilibrium ideas, which are unable to explain such phenomena. More discussion here on Arnold Kling's blog. I also recommend Bill Janeway.

Final technical point: it is very wrong to back out the implied total market capitalization from trades executed by a minority of distressed agents. A market cap extracted this way will inevitably exhibit wild fluctuations. Confidence in this quantity relies on particularly unwarranted efficient market assumptions: that markets (even in periods of dislocation) are the best forecasters of real economic value (discounted future cash flows).
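A toy calculation (hypothetical numbers) makes the point concrete: marking an entire capital stock to the last trade lets a tiny volume of distressed selling move the implied market cap wildly.

```python
# Implied market cap swings from a sliver of distressed trades
# (all numbers hypothetical).
shares_outstanding = 10_000_000_000
normal_price = 8.00
fire_sale_price = 5.00  # a distressed seller hits the bid

cap_before = shares_outstanding * normal_price
cap_after = shares_outstanding * fire_sale_price
traded_fraction = 1_000_000 / shares_outstanding  # 0.01% of shares traded

print(f"implied cap change: ${cap_before - cap_after:,.0f} "
      f"on trades covering {traded_fraction:.4%} of shares")
```

Trades in 0.01% of the shares re-mark 100% of the capitalization, so the "lost" $30 billion here was never realized by anyone.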

*** You might try to accommodate events into a story of weakly efficient markets -- we received a shock or infusion of information (news) that caused a sudden revaluation. But the story doesn't work so well when the actual news is "financial markets are highly unreliable" or "nobody really understands this system as it is too complex" -- if that's the news, how efficient are / were markets? :-/

Monday, December 08, 2008

The fate of an honest intellectual

Good thing I am not as courageous as Norman Finkelstein. I do remember the Sokal hoax, though :-) See academic trends in pictures for more fun!

The basic lesson deserves emphasis: most people -- even intellectuals, professors and, yes, scientists -- are not careful thinkers. They are not good at overcoming the emotional and psychological barriers that prevent the falsification of cherished beliefs. Science is good training, but all too often not sufficient.

My Chomsky story. I accidentally came across a copy of At War with Asia in the Page House library (Caltech) when I was a student. I had no idea who Chomsky was, I knew nothing yet of linguistics, but the book was powerful and affecting. Years later as a Junior Fellow I emailed Chomsky (a former Junior Fellow) at MIT and invited him to one of our formal Monday dinners. He declined to come to dinner, as his relationship with some of the senior fellows was contentious, but wanted to come to lunch and meet some of the younger people. We had a wonderful time, and I discovered he has a pretty good sense of humor :-)

Note: I don't have any particular expertise on the matters related to Finkelstein's career or scholarship. But the story below rings true to me.

The Fate of an Honest Intellectual

Noam Chomsky

Excerpted from Understanding Power, The New Press, 2002, pp. 244-248

I'll tell you another, last case—and there are many others like this. Here's a story which is really tragic. How many of you know about Joan Peters, the book by Joan Peters? There was this best-seller a few years ago [in 1984], it went through about ten printings, by a woman named Joan Peters—or at least, signed by Joan Peters—called From Time Immemorial. It was a big scholarly-looking book with lots of footnotes, which purported to show that the Palestinians were all recent immigrants [i.e. to the Jewish-settled areas of the former Palestine, during the British mandate years of 1920 to 1948]. And it was very popular—it got literally hundreds of rave reviews, and no negative reviews: the Washington Post, the New York Times, everybody was just raving about it. Here was this book which proved that there were really no Palestinians! Of course, the implicit message was, if Israel kicks them all out there's no moral issue, because they're just recent immigrants who came in because the Jews had built up the country. And there was all kinds of demographic analysis in it, and a big professor of demography at the University of Chicago [Philip M. Hauser] authenticated it. That was the big intellectual hit for that year: Saul Bellow, Barbara Tuchman, everybody was talking about it as the greatest thing since chocolate cake.

Well, one graduate student at Princeton, a guy named Norman Finkelstein, started reading through the book. He was interested in the history of Zionism, and as he read the book he was kind of surprised by some of the things it said. He's a very careful student, and he started checking the references—and it turned out that the whole thing was a hoax, it was completely faked: probably it had been put together by some intelligence agency or something like that.
Well, Finkelstein wrote up a short paper of just preliminary findings, it was about twenty-five pages or so, and he sent it around to I think thirty people who were interested in the topic, scholars in the field and so on, saying: "Here's what I've found in this book, do you think it's worth pursuing?"

Well, he got back one answer, from me. I told him, yeah, I think it's an interesting topic, but I warned him, if you follow this, you're going to get in trouble—because you're going to expose the American intellectual community as a gang of frauds, and they are not going to like it, and they're going to destroy you. So I said: if you want to do it, go ahead, but be aware of what you're getting into. It's an important issue, it makes a big difference whether you eliminate the moral basis for driving out a population—it's preparing the basis for some real horrors—so a lot of people's lives could be at stake. But your life is at stake too, I told him, because if you pursue this, your career is going to be ruined.

Well, he didn't believe me. We became very close friends after this, I didn't know him before. He went ahead and wrote up an article, and he started submitting it to journals. Nothing: they didn't even bother responding. I finally managed to place a piece of it in In These Times, a tiny left-wing journal published in Illinois, where some of you may have seen it. Otherwise nothing, no response. Meanwhile his professors—this is Princeton University, supposed to be a serious place—stopped talking to him: they wouldn't make appointments with him, they wouldn't read his papers, he basically had to quit the program. ...

He's now living in a little apartment somewhere in New York City, and he's a part-time social worker working with teenage drop-outs. Very promising scholar—if he'd done what he was told, he would have gone on and right now he'd be a professor somewhere at some big university. ...

But let me just go on with the Joan Peters story. Finkelstein's very persistent: he took a summer off and sat in the New York Public Library, where he went through every single reference in the book—and he found a record of fraud that you cannot believe. Well, the New York intellectual community is a pretty small place, and pretty soon everybody knew about this, everybody knew the book was a fraud and it was going to be exposed sooner or later. The one journal that was smart enough to react intelligently was the New York Review of Books—they knew that the thing was a sham, but the editor didn't want to offend his friends, so he just didn't run a review at all. That was the one journal that didn't run a review.

...We approached the publishers and asked them if they were going to respond to any of this, and they said no—and they were right. Why should they respond? They had the whole system buttoned up, there was never going to be a critical word about this in the United States. But then they made a technical error: they allowed the book to appear in England, where you can't control the intellectual community quite as easily.

Well, as soon as I heard that the book was going to come out in England, I immediately sent copies of Finkelstein's work to a number of British scholars and journalists who are interested in the Middle East—and they were ready. As soon as the book appeared, it was just demolished, it was blown out of the water. Every major journal, the Times Literary Supplement, the London Review, the Observer, everybody had a review saying, this doesn't even reach the level of nonsense, of idiocy. A lot of the criticism used Finkelstein's work without any acknowledgment, I should say—but about the kindest word anybody said about the book was "ludicrous," or "preposterous." ...

Still, in the universities or in any other institution, you can often find some dissidents hanging around in the woodwork—and they can survive in one fashion or another, particularly if they get community support. But if they become too disruptive or too obstreperous—or you know, too effective—they're likely to be kicked out. The standard thing, though, is that they won't make it within the institutions in the first place, particularly if they were that way when they were young—they'll simply be weeded out somewhere along the line. So in most cases, the people who make it through the institutions and are able to remain in them have already internalized the right kinds of beliefs: it's not a problem for them to be obedient, they already are obedient, that's how they got there. And that's pretty much how the ideological control system perpetuates itself in the schools—that's the basic story of how it operates, I think.

Help! -- climate change

Some of the readers of this blog know much more about climate change than I do. Could someone please comment on this web page of Eric Baum's, in which he claims the evidence for human causation is weak and that state of the art climate models are shoddy? (Excerpts below.) Baum is a brilliant guy -- former theoretical physicist and AI researcher. I've recommended his book on AI here before.

Greenhouse Gas global warming (as opposed to other sources) should be measured in the tropical troposphere, because the models say that is the signature of greenhouse gas warming: the tropical troposphere should warm at roughly twice the surface rate. To verify this, see for example Figure 9.1, p675, Vol 1 IPCC Report. (The whole report can be found at .)
This was always an embarrassment for global warmists, because the troposphere has never warmed much, but in the last few years it's cooled. The tropical troposphere has now not warmed at all. See for the graph of temperature according to three satellite series since 1978.

The Radiosonde (weather balloon) data series is an independent measurement of the tropical troposphere temperature. It goes back to 1958 and is presumably extremely reliable, because all they are doing is sending thermometers up in balloons. You can see the time series at: The graph is flat, and the most recent data point is the coldest.

...This shows that the IPCC's GCM's (Global Circulation Models) are wrong. Not that it can be too surprising that the GCM's are worthless since p 596 of the IPCC 4th report cautiously admitted they didn't know whether their GCM's had more data points or free parameters! Yet the GCM's are absolutely central to any argument for expecting warming by more than a few tenths of a degree by 2100, and to the amazingly porous argument the IPCC report gives to demonstrate man caused the alleged observed warming.

...The IPCC 4th report says "attribution of anthropogenic climate change is understood to mean demonstration that a detected change... is not consistent with alternative, physically plausible explanations."[p668] But the report contains several alternative possibilities that are said to be "not understood" or whose magnitude is said to be "largely unknown". For example, two are mentioned just in the last paragraph of 1.4.3. (p108): unknown large feedbacks from changes in solar irradiance, and the effects of galactic cosmic rays. Actually, as I point out in the above few paragraphs, cosmic rays seem to explain climate fluctuations extremely well. The IPCC devotes considerable space to the strawman that solar activity could directly affect the earth's temperature, but ignores the actual indirect means by which solar variation seems to affect temperature. Global Warmists routinely attack the strawman of direct solar effect any time the subject is raised.

Also, Mars, Jupiter, Triton, Neptune, and Pluto have recently been observed warming, suggesting some cause external to the earth, but none of them are mentioned anywhere in the 987 pages of the 4th Report. Another physically plausible explanation for recent warming (if indeed warming has actually occurred), as remarked by Lindzen, would be thermal transfer from the deep oceans. The oceans and atmosphere are turbulent fluids prone to exchange heat in unpredictable ways over a wide range of time scales, simply because chaotic systems do that kind of thing -- and the computer models of the IPCC are completely inadequate to simulate this.

It's also worth noting that intuitive physics (and pencil-and-paper calculation) says that greenhouse gas warming scales logarithmically. The theoretical reason is that CO2 molecules (for example) absorb and re-radiate only certain wavelengths. Once you've got some molecules of CO2 in the air, the effect of each additional molecule is less than the one before, because those wavelengths are already being scattered, and most of the heat that escapes is already getting out at other wavelengths. So even if you believed everything else, one's expectation would be that we've already seen the substantial majority of all the warming we will ever see, even if we quintuple the CO2 from here. To believe otherwise, you have to rely in detail on the GCMs' predictions of positive feedbacks -- which they are not competent to calculate -- to predict warming in the future that is several times greater than anything we've seen before.
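The logarithmic scaling described above is usually written as the simplified forcing expression ΔF = k ln(C/C0), with k ≈ 5.35 W/m² for CO2 (that coefficient is from the standard Myhre et al. fit, not from the post). A minimal sketch of the diminishing-returns arithmetic:

```python
import math

def co2_forcing(c_ratio, k=5.35):
    """Simplified no-feedback radiative forcing (W/m^2) for a CO2
    concentration ratio c_ratio = C / C0, using the standard
    logarithmic fit with coefficient k."""
    return k * math.log(c_ratio)

# Each doubling adds the same fixed increment, so the marginal
# effect of extra CO2 keeps shrinking relative to the total.
print(f"2x CO2: {co2_forcing(2.0):.2f} W/m^2")
print(f"4x CO2: {co2_forcing(4.0):.2f} W/m^2")  # exactly two doublings
print(f"5x CO2: {co2_forcing(5.0):.2f} W/m^2")
```

Note that this is only the no-feedback forcing: the post's point is that projections of much larger warming rest on the GCMs' feedback assumptions, not on this logarithm.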

Sunday, December 07, 2008

Resolution of population genetic structure

How much can we resolve the substructure of a population with a given amount of data? The paper below gives a quantitative answer. With current technology, we should have no problem resolving even small national populations (see italicized text in quote below), with nearest neighbor FST as small as .0001 (i.e., 99.99 percent of variation is within-group and only .01 percent between groups)! According to this Table of European, Nigerian and East Asian FSTs, the FST between France and Spain is .0008, whereas between Nigeria and Japan it is about .19.

Within the European + HapMap sample analyzed here, over 100 statistically significant PCA vectors were identified. That is, there is a >100 dimensional space within which structure can be teased out. (However, the largest single vector accounts for only a percent of total variation, and the integral over all 100 vectors is probably only a few percent.) Norwegians and Swedes could be resolved with 90 percent accuracy. Note the Patterson et al. paper was written before this recent analysis, which confirms their theoretical predictions of sensitivity. (Figure below.)

The first author, Nick Patterson (profiled here), is a mathematician turned cryptographer turned quant (Renaissance) turned bioinformaticist.

Population Structure and Eigenanalysis

Nick Patterson et al. (Broad Institute of Harvard and MIT)

Current methods for inferring population structure from genetic data do not provide formal significance tests for population differentiation. We discuss an approach to studying population structure (principal components analysis) that was first applied to genetic data by Cavalli-Sforza and colleagues. We place the method on a solid statistical footing, using results from modern statistics to develop formal significance tests. We also uncover a general “phase change” phenomenon about the ability to detect structure in genetic data, which emerges from the statistical theory we use, and has an important implication for the ability to discover structure in genetic data: for a fixed but large dataset size, divergence between two populations (as measured, for example, by a statistic like FST) below a threshold is essentially undetectable, but a little above threshold, detection will be easy. This means that we can predict the dataset size needed to detect structure.

...Another implication is that these methods are sensitive. For example, given a 100,000 marker array and a sample size of 1,000, then the BBP threshold for two equal subpopulations, each of size 500, is FST = .0001. An FST value of .001 will thus be trivial to detect. To put this into context, we note that a typical value of FST between human populations in Northern and Southern Europe is about .006 [15]. Thus, we predict: most large genetic datasets with human data will show some detectable population structure.
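The quoted sensitivity figure can be checked with back-of-the-envelope arithmetic: in a rough paraphrase of the paper's BBP phase-change result (my simplification, not the authors' exact statement), the detection threshold scales as FST ≈ 1/√(samples × markers):

```python
import math

def fst_detection_threshold(n_samples, n_markers):
    """Approximate BBP phase-change threshold: structure with FST
    below this value is essentially undetectable, while structure
    a little above it is easy to detect."""
    return 1.0 / math.sqrt(n_samples * n_markers)

# The example from the abstract: 1,000 samples, 100,000 markers.
print(fst_detection_threshold(1000, 100_000))  # 0.0001
# France vs Spain (FST ~ .0008) sits well above this threshold,
# which is why national-scale structure is detectable.
```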

Saturday, December 06, 2008

Be kind to your creditors

The Atlantic has a long interview with Gao Xiqing, president of the China Investment Corporation, which manages about $200 billion of the country’s foreign assets. CIC makes most of the high-visibility investments, like buying stakes in Blackstone and Morgan Stanley. Gao was a professor in China, then earned a law degree at Duke and practiced here before returning to China.

...His office, in one of the more tasteful new glass-walled high-rises in Beijing, itself seems less Chinese than internationally “fusion”-minded in its aesthetic and furnishings. Bonsai trees in large pots, elegant Japanese-looking arrangements of individual smooth stones on display shelves, Chinese and Western financial textbooks behind the desk, with a photo of Martin Luther King Jr. perched among the books. Two very large, very thin desktop monitors read out financial data from around the world. As we spoke, Western classical music played softly from a good sound system.

Gao dressed and acted like a Silicon Valley moneyman rather than one from Wall Street—open-necked tattersall shirt, muted plaid jacket, dark slacks, scuffed walking shoes. Rimless glasses. His father was a Red Army officer who was on the Long March with Mao. As a teenager during the Cultural Revolution, Gao worked on a railroad-building gang and in an ammunition factory. He is 55, fit-looking, with crew-cut hair and a jokey demeanor rather than an air of sternness. ...

About the financial crisis of 2008: We are not quite at the bottom yet. Because we don’t really know what’s going to happen next. Everyone is saying, “Oh, look, the dollar is getting stronger!” [As it was when we spoke.] I say, that’s really temporary. It’s simply because a lot of people need to cash in, they need U.S. dollars in order to pay back their creditors. But after a short while, the dollar may be going down again. I’d like to bet on that!

The overall financial situation in the U.S. is changing, and that’s what we don’t know about. It’s going to be changed fundamentally in many ways.

Think about the way we’ve been living the past 30 years. Thirty years ago, the leverage of the investment banks was like 4-to-1, 5-to-1. Today, it’s 30-to-1. This is not just a change of numbers. This is a change of fundamental thinking.

People, especially Americans, started believing that they can live on other people’s money. And more and more so. First other people’s money in your own country. And then the savings rate comes down, and you start living on other people’s money from outside. At first it was the Japanese. Now the Chinese and the Middle Easterners.

We—the Chinese, the Middle Easterners, the Japanese—we can see this too. Okay, we’d love to support you guys—if it’s sustainable. But if it’s not, why should we be doing this? After we are gone, you cannot just go to the moon to get more money. So, forget it. Let’s change the way of living. [By which he meant: less debt, lower rewards for financial wizardry, more attention to the “real economy,” etc.]

About Wall Street jobs, wealth, and the cultural distortion of America: I have to say it: you have to do something about pay in the financial system. People in this field have way too much money. And this is not right.

...Individually, everyone needs to be compensated. But collectively, this directs the resources of the country. It distorts the talents of the country. The best and brightest minds go to lawyering, go to M.B.A.s. And that affects our country, too! Many of the brightest youngsters come to me and say, “Okay, I want to go to the U.S. and get into business school, or law school.” I say, “Why? Why not science and engineering?” They say, “Look at some of my primary-school classmates. Their IQ is half of mine, but they’re in finance and now they’re making all this money.” So you have all these clever people going into financial engineering, where they come up with all these complicated products to sell to people.

About the $700 billion U.S. financial-rescue plan enacted in October: Finally, after months and months of struggling with your own ideology, with your own pride, your self-righteousness … finally [the U.S. applied] one of the great gifts of Americans, which is that you’re pragmatic. Now our people are joking that we look at the U.S. and see “socialism with American characteristics.” [The Chinese term for its mainly capitalist market-opening of the last 30 years is “socialism with Chinese characteristics.”]

On what might make the Chinese government start taking its dollars out of America: (I began the question by saying that China would hurt itself by pulling out dollar assets—at which he interjected, “in the short term”—and then asked about the long-term view.)

Today when we look at all the markets, the U.S. still is probably the most viable, the most predictable. I was trained as a lawyer, and predictability is always very important for me.

We have a PR department, which collects all the comments about us, from Chinese newspapers and the Web. Every night, I try to pick a time when I’m in a relatively good mood to read it, because most of the comments are very critical of us. Recently we increased our holdings in Blackstone a little bit. Now we’re increasing a little bit our holdings in Morgan Stanley, so as not to be diluted by the Japanese. People here hate it. They come out and say, “Why the hell are you trying to save those people? You are the representative of the poor people eating porridge, and you’re saving people eating shark fins!” It’s always that sort of thing.

...I have great admiration of American people. Creative, hard-working, trusting, and freedom-loving. But you have to have someone to tell you the truth. And then, start realizing it. And if you do it, just like what you did in the Second World War, then you’ll be great again!

If that happens, then of course—American power would still be there for at least as long as I am living. But many people are betting on the other side.

Monday, December 01, 2008

Frequentists vs Bayesians

Noted Berkeley statistician David Freedman recently passed away. I recommend the essay below if you are interested in the argument between frequentists (objectivists) and Bayesians (subjectivists). I never knew Freedman, but based on his writings I think I would have liked him very much -- he was clearly an independent thinker :-)

In everyday life I tend to be sympathetic to the Bayesian point of view, but as a physicist I am willing to entertain the possibility of true quantum randomness.

I wish I understood better some of the foundational questions mentioned below. In the limit of infinite data will two Bayesians always agree, regardless of priors? Are exceptions contrived?
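The "data swamp the prior" theorems Freedman discusses can be made concrete with a toy coin-flip example. Two Bayesians with very different Beta priors update on the same observed counts; in a well-behaved one-parameter problem like this, their posterior means converge as the data grow. The priors and counts below are illustrative assumptions, not from the essay:

```python
def posterior_mean(a, b, heads, n):
    """Posterior mean of a coin's bias under a Beta(a, b) prior after
    observing `heads` successes in `n` flips (Beta-Bernoulli conjugacy:
    the posterior is Beta(a + heads, b + n - heads))."""
    return (a + heads) / (a + b + n)

# Bayesian 1: flat prior Beta(1, 1). Bayesian 2: strongly biased Beta(20, 2).
for heads, n in [(6, 10), (5990, 10000)]:
    m1 = posterior_mean(1, 1, heads, n)
    m2 = posterior_mean(20, 2, heads, n)
    print(f"n={n:>5}: {m1:.3f} vs {m2:.3f} (gap {abs(m1 - m2):.3f})")
```

The reverse phenomenon Freedman cites -- the prior swamping the data even as the sample grows -- arises in complex, high-dimensional problems, which this one-parameter example cannot exhibit; that is the substance of the Diaconis and Freedman (1986) review.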

Some issues in the foundation of statistics

Abstract: After sketching the conflict between objectivists and subjectivists on the foundations of statistics, this paper discusses an issue facing statisticians of both schools, namely, model validation. Statistical models originate in the study of games of chance, and have been successfully applied in the physical and life sciences. However, there are basic problems in applying the models to social phenomena; some of the difficulties will be pointed out. Hooke’s law will be contrasted with regression models for salary discrimination, the latter being a fairly typical application in the social sciences.

...The subjectivist position seems to be internally consistent, and fairly immune to logical attack from the outside. Perhaps as a result, scholars of that school have been quite energetic in pointing out the flaws in the objectivist position. From an applied perspective, however, the subjectivist position is not free of difficulties. What are subjective degrees of belief, where do they come from, and why can they be quantified? No convincing answers have been produced. At a more practical level, a Bayesian’s opinion may be of great interest to himself, and he is surely free to develop it in any way that pleases him; but why should the results carry any weight for others? To answer the last question, Bayesians often cite theorems showing "inter-subjective agreement:" under certain circumstances, as more and more data become available, two Bayesians will come to agree: the data swamp the prior. Of course, other theorems show that the prior swamps the data, even when the size of the data set grows without bounds-- particularly in complex, high-dimensional situations. (For a review, see Diaconis and Freedman, 1986.) Theorems do not settle the issue, especially for those who are not Bayesians to start with.

My own experience suggests that neither decision-makers nor their statisticians do in fact have prior probabilities. A large part of Bayesian statistics is about what you would do if you had a prior.7 For the rest, statisticians make up priors that are mathematically convenient or attractive. Once used, priors become familiar; therefore, they come to be accepted as "natural" and are liable to be used again; such priors may eventually generate their own technical literature. ...

It is often urged that to be rational is to be Bayesian. Indeed, there are elaborate axiom systems about preference orderings, acts, consequences, and states of nature, whose conclusion is-- that you are a Bayesian. The empirical evidence shows, fairly clearly, that those axioms do not describe human behavior at all well. The theory is not descriptive; people do not have stable, coherent prior probabilities.

Now the argument shifts to the "normative:" if you were rational, you would obey the axioms, and be a Bayesian. This, however, assumes what must be proved. Why would a rational person obey those axioms? The axioms represent decision problems in schematic and highly stylized ways. Therefore, as I see it, the theory addresses only limited aspects of rationality. Some Bayesians have tried to win this argument on the cheap: to be rational is, by definition, to obey their axioms. ...

How do we learn from experience? What makes us think that the future will be like the past? With contemporary modeling techniques, such questions are easily answered-- in form if not in substance.

·The objectivist invents a regression model for the data, and assumes the error terms to be independent and identically distributed; "iid" is the conventional abbreviation. It is this assumption of iid-ness that enables us to predict data we have not seen from a training sample-- without doing the hard work of validating the model.

·The classical subjectivist invents a regression model for the data, assumes iid errors, and then makes up a prior for unknown parameters.

·The radical subjectivist adopts an exchangeable or partially exchangeable prior, and calls you irrational or incoherent (or both) for not following suit.

In our days, serious arguments have been made from data. Beautiful, delicate theorems have been proved; although the connection with data analysis often remains to be established. And an enormous amount of fiction has been produced, masquerading as rigorous science. [!!!]
