Sunday, September 30, 2007

Blade Runner returns

It's the 25th anniversary of Blade Runner! Interestingly, Blade Runner was a money loser; the summer of 1982 was dominated by the Spielberg blockbuster E.T.

The director's cut came out 15 years ago. This new release is a lovingly crafted digitized version with improved special effects.

Wired Q&A with director Ridley Scott. Full transcript (long, with audio). Apparently Scott never finished the Philip K. Dick novel Do Androids Dream of Electric Sheep? on which the screenplay was based.

Deckard is a replicant (Wired interview):

Scott: The whole point of Gaff was — the guy who makes origami and leaves little matchstick figures around, right? The whole point of Gaff, the whole point in that direction at the very end, if Gaff is an operator for the department, then Gaff is also probably an exterminator. Gaff, at the end, doesn't like Deckard, and we don't really know why. And if you take for granted for a moment that, let's say, Deckard is Nexus 7, he probably has an unknown life span and therefore is starting to get awfully human. Gaff, just at the very end, leaves a piece of origami, which is a piece of silver paper you might find in a cigarette packet. And it's of a unicorn, right? So, the unicorn that's used in Deckard's daydream tells me that Deckard wouldn't normally talk about such a thing to anyone. If Gaff knew about that, it's Gaff's message to say, "I've basically read your file, mate." Right? So, that file relates to Deckard's first speech to Rachael when he says, "That isn't your imagination, that's Tyrell's niece's daydreams." And he describes a little spider on a bush outside the kitchen door. Do you remember that?

Wired: I don't remember the — oh, the spider. Yeah.

Scott: Well, the spider is an implanted piece of imagination. And therefore Deckard has imagination and even history implanted in his head. He even has memories of his mother and father in his head, maybe a brother or sister in his head. So if you want to make a Nexus that really believes they're human, then you're going to have to think about their past, and you're going to have to put that in their mind.

Wired: Why didn't the unicorn dream sequence appear in either the work print or the original release?

Scott: As I said, there was too much discussion in the room. I wanted it. They didn't want it. I said, "Well, it's a fundamental part of the story." And they said, "Well, isn't it obvious that he's a replicant here?" And I said, "No. No more obvious than he's not a replicant at the end. So, it's a matter of choice, isn't it?"

Wired: As a fan reading people's comments about this, I've come across statements of Harrison Ford saying that he was not a replicant.

Scott: I know.

Wired: And watching the director's cut, it seemed to me when Ford picks up the origami unicorn at the end of the movie —

Scott: And he nods.

Wired: The look on his face says, "Oh, so Gaff was here, and he let Rachael live." It doesn't say, "Oh my God! Am I a replicant?"

Scott: No? Yeah, but then you — OK. I don't know. Why is he nodding when he looks at this silver unicorn? It's actually echoing in his head when he has that drunken daydream at the piano, he's staring at the pictures that Roy Batty had in his drawer. And he can't fathom why Roy Batty's got all these pictures about. Why? Family, background, that's history. Roy Batty's got no history, so he's fascinated by the past. And he has no future. All those things are in there to tap into if you want it. But Deckard, I'm not going to have a balloon go up. Deckard's look on his face, look at it again now that I've told you what it was about. Deckard, again, it's like he had a suspicion that doing the job he does, reading the files he reads on other replicants, because — remember — he's, as they call them, a blade runner. He's a replicant moderator or even exterminator. And if he's done so many now — and who are the biggest hypochondriacs? Doctors. So, if he's a killer of replicants, he may have wondered at one point, can they fiddle with me? Am I human, or am I a replicant? That's in his innermost thoughts. I'm just giving the fully flushed-out possibility to justify that gleaming look at the end where he kind of glints and kind of looks angry, but it's like, to me, an affirmation. That look confirms something. And he nods, he agrees. "Ah hah, Gaff was here." And he goes for the elevator door. And he is a replicant getting into an elevator with another replicant.

Wired: And why does Harrison Ford think otherwise?

Scott: You mean that he may not be or that he is?

Wired: Well, he is on record saying that, as far as he's concerned, Deckard is not a replicant.

Scott: Yeah, but that was, like, probably 20 years ago.

Wired: OK, but —

Scott: He's given up now. He's said, "OK, mate. You win, you win. Anything, anything, just put it to rest."

Wednesday, September 26, 2007

Live in the UK...

Sorry for the lack of posts -- it's the first week of the fall term here and I've been too busy.

For those of you who are masochistic enough to want to view my seminar Curved space, monsters and black hole entropy at the Newton Institute, follow this link.

Saturday, September 22, 2007

Paul Graham against philosophy and literary theory

A good friend of mine did a PhD in philosophy at Stanford, specializing in language and mind and all things Wittgenstein. After several years as a professor at two leading universities, he left the field to earn a second PhD in neuroscience, working in a wet lab. I have a feeling he might agree with much that Paul Graham writes in the essay quoted below. In numerous conversations over the years about his dissertation research, I could never quite see the point...

Outside of math there's a limit to how far you can push words; in fact, it would not be a bad definition of math to call it the study of terms that have precise meanings. Everyday words are inherently imprecise. They work well enough in everyday life that you don't notice. Words seem to work, just as Newtonian physics seems to. But you can always make them break if you push them far enough.

I would say that this has been, unfortunately for philosophy, the central fact of philosophy. Most philosophical debates are not merely afflicted by but driven by confusions over words. Do we have free will? Depends what you mean by "free." Do abstract ideas exist? Depends what you mean by "exist."

Wittgenstein is popularly credited with the idea that most philosophical controversies are due to confusions over language. I'm not sure how much credit to give him. I suspect a lot of people realized this, but reacted simply by not studying philosophy, rather than becoming philosophy professors.

...Curiously, however, the works they produced continued to attract new readers. Traditional philosophy occupies a kind of singularity in this respect. If you write in an unclear way about big ideas, you produce something that seems tantalizingly attractive to inexperienced but intellectually ambitious students. Till one knows better, it's hard to distinguish something that's hard to understand because the writer was unclear in his own mind from something like a mathematical proof that's hard to understand because the ideas it represents are hard to understand. To someone who hasn't learned the difference, traditional philosophy seems extremely attractive: as hard (and therefore impressive) as math, yet broader in scope. That was what lured me in as a high school student.

This singularity is even more singular in having its own defense built in. When things are hard to understand, people who suspect they're nonsense generally keep quiet. There's no way to prove a text is meaningless. The closest you can get is to show that the official judges of some class of texts can't distinguish them from placebos. [10]

And so instead of denouncing philosophy, most people who suspected it was a waste of time just studied other things. That alone is fairly damning evidence, considering philosophy's claims. It's supposed to be about the ultimate truths. Surely all smart people would be interested in it, if it delivered on that promise.

Because philosophy's flaws turned away the sort of people who might have corrected them, they tended to be self-perpetuating. Bertrand Russell wrote in a letter in 1912:

Hitherto the people attracted to philosophy have been mostly those who loved the big generalizations, which were all wrong, so that few people with exact minds have taken up the subject. [11]

His response was to launch Wittgenstein at it, with dramatic results.

I think Wittgenstein deserves to be famous not for the discovery that most previous philosophy was a waste of time, which judging from the circumstantial evidence must have been made by every smart person who studied a little philosophy and declined to pursue it further, but for how he acted in response. [12] Instead of quietly switching to another field, he made a fuss, from inside. He was Gorbachev.

The field of philosophy is still shaken from the fright Wittgenstein gave it. [13] Later in life he spent a lot of time talking about how words worked. Since that seems to be allowed, that's what a lot of philosophers do now. Meanwhile, sensing a vacuum in the metaphysical speculation department, the people who used to do literary criticism have been edging Kantward, under new names like "literary theory," "critical theory," and when they're feeling ambitious, plain "theory." The writing is the familiar word salad:

Gender is not like some of the other grammatical modes which express precisely a mode of conception without any reality that corresponds to the conceptual mode, and consequently do not express precisely something in reality by which the intellect could be moved to conceive a thing the way it does, even where that motive is not something in the thing as such. [14]

The singularity I've described is not going away. There's a market for writing that sounds impressive and can't be disproven. There will always be both supply and demand. So if one group abandons this territory, there will always be others ready to occupy it.


[10] Sokal, Alan, "Transgressing the Boundaries: Toward a Transformative Hermeneutics of Quantum Gravity," Social Text 46/47, pp. 217-252.

Abstract-sounding nonsense seems to be most attractive when it's aligned with some axe the audience already has to grind. If this is so we should find it's most popular with groups that are (or feel) weak. The powerful don't need its reassurance.

[11] Letter to Ottoline Morrell, December 1912. Quoted in:

Monk, Ray, Ludwig Wittgenstein: The Duty of Genius, Penguin, 1991, p. 75.

[12] A preliminary result, that all metaphysics between Aristotle and 1783 had been a waste of time, is due to I. Kant.

[13] Wittgenstein asserted a sort of mastery to which the inhabitants of early 20th century Cambridge seem to have been peculiarly vulnerable—perhaps partly because so many had been raised religious and then stopped believing, so had a vacant space in their heads for someone to tell them what to do (others chose Marx or Cardinal Newman), and partly because a quiet, earnest place like Cambridge in that era had no natural immunity to messianic figures, just as European politics then had no natural immunity to dictators.

[14] This is actually from the Ordinatio of Duns Scotus (ca. 1300), with "number" replaced by "gender." Plus ça change.

Wolter, Allan (trans), Duns Scotus: Philosophical Writings, Nelson, 1963, p. 92.

Friday, September 21, 2007

The world is our laboratory

Here is a nice profile of Myron Scholes that originally appeared in the journal Quantitative Finance. It was written by, of all people, statistical physicist Cosma Shalizi.

Below is a compact summary of the Black-Scholes result for option pricing, emphasizing the importance of perfect hedging. With perfect hedging you can price the option as long as you know the future probability distribution for the underlying -- it doesn't have to be log-normal or have fixed variance.

The solution, in hindsight, is wonderfully simple. The proper price to put on an option should equal the expected value of exercising the option. If you have the option, right now, to sell one share of a stock for $10, and the current price is $8, the option is worth exactly $2, and the option price tracks the current stock price one-for-one. If you knew for certain that the share price would be $8 a year from now, the present value of the option would be $2, discounted by the cost of holding money, risklessly, for a year --- say $1. Every $2 change in the stock price a year hence changes the option price now by $1. If you knew the probability of different share prices in the future, you could calculate the expected present value of the option, assuming you were indifferent to risk, which few of us are. Here is the crucial trick: a portfolio of one share and two such options is actually risk-free, and so, assuming no arbitrage, must earn the same return as any other riskless asset. Since we're assuming you already know the probability distribution of the future stock price, you know its risk and returns, and so have everything you need to know to calculate the present value of the option! Of course, the longer the time horizon, the more we discount the future value of exercising the option, and so the more options we need to balance the risk out of our portfolio. This fact suffices to give the Black-Scholes formula, provided one is willing to assume that stock price changes will follow a random walk with some fixed variance, an assumption which "did not seem onerous" to him at the time, but would now be more inclined to qualify.
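The pricing-as-discounted-expectation argument above is easy to check numerically. Here is a minimal sketch (my own illustration, not from the profile; the strike, rate, and volatility are made-up parameters): price a European put as the discounted risk-neutral expectation of its payoff by Monte Carlo, and compare against the closed-form Black-Scholes price under the log-normal assumption.

```python
import math
import random

def black_scholes_put(s0, k, r, sigma, t):
    """Closed-form Black-Scholes price of a European put."""
    phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    d1 = (math.log(s0 / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return k * math.exp(-r * t) * phi(-d2) - s0 * phi(-d1)

def monte_carlo_put(s0, k, r, sigma, t, n=200_000, seed=0):
    """Price the same put as the discounted expectation of its payoff
    under the risk-neutral log-normal terminal distribution."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        # Risk-neutral terminal stock price.
        st = s0 * math.exp((r - 0.5 * sigma ** 2) * t + sigma * math.sqrt(t) * z)
        total += max(k - st, 0.0)
    return math.exp(-r * t) * total / n

# Option to sell at $10 a stock now trading at $8, one year out.
bs = black_scholes_put(8.0, 10.0, 0.05, 0.3, 1.0)
mc = monte_carlo_put(8.0, 10.0, 0.05, 0.3, 1.0)
```

The point of the article's remark is that the Monte Carlo version still makes sense if you swap the log-normal draw for any other terminal distribution you believe in; only the agreement with the closed form is specific to log-normal returns with fixed variance.
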

Scholes on models, mathematics and computers:

Starting from an economic issue and looking for a parsimonious model, rather than building mathematical tools and looking for a problem to solve with them, has been a hallmark of Scholes's career. "The world is our laboratory", he says, and the key test is a model's predictive power. There is a delicate trade-off between realism and simplicity; "tact" is needed to know what is a first-order effect and what is a second-order correction, though that is ultimately an empirical point.

The evaluation of such empirical points has itself become a delicate issue, he says, especially since the rise of computerized data-mining. While he by no means objects to computer-intensive data analysis --- he has been hooked on programming since encountering it in his first year of graduate school --- he says it raises very subtle problems of selection bias. The world may be our laboratory, but it is an "evolutionary" rather than an "experimental" lab; "we have only one run of history", and it is all too easy to devise models which have no real predictive power. In this connection, he tells the story of a time he acted as a statistical consultant for a law firm. An expert for the other side presented a convincing-looking regression analysis to back up their claims; Scholes, however, noticed that the print-out said "run 89", and an examination of the other 88 runs quickly undermined the credibility of the favorable regression. Computerization makes it cheap to do more runs, to create more models and evaluate them, but it "burns degrees of freedom". The former cost and tedium of evaluating models actually imposed a useful discipline, since it encouraged the construction of careful, theoretically grounded models, and discouraged hunting for something which gave results you liked --- it actually enhanced the meaning and predictive power of the models people did use!
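The "run 89" effect is easy to reproduce. Here is a small simulation (my own illustration, not Scholes's consulting data): correlate pure noise with pure noise many times and report only the best-looking run. The survivor of 89 runs looks far more convincing than any honest single run, even though there is no signal at all.

```python
import math
import random

def pearson(x, y):
    """Sample Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)

def best_of_n_runs(n_runs, n_points=20, seed=1):
    """Correlate pure noise with pure noise n_runs times and keep only
    the most impressive-looking |r| -- i.e., report 'run 89' to the
    court and quietly discard the other 88."""
    rng = random.Random(seed)
    best = 0.0
    for _ in range(n_runs):
        x = [rng.gauss(0.0, 1.0) for _ in range(n_points)]
        y = [rng.gauss(0.0, 1.0) for _ in range(n_points)]
        best = max(best, abs(pearson(x, y)))
    return best

one_run = best_of_n_runs(1)       # an honest single regression
best_of_89 = best_of_n_runs(89)   # the consultant's 'run 89'
```

Each extra run "burns degrees of freedom" in exactly this sense: the reported correlation is a maximum over many tries, so its apparent significance must be discounted by the number of runs, reported or not.
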

Thursday, September 20, 2007

Information theory, inference and learning algorithms

I'd like to recommend the book Information theory, inference and learning algorithms by David Mackay, a Cambridge professor of physics. I wish I'd had a course on this material from Mackay when I was a student! Especially nice are the introductory example on Bayesian inference (Ch. 3) and the discussion of Occam's razor from a Bayesian perspective (Ch. 28). I'm sure I'll find other gems in this book, but I'm still working my way through.
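For a taste of the Bayesian Occam's razor MacKay develops in Ch. 28, here is a toy model comparison (my own numbers, not an example from the book): weigh the evidence for a fair coin against a coin of unknown bias. The flexible model spreads its predictions over all outcomes, so it is automatically penalized when the data are unremarkable.

```python
from math import comb

def evidence_fair(k, n):
    """Marginal likelihood of k heads in n flips under H0: the coin is fair."""
    return comb(n, k) * 0.5 ** n

def evidence_unknown_bias(k, n):
    """Marginal likelihood under H1: bias p uniform on [0, 1].
    The integral of C(n,k) p^k (1-p)^(n-k) dp works out to 1/(n+1),
    independent of k -- every head count is equally unsurprising."""
    return 1.0 / (n + 1)

# Unremarkable data: 6 heads in 12 flips. The simpler model is favored.
bayes_factor_even = evidence_fair(6, 12) / evidence_unknown_bias(6, 12)

# Lopsided data: 11 heads in 12 flips. The flexible model is favored.
bayes_factor_skew = evidence_fair(11, 12) / evidence_unknown_bias(11, 12)
```

No explicit complexity penalty is put in by hand: the razor emerges purely from averaging each model's likelihood over its parameters.
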

I learned about the book through Nerdwisdom, which I also recommend highly. Nerdwisdom is the blog of Jonathan Yedidia, a brilliant polymath (theoretical physicist turned professional chess player turned computer scientist) with whom I consumed a lot of French wine at dinners of the Harvard Society of Fellows.

Wednesday, September 19, 2007

Our brilliant leaders

In case you haven't been following, Greenspan has made a number of controversial comments in his new book and in related press interviews. I agree with his criticisms of the Bush administration for fiscal profligacy, but like everyone else I find his argument for the necessity of the Iraq war to be nutty. I like the following comments from ParaPundit, found via Steve Sailer.

ParaPundit: ...I try to be polite about individuals. But the invasion of Iraq was a huge mistake and any prominent figure who makes lame arguments about the invasion must not go unchallenged. Saddam was moving towards control of the Strait of Hormuz? I'd be embarrassed to say something so obviously wrong. One doesn't need to do fancy calculations or read tons of history books or follow complex theories to know that Saddam was not moving toward control of the Strait of Hormuz. That's nuts. But where is this coming from? If Greenspan had this view 20 years ago then one can't blame it on senility. So what is going on? Can someone explain this? Is Greenspan overrated in general? Or is he only good at some narrow specialty and foolish about much else?

Greenspan is another example of a general problem we face: We are poorly led. We give our elites - especially our political elites - far too much respect and deference. These people are nowhere near as competent as they make themselves out to be. The really talented people in America are in investment banks and Silicon Valley start-ups. [OK, this is an exaggeration, and I hope he means I-banks broadly defined.] They aren't in Washington DC in high government positions. Though I bet there are some smart people on K Street manipulating the yahoos in government.

We mostly are better off if the sharpest people are in venture capital-funded start-ups and investment banks. The private sector generates the wealth. But we need some small handful of sharpies in key positions of power who can recognize when nonsense is being spoken and say no to stupid policies.

Monday, September 17, 2007

Crisis in American Science

The Chronicle of Higher Education has a long article about the bleak job prospects facing academic scientists these days. I'm interviewed in the piece, which ran with the picture on the right. The Chronicle must have a big budget because they sent a photographer to my office for several hours to get the shot! The editor wanted a geeky guy reading the Wall Street Journal, and I guess I'm your man :-)

The article covers a lot of ground, but one thing that I think could have been emphasized more is that, no matter how dismal the career path becomes for US scientists, there will still be foreigners from India, China and eastern Europe willing to try their luck, as well as a sprinkling of American-born obsessives (like me) who should know better. However, a significant number of talented Americans will simply choose to do something else.

The graphic below from the article shows physics job prospects since 1979. I notice my postdoc career coincided with the global minimum -- the worst period in 30 years :-(

The Real Science Crisis: Bleak Prospects for Young Researchers

Tight budgets, scarce jobs, and stalled reforms push students away from scientific careers


It is the best of times and worst of times to start a science career in the United States.

Researchers today have access to powerful new tools and techniques — such as rapid gene sequencers and giant telescopes — that have accelerated the pace of discovery beyond the imagination of previous generations.

But for many of today's graduate students, the future could not look much bleaker.

They see long periods of training, a shortage of academic jobs, and intense competition for research grants looming ahead of them. "They get a sense that this is a really frustrating career path," says Thomas R. Insel, director of the National Institute of Mental Health.

So although the operating assumption among many academic leaders is that the nation needs more scientists, some of the brightest students in the country are demoralized and bypassing scientific careers.

The problem stems from the way the United States nurtures its developing brainpower — the way it trains, employs, and provides grants for young scientists. For decades, blue-ribbon panels have called for universities to revise their doctoral programs, which produced a record-high 27,974 Ph.D.'s in science and engineering in 2005. No less a body than the National Academy of Sciences has, in several reports, urged doctoral programs to train science students more broadly for jobs inside and outside academe, to shorten Ph.D. programs, and even to limit the number of degrees they grant in some fields.

Despite such repeated calls for reform, resistance to change has been strong. Major problems persist, and some are worsening. Recent data, for example, reveal that:

Averaged across the sciences, it takes graduate students a half-year longer now to complete their doctorates than it did in 1987.

In physics nearly 70 percent of newly minted Ph.D.'s go into temporary postdoctoral positions, whereas only 43 percent did so in 2000.

The number of tenured and tenure-track scientists in biomedicine has not increased in the past two decades even as the number of doctorates granted has nearly doubled.

Despite a doubling in the budget of the National Institutes of Health since 1998, the chances that a young scientist might win a major research grant actually dropped over the same period.

...Stephen D.H. Hsu is just the type of scientist America hopes to produce. A professor of physics at the University of Oregon, Mr. Hsu is at the forefront of scholarship on dark energy and quantum chromodynamics. At the same time, he has founded two successful software companies — one of which was bought for $26-million by Symantec — that provide the sorts of jobs and products that the nation's economy needs to thrive.

Despite his successes, Mr. Hsu sees trouble ahead for prospective scientists. He has trained four graduate students so far, and none of them have ended up securing their desired jobs in theoretical physics. After fruitless attempts to find academic posts, they took positions in finance and in the software industry, where Mr. Hsu has connections. "They often ask themselves," he says, "Why did I wait so long to leave? Why did I do that second or third postdoc?" By and large, he says, the students are doing pretty well but are behind their peers in terms of establishing careers and families.

The job crunch makes science less appealing for bright Americans, and physics departments often find their applications for graduate slots dominated by foreign students who are in many cases more talented than the homegrown ones. "In the long run, I think it's bad for the nation," he says. "It will become a peripheral thought in the minds of Americans, that science is a career path."

Melinda Maris also sees hints of that dark future at the Johns Hopkins University. Ms. Maris, assistant director of the office of preprofessional programs and advising, says the brightest undergrads often work in labs where they can spot the warning signs: Professors can't get grants, and postdocs can't get tenure-track jobs.

Such undergraduates, she says, "are really weighing their professional options and realize that they're not going to be in a strong financial position until really their mid-30s." In particular, those dim prospects drive away Americans with fewer financial resources, including many minority students.

...Almost every project aimed at improving graduate education suggests that departments should expose students to the breadth of jobs beyond academe, but faculty members still resist. When Mr. Hsu, the University of Oregon physicist, brings his former students back to talk about their jobs in finance or the software industry, it rankles some other professors.

Doctoral students pick up on that bias. "It was kind of a taboo topic," says Ms. Maris, the career adviser at Johns Hopkins, who recently earned a Ph.D. in genetics at Emory University and did one year of a postdoc at Hopkins before she decided to leave research.

Bruce Alberts, a former president of the National Academy of Sciences, says universities and the nation must take better care of young scientists. Now a professor of biochemistry at the University of California at San Francisco, Mr. Alberts says the current system of demoralized and underemployed Ph.D.'s cannot be sustained. "We need to wake up to what the true situation is."

Students may be quietly starting to lead the way — to recognize that they need to look beyond traditional ways of using their Ph.D.'s. When Mr. Alberts's colleagues polled second-year doctoral students last year, a full quarter of them expressed interest in jobs such as patent law, journalism, and government — jobs that their professors would not consider "science."

Of course, students might not be willing to share those desires yet with their mentors. The poll was anonymous.

Technologists versus the money men

Via Marginal Revolution and a new book by Peter Bernstein comes the following graph comparing California to New York in terms of Forbes 400 ultrarich. (The threshold for membership is roughly $1B in net worth, give or take.)

California has taken the lead, presumably thanks to the information technology revolution, and continues to hold it despite recent gains by hedge fund money men. Note that counting billionaire fortunes is probably a lagging indicator of wealth (especially in money management -- it takes time to earn your billion there, as opposed to a quick tech IPO :-)

We pointed out before that four tech-hotbed counties on the west coast accounted for most of the aggregate increase in national income inequality between 1990 and 2000.

Sunday, September 16, 2007

Who needs an MBA?

If you can generate alpha, or do math, you might not need an MBA these days :-)

My view on this is that the most valuable aspect of the MBA is the network it comes with. For the technology business, I would rate Harvard or Stanford well above the others. If you have the quant skills to work in derivatives or money management, you probably don't need an MBA, but it would be worthwhile to understand what is taught in business schools, and how that influences the thinking and collective culture of business and the markets.

NYTimes: ... As more Americans have become abundantly wealthy, young people are recalculating old assumptions about success. The flood of money into private equity and hedge funds over the last decade has made billionaires out of people like Kenneth Griffin, 38, chief executive of the Citadel Investment Group, and Eddie Lampert, 45, the hedge fund king who bought Sears and Kmart. These men are icons for the fast buck set — particularly the mathematically gifted cohort of rising stars known as “quants.” Many college graduates who are bright enough to be top computer scientists or medical researchers are becoming traders instead, and they measure their status in dollars instead of titles.

Many of the brightest don’t covet a corner office at Goldman Sachs or Morgan Stanley. Instead, they’re happy to work at a little-known hedge fund run out of a two-room office in Greenwich, Conn., as long as they get a fat payday. The competition from alternative investment firms — private equity and hedge funds in particular — is driving up salaries of entry-level analysts at much larger banks. And top performers at the banks make so much money today that they don’t want to take two years off for business school, even if it’s a prestigious institution like the Wharton School or Harvard.

The new ranks of traders and high-octane number crunchers on Wall Street are also a breed apart from celebrated long-term investors like Warren E. Buffett and investment banking gurus like Felix G. Rohatyn. What sets the new crowd apart is the need for speed and a thirst for instant riches.

“With the growth of hedge funds, you’re getting a lot of really smart people who are getting paid a lot very young,” says Arjuna Rajasingham, 29, an analyst and a trader at a hedge fund in London. “I know it’s a bit of a short-term view, but it’s hard to walk away from something that’s going really well.”

The shift has not gone unnoticed by administrators at some business schools. Richard Schmalensee, who was dean of the M.I.T. Sloan School of Management until June, chalked it up to the changing nature of money-making. In many banks and investment boutiques, traders with math and science backgrounds now contribute more to the bottom line than the white-shoed investment bankers who long presided over Wall Street. And traders tend to be less likely to go to business school.

“I don’t think you will see M.B.A.’s less represented in executive suites, but you may see M.B.A.’s less represented in the lists of the world’s richest people,” Professor Schmalensee says. ...

Thursday, September 13, 2007

The poor are different

Michael Lewis, commentary for Bloomberg. Note to the humor-impaired: it's tongue in cheek.

A little terminology: the SEC defines an accredited investor (i.e., someone who can invest in a hedge fund) as

Rule 215 -- Accredited Investor

Any natural person whose individual net worth, or joint net worth with that person's spouse, at the time of his purchase exceeds $1,000,000;

Any natural person who had an individual income in excess of $200,000 in each of the two most recent years or joint income with that person's spouse in excess of $300,000 in each of those years and has a reasonable expectation of reaching the same income level in the current year
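The two prongs quoted above reduce to a simple predicate. A toy encoding (my own illustration; it ignores the rule's other prongs and is certainly not legal advice):

```python
def is_accredited(net_worth=0, incomes=(0, 0), joint_incomes=None,
                  expects_same_this_year=False):
    """Toy encoding of the two Rule 215 prongs quoted above.

    net_worth: individual (or joint-with-spouse) net worth in dollars.
    incomes: the individual's income in each of the two most recent years.
    joint_incomes: joint income with spouse in each of those years, if any.
    """
    if net_worth > 1_000_000:
        return True
    if expects_same_this_year:
        if all(y > 200_000 for y in incomes):
            return True
        if joint_incomes is not None and all(y > 300_000 for y in joint_incomes):
            return True
    return False
```

Note the thresholds are nominal dollars, not inflation-indexed -- which is exactly why people argue the bar keeps drifting lower in real terms.
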

Lately people have been complaining that this threshold is too low and needs to be raised ;-)

A Wall Street Trader Draws Some Subprime Lessons: Michael Lewis
2007-09-05 00:05 (New York)

Sept. 5 (Bloomberg) -- So right after the Bear Stearns funds blew up, I had a thought: This is what happens when you lend money to poor people.

Don't get me wrong: I have nothing personally against the poor. To my knowledge, I have nothing personally to do with the poor at all. It's not personal when a guy cuts your grass: that's business. He does what you say, you pay him. But you don't pay him in advance: That would be finance. And finance is one thing you should never engage in with the poor. (By poor, I mean anyone who the SEC wouldn't allow to invest in my hedge fund.)

That's the biggest lesson I've learned from the subprime crisis. Along the way, as these people have torpedoed my portfolio, I had some other thoughts about the poor. I'll share them with you.

1) They're masters of public relations.

I had no idea how my open-handedness could be made to look, after the fact. At the time I bought the subprime portfolio I thought: This is sort of like my way of giving something back. I didn't expect a profile in Philanthropy Today or anything like that. I mean, I bought at a discount. But I thought people would admire the Wall Street big shot who found a way to help the little guy. Sort of like a money doctor helping a sick person. Then the little guy wheels around and gives me this financial enema. And I'm the one who gets crap in the papers! Everyone feels sorry for the poor, and no one feels sorry for me. Even though it's my money! No good deed goes unpunished.

2) Poor people don't respect other people's money in the way money deserves to be respected.

Call me a romantic: I want everyone to have a shot at the American dream. Even people who haven't earned it. I did everything I could so that these schlubs could at least own their own place. The media is now making my generosity out to be some kind of scandal. Teaser rates weren't a scandal. Teaser rates were a sign of misplaced trust: I trusted these people to get their teams of lawyers to vet anything before they signed it. Turns out, if you're poor, you don't need to pay lawyers. You don't like the deal, you just wave your hands in the air and moan about how poor you are. Then you default.

3) I've grown out of touch with "poor culture."

Hard to say when this happened; it might have been when I stopped flying commercial. Or maybe it was when I gave up the bleacher seats and got the suite. But the first rule in this business is to know the people you're in business with, and I broke it. People complain about the rich getting richer and the poor being left behind. Is it any wonder? Look at them! Did it ever occur to even one of them that they might pay me back by WORKING HARDER? I don't think so.

But as I say, it was my fault, for not studying the poor more closely before I lent them the money. When the only time you've ever seen a lion is in his cage in the zoo, you start thinking of him as a pet cat. You forget that he wants to eat you.

4) Our society is really, really hostile to success. At the same time it's shockingly indulgent of poor people.

A Republican president now wants to bail them out! I have a different solution. Debtors' prison is obviously a little too retro, and besides that it would just use more taxpayers' money. But the poor could work off their debts. All over Greenwich I see lawns to be mowed, houses to be painted, sports cars to be tuned up. Some of these poor people must have skills. The ones that don't could be trained to do some of the less skilled labor -- say, working as clowns at rich kids' birthday parties. They could even have an act: put them in clown suits and see how many can be stuffed into a Maybach. It'd be like the circus, only better.
    Transporting entire neighborhoods of poor people to upper Manhattan and lower Connecticut might seem impractical. It's not: Mexico does this sort of thing routinely. And in the long run it might be for the good of poor people. If the consequences were more serious, maybe they wouldn't stay poor.

5) I think it's time we all become more realistic about letting the poor anywhere near Wall Street.
    Lending money to poor countries was a bad idea: Does it make any more sense to lend money to poor people? They don't even have mineral rights! There's a reason the rich aren't getting richer as fast as they should: they keep getting tangled up with the poor. It's unrealistic to say that Wall Street should cut itself off entirely from poor -- or, if you will, "mainstream'' -- culture. As I say, I'll still do business with the masses. But I'll only engage in their finances if they can clump themselves together into a semblance of a rich person. I'll still accept pension fund money, for example. (Nothing under $50 million, please.) And I'm willing to finance the purchase of entire companies staffed basically with poor people. I did deals with Milken, before they broke him. I own some Blackstone. (Hang tough, Steve!)

    But never again will I go one-on-one with poor people. They're sharks.

Tuesday, September 11, 2007

Hardware vs Software

OK, call me biased, but the kind of physical science behind hardware advances seems a bit, well, harder than writing a new OS or application. In fact, if I think about the main drivers behind the information revolution of the last 20 years, I'd give much more credit to hardware advances than to the concurrent advances in software.

Think about it -- state of the art OSes aren't that different from the original BSD or Unix flavors, whereas the flash memory in my iPod is the equivalent of warp drive to someone from 1985! I don't see a factor of 10^6 improvement in anything related to software, whereas we've achieved gains of that scale in processors, storage and networking (bandwidth).
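A rough way to see how extreme a 10^6 gain is: work out the doubling time it implies over the 1985-to-2007 span the comparison uses (illustrative arithmetic only):

```python
import math

factor = 1e6                  # claimed hardware gain since ~1985
years = 2007 - 1985           # the span being compared
doublings = math.log2(factor)        # ~19.9 doublings
doubling_time = years / doublings
print(round(doubling_time, 2))       # ~1.1 years per doubling
```

That's faster than even the canonical Moore's Law cadence -- and nothing in software has compounded at anything like that rate.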

Meanwhile, funding for research in physical science has been flat in real dollars during my scientific career. Go figure!

From the Times, an article about IBM's work on "racetrack" storage, which may be key to the continued exponential growth in storage capacity.

I'm no experimentalist, but what they're doing sounds hard!

The tech world, obsessed with data density, is taking notice because Mr. Parkin has done it before. An I.B.M. research fellow largely unknown outside a small fraternity of physicists, Mr. Parkin puttered for two years in a lab in the early 1990s, trying to find a way to commercialize an odd magnetic effect of quantum mechanics he had observed at supercold temperatures. With the help of a research assistant, he was able to alter the magnetic state of tiny areas of a magnetic data storage disc, making it possible to store and retrieve information in a smaller amount of space. The huge increases in digital storage made possible by giant magnetoresistance, or GMR, made consumer audio and video iPods, as well as Google-style data centers, a reality.

Mr. Parkin’s new approach, referred to as “racetrack memory,” could outpace both solid-state flash memory chips as well as computer hard disks, making it a technology that could transform not only the storage business but the entire computing industry.

“Finally, after all these years, we’re reaching fundamental physics limits,” he said. “Racetrack says we’re going to break those scaling rules by going into the third dimension.”

His idea is to stand billions of ultrafine wire loops around the edge of a silicon chip — hence the name racetrack — and use electric current to slide infinitesimally small magnets up and down along each of the wires to be read and written as digital ones and zeros.

His research group is able to slide the tiny magnets along notched nanowires at speeds greater than 100 meters a second. Since the tiny magnetic domains have to travel only submolecular distances, it is possible to read and write magnetic regions with different polarization as quickly as a single nanosecond, or one billionth of a second — far faster than existing storage technologies.
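Those two figures hang together: at 100 meters per second, a domain wall moves about 100 nanometers in a single nanosecond, which is the right nanometer scale for reading or writing one tiny magnetic region that fast. A quick sanity check:

```python
speed_m_per_s = 100.0   # domain-wall speed quoted in the article
dt_s = 1e-9             # one nanosecond
distance_nm = speed_m_per_s * dt_s * 1e9
print(round(distance_nm))  # nanometers traveled per nanosecond
```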

If the racetrack idea can be made commercial, he will have done what has so far proved impossible — to take microelectronics completely into the third dimension and thus explode the two-dimensional limits of Moore’s Law, the 1965 observation by Gordon E. Moore, a co-founder of Intel, that decrees that the number of transistors on a silicon chip doubles roughly every 18 months.

Monster talk

"Curved space, monsters and black hole entropy"

Slides. Earlier post with link to arxiv preprint.

Sunday, September 09, 2007

Massive maths campus

I've arrived in Cambridge and am amazed by the maths campus they've built called the Centre for Mathematical Sciences.

The Newton Institute is in the upper left corner of this picture (the whole campus is devoted to mathematical sciences):

As a lover of modern architecture I was in heaven when I saw this crazy place! Where have all the teletubbies gone? ;-)

Walking through Trinity College past Newton's old residence was also quite a thrill :-)

Friday, September 07, 2007

Baby vs chimp

My wife hates it when I compare our adorable twins to simians. But apparently my intelligence estimates aren't way off. (From the NYTimes.)

Baby versus Chimp

106 chimpanzees, 32 orangutans and 105 humans who were about 2.5 years old were put through “The Primate Cognition Test Battery,” which includes 16 tasks divided between physical and social cognition. Here’s how the authors of the study in the journal Science described the difference:

Physical cognition deals with inanimate objects and their spatial-temporal-causal relations, whereas social cognition deals with other animate beings and their intentional actions, perceptions, and knowledge.

Now brace yourselves, human readers: The babies did not trounce the apes. In fact, chimpanzees scored more correct responses in the tests on causality and just about tied on spatial skills, according to this chart.

But the social learning tests were a rout for the babies, with chimpanzees way behind and orangutans apparently shut out. Reuters outlines how one social learning test went:

A researcher showed the children and apes how to pop open a plastic tube to get food or a toy contained inside. The children observed and imitated the solution. Chimpanzees and orangutans, however, tried to smash open the tube or yank out the contents with their teeth.

Despite the mixed results, Time magazine sounded uplifted. After all, the results suggested that we are special because we “cooperate and share expertise.” And that’s what “has allowed us to build complex societies, collaborate and learn from each other at a high level.”

Cantabrigian travels

I'll be away for the next week, mainly in Cambridge (UK) at the Isaac Newton Institute for Mathematical Sciences.

I hope to take some time out from physics to check out the startup/technology ecosystem there. Anyone in the area who wants to meet up, send me an email :-) (I'm not spending time in London, though.)

Wednesday, September 05, 2007

Babbage quote: not even wrong

Unfortunately, I know where Charles Babbage is coming from here.

Babbage designed and built the difference engine, a mechanical device for automated computation that was far ahead of its time. He was also the first to conceive of a programmable computer.
On two occasions I have been asked [by members of Parliament!], "Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?" I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.

Flynn on the Flynn effect

Via GNXP, this lecture by James Flynn. The lecture is long but worth reading in full. His discussion of factor loading in the context of the decathlon, 100m speed, hurdling, etc. is excellent and very relevant to the subject of intelligence and its various cognitive components. The Flynn effect refers to a significant increase in raw IQ scores over the last 100 years or so -- the equivalent of 30 points or more, in some cases. This raises a number of thorny issues. Were our ancestors idiots? Is IQ really that malleable under environmental influence (contrary to recent twin studies)? Is the principal component identified as "g" actually time dependent? See Flynn's answers below. A couple of comments:

(i) Dramatic gains are seen only in certain areas of intelligence (see table from Sailer discussion), which are plausibly the areas in which modern life provides much more stimulation.

(ii) Average people 100 years ago were massively deprived by modern standards -- much more so than we could ever reproduce in a modern twin study. US GDP per capita is 10x higher now, and average years of schooling have increased dramatically. It's not surprising that a child who received only (say) 6 years of formal schooling would be far behind someone with 12. Modern twin studies only include kids raised in a much smaller range of environments -- no twin in any recent study had less than the legally mandated US high school education. With a smaller range of environmental effects, the genetic component plays a larger role, leading to the high (.5-.7 or so) heritability result for IQ (similar to height). (Flynn notes: In the America of 1900, adults had an average of about 7 years of schooling, a median of 6.5 years, and 25 percent had completed 4 years or less.)

(iii) The analogy with height is quite appropriate. While taller parents tend to have taller children (i.e., height is heritable), we've seen significant gains in average height as nutrition and diet have improved.
(iv) I think Flynn would agree that variance in adult IQ must have been much larger in the past. People like Newton or Thomas Jefferson obviously had tremendously more exposure to ideas and abstract thinking than a kid on the farm with little or no education, few books, no TV and no radio. The Flynn effect does not imply that the great geniuses of the past were necessarily inferior to those of today.
Lecture by Professor James Flynn at The Psychometrics Centre, Cambridge Assessment, University of Cambridge, 15th December 2006

Naming the paradoxes

(1) The factor analysis paradox: Factor analysis shows a first principal component called "g" or general intelligence that seems to bind performance on the various WISC subtests together. However, IQ gains over time show score gains on the WISC subtests occurring independently of one another. How can intelligence be both one and many?

(2) The intelligence paradox: If huge IQ gains are intelligence gains, why are we not struck by the extraordinary subtlety of our children's conversation? Why do we not have to make allowances for the limitations of our parents? A difference of some 18 points in the average IQ over two generations ought to be highly visible.

(3) The MR paradox: In 1900, the average IQ scored against current norms was somewhere between 50 and 70. If IQ gains are in any sense real, we are driven to the absurd conclusion that a majority of our ancestors were mentally retarded.

(4) The identical twins paradox: Twin studies show that genes dominate individual differences in IQ and that environmental effects are feeble. IQ gains are so great as to signal the existence of environmental factors of enormous potency. How can environment be both so feeble and so potent?

The solutions in shorthand

(1) The WISC subtests measure a variety of cognitive skills that are functionally independent and responsive to changes in social priorities over time. The inter-correlations that engender "g" are binding only when comparing individuals within a static social context.

(2) Asking whether IQ gains are intelligence gains is the wrong question because it implies all-or-nothing cognitive progress. The 20th century has seen some cognitive skills make great gains, while others have been in the doldrums. To assess cognitive trends, we must dissect "intelligence" into solving mathematical problems, interpreting the great works of literature, finding on-the-spot solutions, assimilating the scientific world view, critical acumen, and wisdom.

(3) Our ancestors in 1900 were not mentally retarded. Their intelligence was anchored in everyday reality. We differ from them in that we can use abstractions and logic and the hypothetical to attack the formal problems that arise when science liberates thought from concrete referents. Since 1950, we have become more ingenious in going beyond previously learned rules to solve problems on the spot.

(4) At a given time, genetic differences between individuals (within a cohort) are dominant, but only because they have hitched powerful environmental factors to their star. Trends over time (between cohorts) liberate environmental factors from the sway of genes and, once unleashed, they can have a powerful cumulative effect.
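The range-restriction point about twin studies can be made quantitative with a toy variance decomposition. The numbers below are purely illustrative (my own choices, not from Flynn), and the model ignores gene-environment interaction and covariance:

```python
def heritability(var_genetic, var_env):
    # Toy narrow-sense estimate: h^2 = Vg / (Vg + Ve).
    return var_genetic / (var_genetic + var_env)

VG = 60.0  # illustrative genetic variance (arbitrary units)
h2_wide = heritability(VG, var_env=90.0)    # wide, 1900-style range of environments
h2_narrow = heritability(VG, var_env=25.0)  # modern, restricted range
print(round(h2_wide, 2), round(h2_narrow, 2))  # 0.4 0.71
```

Holding genetic variance fixed, shrinking the environmental range alone pushes measured heritability from the low end toward the .7 figure seen in modern studies -- which is exactly the point: high within-cohort heritability is compatible with potent between-cohort environmental effects.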

Sunday, September 02, 2007

Financier pay: it's crazy, there's no 2nd or 3rd

Michael Steinhardt:
It's crazy. That's why the field of money management is today the most highly compensated field in the world times three. There's no close second. There's no close third. And I think the expectations inherent in that sort of compensation are absurdly unrealistic.

Charlie Munger:
I regard the amount of brainpower going into money management as a national scandal.

We have armies of people with advanced degrees in physics and math in various hedge funds and private-equity funds trying to outsmart the market. A lot of you older people in the room can remember when none of these people existed.

Note neither quote is from a socialist pinko -- both are famous hedge fund managers, though from the old school.

Below are the numbers, from the NYTimes article Pay at Investment Banks Eclipses All Other Jobs. Note these are average weekly salaries, and include those of secretaries and support staff. Nationally, investment banking (broadly defined, including money management) accounts for just 0.1 percent of all private sector jobs but 1.3 percent of all wages, according to the Bureau of Labor Statistics.
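A back-of-the-envelope check on those BLS shares: a sector holding 0.1 percent of the jobs but 1.3 percent of the wages pays, on average, about 13 times the private-sector average wage:

```python
job_share = 0.001    # 0.1% of private-sector jobs (BLS, per the article)
wage_share = 0.013   # 1.3% of all wages
pay_multiple = wage_share / job_share
print(round(pay_multiple))  # multiple of the average private-sector wage
```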

Related posts here and here.

I'm anticipating reactions like "Well, of course they deserve it, their decisions have disproportionate impact on the economy, allocating massive resources. The market is efficient, after all!" All well and good if you can show that the 173,340 people (2006 BLS) working in investment banking really do produce better decisions than the people who would occupy those jobs in return for lower compensation. If not, there are some rents or inefficiencies hidden here :-)

Saturday, September 01, 2007

More Lee Kuan Yew

This earlier Der Spiegel interview is one of the most popular posts on this blog. Here's an excerpt from a more recent one in the Times.

One of his concerns now, Mr. Lee said, is that the United States has become so preoccupied with the Middle East that it is failing to look ahead and plan in this part of the world.

“I think it’s a real drag slowing down adjusting to the new situation,” he said, describing what he called a lapse that worries Southeast Asian countries that count on Washington to balance the rising economic and diplomatic power of China.

“Without this draining of energy, attention and resources for Iraq, Iran, Lebanon, Israel, Palestine, there would have been deep thinking about the long-term trends — working out possible options that the U.S. could exercise to change the direction of long-term trends more in its favor,” Mr. Lee said.

As the United States focuses on the Middle East, Mr. Lee said, the Chinese are busy refining their policies and building the foundations of more cooperative long-term relationships in Asia. “They are making strategic decisions on their relations with the region,” he said.

And this is where tiny Singapore sees itself as a model for China, the world’s most-populous country. “They’ve got to be like us,” Mr. Lee said, “with a very keen sense of what is possible, and what is not.”

Every year, he said, Chinese ministers meet twice with Singaporean ministers to learn from their experience. Fifty mayors of Chinese cities visit every three months for courses in city management.

Singapore’s secret, Mr. Lee said, is that it is “ideology free.” It possesses an unsentimental pragmatism that infuses the workings of the country as if it were in itself an ideology, he said. When considering an approach to an issue, he says, the question is: “Does it work? Let’s try it, and if it does work, fine, let’s continue it. If it doesn’t work, toss it out, try another one.”

The yardstick, he said, is: “Is this necessary for survival and progress? If it is, let’s do it.”

Worth a look or listen

Some recommendations from a bunch of content I consumed during recent travel.

The Black Swan by Nassim Taleb. I finally got around to reading this and recommend it highly. Physicists and others who are already familiar with nonlinear dynamics (chaos theory), the difference between Gaussian and power-law distributions, etc. will find the presentation slow and repetitious at times, but Taleb does have a lot of interesting insights. Particularly amusing: Chapter 10, The scandal of prediction, in which he recapitulates Philip Tetlock's results, chapter 17, which rails against the "Nobel" prize in economics, especially the one awarded to Merton and Scholes. I can't say I completely agree with Taleb on the (non)utility of modern finance theory. It's true that Gaussians underestimate the likelihood of rare events, but that is well known now and there are various ways to incorporate that into models (e.g., fat tails, stochastic vol). He's dismissive of these improvements in the book; it appears to me he's attacking a caricature from 10 years ago.

I also recommend a number of podcast interviews from the site, which are especially useful if you're going to be stuck on a plane, train or automobile. Some I found particularly good:

Taleb on the Black Swan (strange that the interviewer, an economist, didn't explore Taleb's extremely negative view of the profession! I guess they're both Hayekians so had some common ground :-)

Paul Romer on economic growth.

Ed Leamer on outsourcing and trade.

Vernon Smith on experimental economics.

Gregg Easterbrook on happiness and the American standard of living (we're 10x richer on average than 100 years ago!).

Bob Lucas on growth, poverty, monetary policy.
