Pessimism of the Intellect, Optimism of the Will

Tuesday, July 31, 2007

Turing: physics and cryptography

People often ask me what a physicist is doing in information security. Here's a partial answer, from Alan Turing:

“There is a remarkably close parallel between the problems of the physicist and those of the cryptographer. The system on which a message is enciphered corresponds to the laws of the universe, the intercepted messages to the evidence available, the keys for a day or a message to important constants which have to be determined. The correspondence is very close, but the subject matter of cryptography is very easily dealt with by discrete machinery, physics not so easily.”

Algorithm wars

Some tantalizing Renaissance tidbits in this lawsuit against two former employees, both physics PhDs from MIT. Very interesting -- I think the subtleties of market making deserve further scrutiny :-)

At least they're not among the huge number of funds rumored at the moment to be melting down from leveraged credit strategies. Previous coverage of Renaissance here.

Bloomberg

Ex-Simons Employees Say Firm Pursued Illegal Trades
2007-07-30 11:19 (New York)


By Katherine Burton and Richard Teitelbaum

July 30 (Bloomberg) -- Two former employees of Renaissance Technologies Corp., sued by the East Setauket, New York-based firm for theft of trade secrets, said the company violated securities laws and "encouraged" them to help.

Renaissance, the largest hedge-fund manager, sought to block Alexander Belopolsky and Pavel Volfbeyn from using the allegations as a defense in the civil trade-secrets case. The request was denied in a July 19 order by New York State judge Ira Gammerman, who wrote that the firm provided no evidence to dispute the claims.

The company denied the former employees' claims.

"The decision on this procedural motion makes no determination that there is any factual substance to the allegations," Renaissance said in a statement to Bloomberg News. "These baseless charges are merely a smokescreen to distract from the case we are pursuing."

Renaissance, run by billionaire investor James Simons, sued Belopolsky and Volfbeyn in December 2003, accusing them of misappropriating Renaissance's trade secrets by taking them to another firm, New York-based Millennium Partners LP. Renaissance settled its claims against Millennium in June. The men, who both hold Ph.D.'s in physics from the Massachusetts Institute of Technology, worked for the company from 2001 to mid-2003, according to the court document.

"We think the allegations are very serious and will have a significant impact on the outcome of the litigation," said Jonathan Willens, an attorney representing Volfbeyn and Belopolsky. "We continue to think the allegations by Renaissance concerning the misappropriation of trade secrets is frivolous."

"Quant" Fund

Renaissance, founded by Simons in 1988, is a quantitative manager that uses mathematical and statistical models to buy and sell securities, options, futures, currencies and commodities. It oversees $36.8 billion for clients, most in the 2-year-old Renaissance Institutional Equities Fund.

According to Gammerman's heavily redacted order, Volfbeyn said that he was instructed by his superiors to devise a way to "defraud investors trading through the Portfolio System for Institutional Trading, or POSIT," an electronic order-matching system operated by Investment Technology Group Inc. Volfbeyn said that he was asked to create an algorithm, or set of computer instructions, to "reveal information that POSIT intended to keep confidential."

Refused to Build

Volfbeyn told superiors at Renaissance that he believed the POSIT strategy violated securities laws and refused to build the algorithm, according to the court document. The project was reassigned to another employee and eventually Renaissance implemented the POSIT strategy, according to the document.

New York-based Investment Technology Group took unspecified measures, according to the order, and Renaissance was forced to abandon the strategy, Volfbeyn said. Investment Technology Group spokeswoman Alicia Curran declined to comment.

According to the order, Volfbeyn said that he also was asked to develop an algorithm for a second strategy involving limit orders, which are instructions to buy or sell a security at the best price available, up to a maximum or minimum set by the trader. Standing limit orders are compiled in files called limit order books on the New York Stock Exchange and Nasdaq and can be viewed by anyone.

The redacted order doesn't provide details of the strategy. Volfbeyn refused to participate in the strategy because he believed it would violate securities laws. The limit-order strategy wasn't implemented before Volfbeyn left Renaissance, the two men said, according to the order.

Swap "Scam" Claimed

Volfbeyn and Belopolsky said that Renaissance was involved in a third strategy, involving swap transactions, which they describe as "a massive scam" in the court document. While they didn't disclose what type of swaps were involved, they said that Renaissance violated U.S. Securities and Exchange Commission and National Association of Securities Dealers rules governing short sales.

Volfbeyn and Belopolsky said they were expected to help find ways to maximize the profits of the strategy, and Volfbeyn was directed to modify and improve computer code in connection with the strategy, according to the order.

In a swap transaction, two counterparties exchange one stream of cash flows for another. Swaps are often used to hedge certain risks, such as a change in interest rates, or as a means of speculation. In a short sale, an investor borrows shares and then sells them in the hopes they can be bought back in the future at a cheaper price.

Besides the $29 billion institutional equity fund, Renaissance manages Medallion, which is open only to Simons and his employees. Simons, 69, earned an estimated $1.7 billion last year, the most in the industry, according to Institutional Investor's Alpha magazine.
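For readers unfamiliar with the instruments mentioned above, here is a toy sketch of the mechanics. These are illustrative numbers only -- nothing below reflects the redacted strategies in the court order:

```python
# Toy numbers only. A plain-vanilla interest-rate swap exchanges fixed for
# floating payments on a notional amount; a short sale profits if the
# borrowed shares can be repurchased at a lower price.

def swap_net_cash_flows(notional, fixed_rate, floating_rates):
    """Net receipts of the floating-rate receiver: floating in, fixed out."""
    return [notional * (f - fixed_rate) for f in floating_rates]

def short_sale_pnl(shares, sell_price, buyback_price):
    """Profit from borrowing shares, selling now, and repurchasing later."""
    return shares * (sell_price - buyback_price)

# Receive floating, pay 5% fixed, on $1M notional over three periods:
flows = swap_net_cash_flows(1_000_000, 0.05, [0.04, 0.05, 0.06])
print(flows)  # roughly [-10000, 0, +10000]

# Short 100 shares at $30, cover at $20:
print(short_sale_pnl(100, 30.0, 20.0))  # 1000.0
```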

I, Robot

My security badge from a meeting with the Israeli internet security company Check Point (Nasdaq: CHKP). There was some discussion as to whether I should be classified as a robot or a robot genius :-)




Infoworld article: ... RGguard accesses data gathered by a sophisticated automated testbed that has examined virtually every executable on the Internet. This testbed couples traditional anti-virus scanning techniques with two-pronged heuristic analysis. The proprietary Spyberus technology establishes causality between source, executable and malware, and user interface automation allows the computers to test programs just as a user would - but without any human intervention.

Monday, July 30, 2007

Tyler Cowen and rationality

I recently came across the paper How economists think about rationality by Tyler Cowen. Highly recommended -- a clear and honest overview.

The excerpt below deals with rationality in finance theory and strong and weak versions of efficient markets. I believe the weak version; the strong version is nonsense. (See, e.g., here for a discussion of limits to arbitrage that permit long-lasting financial bubbles. In other words, capital markets are demonstrably far from perfect, as defined below by Cowen.)

Although you might think the strong version of EMH is only important to traders and finance specialists, it is also very much related to the idea that markets are good optimizers of resource allocation for society. Do markets accurately reflect the "fundamental value of corporations"? See related discussion here.

Financial economics has one of the most extreme methods in economic theory, and increasingly one of the most prestigious. Finance concerns the pricing of market securities, the determinants of market returns, the operating of trading systems, the valuation of corporations, and the financial policies of corporations, among other topics. Specialists in finance can command very high salaries in the private sector and have helped design many financial markets and instruments. To many economists, this ability to "meet a market test" suggests that financial economists are doing something right.

Depending on one's interpretation, the theory of finance makes either minimal or extreme assumptions about rationality. Let us consider the efficient markets hypothesis (EMH), which holds the status of a central core for finance, though without commanding universal assent. Like most economic claims, EMH comes in many forms, some weaker, others stronger.

The weaker versions typically claim that deliberate stock picking does not on average outperform selecting stocks randomly, such as by throwing darts at the financial page. The market already incorporates information about the value of companies into the stock prices, and no one individual can beat this information, other than by random luck, or perhaps by outright insider trading.

Note that the weak version of EMH requires few assumptions about rationality. Many market participants may be grossly irrational or systematically biased in a variety of ways. It must be the case, however, that their irrationalities are unpredictable to the remaining rational investors. If the irrationalities were predictable, rational investors could make systematic extra-normal profits with some trading rule. The data, however, suggest that it is very hard for rational investors to outperform the market averages. This suggests that extant irrationalities are either very small, or very hard to predict, two very different conclusions. The commitment that one of these conclusions must be true does not involve much of a substantive position on the rationality front.

The stronger forms of EMH claim that market prices accurately reflect the fundamental values of corporations and thus cannot be improved upon. This does involve a differing and arguably stronger commitment to a notion of rationality.

Strong EMH still allows that most individuals may be irrational, regardless of how we define that concept. These individuals could literally be behaving on a random basis, or perhaps even deliberately counter to standard rationality assumptions. It is assumed, however, that at least one individual does have rational information about how much stocks are worth. Furthermore, and most importantly, it is assumed that capital markets are perfect or nearly perfect. With perfect capital markets, the one rational individual will overwhelm the influence of the irrational on stock prices. If the stock ought to be worth $30 a share, but irrational "noise traders" push it down to $20 a share, the person who knows better will keep on buying shares until the price has risen to $30. With perfect capital markets, there is no limit to this arbitrage process. Even if the person who knows better has limited wealth, he or she can borrow against the value of the shares and continue to buy, making money in the process and pushing the share price to its proper value.

So the assumptions about rationality in strong EMH are tricky. Only one person need be rational, but through perfect capital markets, this one person will have decisive weight on market prices. As noted above, this can be taken as either an extreme or modest assumption. While no one believes that capital markets are literally perfect, they may be "perfect enough" to allow the rational investors to prevail.

"Behavioral finance" is currently a fad in financial theory, and in the eyes of many it may become the new mainstream. Behavioral finance typically weakens rationality assumptions, usually with a view towards explaining "market anomalies." Almost always these models assume imperfect capital markets, to prevent a small number of rational investors from dwarfing the influence of behavioral factors. Robert J. Shiller claims that investors overreact to very small pieces of information, causing virtually irrelevant news to have a large impact on market prices. Other economists argue that some fund managers "churn" their portfolios, and trade for no good reason, simply to give their employers the impression that they are working hard. It appears that during the Internet stock boom, simply having the suffix "dot com" in the firm's name added value on share markets, and that after the bust it subtracted value.

Behavioral models use looser notions of rationality than does EMH. Rarely do behavioral models postulate outright irrationality, rather the term "quasi-rationality" is popular in the literature. Most frequently, a behavioral model introduces only a single deviation from classical rationality postulates. The assumption of imperfect capital markets then creates the possibility that this quasi-rationality will have a real impact on market phenomena.

The debates between the behavioral theories and EMH now form the central dispute in modern financial theory. In essence, one vision of rationality -- the rational overwhelm the influence of the irrational through perfect capital markets -- is pitted against another vision -- imperfect capital markets give real influence to quasi-rationality. These differing approaches to rationality, combined with assumptions about capital markets, are considered to be eminently testable.
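Cowen's $20/$30 noise-trader example is easy to sketch in code. The linear price-impact rule and all the numbers below are my own illustrative assumptions, not anything from the paper -- the point is just that with deep pockets the rational trader restores the fundamental price, while with limited capital the mispricing persists:

```python
# Noise traders have pushed a $30 stock down to $20. A rational trader buys
# share by share; each purchase moves the price up by a fixed impact
# (an assumption made purely for illustration). With enough capital the
# price is driven back to fundamental value; with limited capital it is not.

def arbitrage(price, fundamental, capital, impact_per_share):
    while price < fundamental and capital >= price:
        capital -= price           # buy one share at the current price
        price += impact_per_share  # buying pressure moves the price up
    return price

deep = arbitrage(20.0, 30.0, 1_000_000.0, 0.01)     # deep-pocketed trader
shallow = arbitrage(20.0, 30.0, 5_000.0, 0.01)      # capital-constrained trader
print(deep)     # pushed back to (about) $30
print(shallow)  # stuck well below fundamental value
```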

Game theory and the failed quest for a unique basis for rationality:

Game theory has shown economists that the concept of rationality is more problematic than they had previously believed. What is rational depends not only on the objective features of the problem but also depends on what actors believe. This short discussion has only scratched the surface of how beliefs may imply very complex solutions, or multiple solutions. Sometimes the relevant beliefs, for instance, are beliefs about the out-of-equilibrium behavior of other agents. These beliefs are very hard to model, or it is very hard to find agreement among theorists as to how they should be modeled.

In sum, game theorists spend much of their time trying to figure out what rationality means. They are virtually unique amongst economists in this regard. Game theory from twenty years ago pitted various concepts of rationality against each other in purely theoretical terms. Empirical results had some feedback into this process, such as when economists reject Nash equilibrium for some of its counterintuitive predictions, but it remains striking how much of the early literature does not refer to any empirical tests. This enterprise has now become much more empirical, and more closely tied to both computational science and experimental economics.

Computational economics and the failed quest for a unique basis for rationality:

Nonetheless it is easy to see how the emphasis on computability puts rationality assumptions back on center stage, and further breaks down the idea of a monolithic approach to rationality. The choice of computational algorithm is not given a priori, but is continually up for grabs. Furthermore the choice of algorithm will go a long way to determining the results of the model. Given that the algorithm suddenly is rationality, computational economics forces economists to debate which assumptions about procedural rationality are reasonable or useful ones.

The mainstream criticism of computational models, of course, falls right out of these issues. Critics believe that computational models can generate just about "any" result, depending on the assumptions about what is computable. This would move economics away from being a unified science. Furthermore it is not clear how we should evaluate the reasonableness of one set of assumptions about computability as opposed to another set. We might consider whether the assumptions yield plausible results, but if we already know what a plausible result consists of, it is not clear why we need computational theories of rationality.

As you can tell from my comments, I do not believe there is any unique basis for "rationality" in economics. Humans are flawed information processing units produced by the random vagaries of evolution. Not only are we different from each other, but these differences arise both from genes and the individual paths taken through life. Can a complex system comprised of such creatures be modeled through simple equations describing a few coarse grained variables? In some rare cases, perhaps yes, but in most cases, I would guess no. Finance theory already adopts this perspective in insisting on a stochastic (random) component in any model of security prices. Over sufficiently long timescales even the properties of the random component are not constant! (Hence, stochastic volatility, etc.)
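To illustrate that last point with a toy simulation (all parameters below are arbitrary choices for the sketch): if the volatility of returns is itself random, the return distribution develops the fat tails (excess kurtosis) that stochastic-volatility models are designed to capture.

```python
import random

random.seed(0)
n = 50_000

# Returns with constant volatility: a plain Gaussian.
const_vol = [random.gauss(0.0, 0.02) for _ in range(n)]

# Returns whose volatility follows its own mean-reverting random walk.
sv_returns = []
vol = 0.02
for _ in range(n):
    vol = max(0.005, vol + 0.1 * (0.02 - vol) + random.gauss(0.0, 0.004))
    sv_returns.append(random.gauss(0.0, vol))

def excess_kurtosis(xs):
    m = sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs) / len(xs)
    fourth = sum((x - m) ** 4 for x in xs) / len(xs)
    return fourth / var**2 - 3.0

print(excess_kurtosis(const_vol))   # near 0 for a plain Gaussian
print(excess_kurtosis(sv_returns))  # noticeably positive: fat tails
```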

Saturday, July 28, 2007

From physics to finance

Professor Akash Bandyopadhyay recounts his career trajectory from theoretical physics to Wall Street to the faculty of the graduate school of business at Chicago in this interview.

One small comment: Bandyopadhyay says below that banks hire the very best PhDs from theoretical physics. I think he meant to say that, generally, they hire the very best among those who don't find jobs in physics. Unfortunately, few are able to find permanent positions in the field.

Mike K. -- if you're reading this, why didn't you reply to the guy's email? :-)

CB: Having a Ph.D. in Theoretical Physics, you certainly have quite a unique background compared to most other faculty members here at the GSB. Making a transition from Natural Science to Financial Economics and becoming a faculty member at the most premier financial school in the world in a short span of five years is quite an unbelievable accomplishment! Can you briefly talk about how you ended up at the GSB?

AB: Sure. It is a long story. In 1999, I was finishing up my Ph.D. in theoretical physics at the University of Illinois at Urbana-Champaign when I started to realize that the job situation for theoretical physicists is absolutely dismal. Let alone UIUC, it was very difficult for physicists from even Harvard or Princeton to find decent jobs. As a matter of fact, once, when I was shopping at a Wal-Mart in the Garden State, I bumped into a few people who had Ph.D.s in theoretical physics from Princeton, and they were working at the check-out counter. Yes, Wal-Mart! I could not believe it myself!

CB: So, what options did you have at that point?

AB: When I started to look at the job market for theoretical physicists, I found that the top investment banks hire the very best of the fresh Ph.D.s. I started to realize that finance (and not physics!) is the heart of the real world and Wall Street is the hub of activity. So, I wanted to work on Wall Street - not at Wal-Mart! (laughs!)

I knew absolutely nothing about finance or economics at that time, but I was determined to make the transition. I got a chance to speak with Professor Neil Pearson, a finance professor at UIUC, who advised me to look at Risk magazine and learn some finance by myself. There were two highly mathematical research papers at the end of an issue that caught my attention. Having a strong mathematical background, I could understand all the mathematical and statistical analysis in those papers, although I could not comprehend any of the financial terminology. As I perused more articles, my confidence in my ability to solve mathematical models in finance grew. At that point, I took a big step in my pursuit of working on the Street and e-mailed the authors of those two articles, Dr. Peter Carr at Banc of America Securities and Dr. Michael Kamal at Goldman Sachs. Dr. Carr (who, I later found, is a legend in mathematical finance!) replied in two lines: 'If you really want to work here, you have to walk on water. Call me if you are in the NYC area.'

CB: So, we presume you went to NYC?

AB: After some contemplation, I decided to fly to NYC; I figured I had nothing to lose. Dr. Carr set me up for an interview a few weeks later. Having been a physics student throughout my life, I was not quite aware of business etiquette. So, when I appeared in my jeans, T-shirt and flip-flops at the Banc of America building at 9 West 57th Street for an interview, there was a look on everyone's face (from the front-desk staff to everyone I met) that I can never forget. Looking back, I still laugh at those times.

CB: Did you get an offer from Banc of America?

AB: Not at the first attempt. After the interview, I was quite positive that I would get an offer. However, as soon as I returned home, I received an email from Dr. Carr saying, "You are extremely smart, but the bank is composed of deal makers, traders, marketers, and investment bankers. We are looking for someone with business skills. You will not fit well here." He suggested that we both write a paper on my derivation of the Black-Scholes/Merton partial differential equation, or possibly even a book. He also suggested I read thoroughly (and work out all the problems of) the book "Dynamic Asset Pricing Theory" by Darrell Duffie. In fact, Duffie's book was my starting point in learning financial economics. I assume your readers have never heard of this book. It is a notoriously difficult book on continuous-time finance, intended for very advanced Ph.D. students in financial economics. But it was the right book for me -- I read it without any difficulty on the math side, and it provided me with a solid foundation in financial economics. Anyway, I think I am going off on a tangent from your question.

CB: So, what did you do after you received that mail from Dr. Carr?

AB: The initial setback did not deter me. I had already started to become aware of my lack of business skills. So I offered to work for Dr. Carr as an unpaid intern at Banc of America to gain experience and to learn more about the financial industry and the business. Dr. Carr finally relented and made me an offer to work as an unpaid intern in his group during the summer of 1999.

CB: What did you do during the internship?

AB: Upon my arrival, Dr. Carr told me, "A bank is not a place to study. A bank is a place to make money. Be practical." This was probably the best piece of advice I could get. He gave me three tasks to help me get more familiar with finance and get closer to bankers. First, catalog and classify his books and papers on finance, and at the same time flip through them. This way, believe it or not, I read tens of thousands of papers and other books that summer. Second, I helped test a piece of software, Sci-Finance, which would help traders set and hedge exotic option prices. Third, I answered math, statistics, and other quantitative modeling questions for equity, fixed income and options traders, and other investment bankers.

CB: Wow! That is a lot of reading for one summer. So, did you get a full time offer from Banc of America after your internship? What did you do after that?

AB: Yes, I got an offer from them, but I still had more than a year left to finish my Ph.D. thesis, so I accepted an even better offer from Deutsche Bank the next summer. I worked at Deutsche for three months in the summer of 2000, then moved to Goldman Sachs for a while (where I gave seminars on finance theory to the quants, traders, and risk managers). After finishing my Ph.D., I took an offer from Merrill Lynch as the quant responsible for convertible bond valuation in their Global Equity Linked Products division in New York. I left Merrill after a few months to lead the North America Equity Derivatives Risk Management division at Société Générale. So, basically, I came to the GSB after getting some hardcore real-world experience at a string of top investment banks.

CB: Are there any 'special' moments on Wall Street that you would like to talk about?

AB: Sure, there are many. But one that stands out is the day I started my internship at Banc of America. As is the norm in grad school or academia, I felt that I had to introduce myself to my colleagues. So, on my very first day of internship, I took the elevator to the floor where the top bosses of the bank had offices. I completely ignored the secretary at the front desk, knocked on the CEO and CFO's door, walked in, and briefly introduced myself. Little did I know that this was not the norm in the business world!!! Shortly thereafter, Dr. Carr called me and advised that I stick to my cube instead of 'just wandering around'! In retrospect, that was quite an experience!

CB: What made you interested in teaching after working for top dollar on Wall Street?

AB: You mean to say that professors here don't get paid top dollar? (laughs)

I always planned to be in academia. To be totally honest with you, I never liked the culture of Wall Street. Much of the high-profile business on Wall Street relies heavily on academic finance research, but, after all, the banks are there to make money, not to cultivate knowledge. One must have two qualities to succeed in this business: first, a solid knowledge of the strengths and limitations of financial models (and the theory), which comes from cutting-edge academic research, and second, the skills to translate that academic knowledge into a money-making machine. I was good in the first category, but not as good in the second. ...

Babies!

I generally try to keep this blog free of kid pictures, but I found these old ones recently and couldn't resist!



Thursday, July 26, 2007

Humans eke out poker victory

But only due to a bad decision by the human designers of the robot team! :-)

Earlier post here.

NYTimes: The human team reached a draw in the first round even though their total winnings were slightly less than the computer's. The match rules specified that small differences were not considered significant because of statistical variation. On Monday night, the second round went heavily to Polaris, leaving the human players visibly demoralized.

“Polaris was beating me like a drum,” Mr. Eslami said after the round.

However, during the third round on Tuesday afternoon, the human team rebounded when the Polaris team's shift in strategy backfired. The researchers used a version of the program that was supposed to add a level of adaptability and “learning.”


Unlike computer chess programs, which require immense amounts of computing power to determine every possible future move, the Polaris poker software is largely precomputed, running for weeks before the match to build a series of agents called “bots” that have differing personalities or styles of play, ranging from aggressive to passive.

The Alberta team modeled 10 different bots before the competition and then chose to run a single program in the first two rounds. In the third round, the researchers used a more sophisticated ensemble of programs in which a “coach” program monitored the performance of three bots and then moved them in and out of the lineup like football players.

Mr. Laak and Mr. Eslami won the final round handily, but not before Polaris won a $240 pot with a royal flush that beat Mr. Eslami’s three-of-a-kind. The two men said that Polaris had challenged them far more than their human opponents.
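The "coach" arrangement described in the article is easy to caricature in code. A sketch only: the agents below are stubs with made-up win rates standing in for full poker strategies, and the real Polaris coach monitored bots during live play rather than in a separate evaluation batch:

```python
import random

random.seed(1)

class Bot:
    """A stub strategy agent with a fixed (made-up) per-hand win rate."""
    def __init__(self, name, win_rate):
        self.name, self.win_rate = name, win_rate
    def play_hand(self):
        return 1 if random.random() < self.win_rate else -1  # toy payoff

def evaluate(bot, hands=500):
    """Score a bot over a batch of hands (+1 per win, -1 per loss)."""
    return sum(bot.play_hand() for _ in range(hands))

bots = [Bot("aggressive", 0.55), Bot("passive", 0.45), Bot("balanced", 0.50)]
scores = {b.name: evaluate(b) for b in bots}

# The coach fields whichever agent has performed best recently.
starter = max(bots, key=lambda b: scores[b.name])
print(starter.name)
```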

Wednesday, July 25, 2007

Man vs machine: live poker!

This blog has live updates from the competition. See also here for a video clip introduction. It appears the machine Polaris is ahead of the human team at the moment.

The history of AI tells us that capabilities initially regarded as sure signs of intelligence ("machines will never play chess like a human!") are discounted soon after machines master them. Personally I favor a strong version of the Turing test: interaction which takes place over a sufficiently long time that the tester can introduce new ideas and watch to see if learning occurs. Can you teach the machine quantum mechanics? At the end will it be able to solve some novel problems? Many humans would fail this Turing test :-)

Earlier post on bots invading online poker.

2007

World-Class Poker Professionals Phil Laak and Ali Eslami
versus
Computer Poker Champion Polaris (University of Alberta)

Can a computer program bluff? Yes -- probably better than any human. Bluff, trap, check-raise bluff, big lay-down -- name your poison. The patience of a monk or the fierce aggression of a tiger, changing gears in a single heartbeat. Polaris can make a pro's head spin.

Psychology? That's just a human weakness.

Odds and calculation? Computers can do a bit of that.

Intimidation factor and mental toughness? Who would you choose?

Does the computer really stand a chance? Yes, this one does. It learns, adapts, and exploits the weaknesses of any opponent. Win or lose, it will put up one hell of a fight.

Many of the top pros, like Chris "Jesus" Ferguson, Paul Phillips, Andy Bloch and others, already understand what the future holds. Now the rest of the poker world will find out.

Tuesday, July 24, 2007

What is a quant?

The following log entry, which displays the origin and referring search-engine query of a pageload request to this blog, does not inspire confidence. Is the SEC full of too many JDs and not enough people who understand Monte Carlo simulation and stochastic processes?

secfwopc.sec.gov (U.S. Securities & Exchange Commission)

District Of Columbia, Washington, United States, 0 returning visits

Date Time WebPage

24th July 2007 10:04:52
referer: www.google.com/search?hl=en&q=what is a quants&btnG=Search

infoproc.blogspot.com/2006/08/portrait-of-quant-ii.html


24th July 2007 12:11:59
referer: www.google.com/search?hl=en&q=Charles Munger and the pricing of derivatives&btnG=Google Search

infoproc.blogspot.com/2006/09/citadel-of-finance.html
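For the curious, this is the kind of calculation I have in mind: pricing a European call by Monte Carlo simulation of geometric Brownian motion, checked against the Black-Scholes closed form. The parameters below are arbitrary example values.

```python
import math
import random

S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0  # example parameters

def black_scholes_call(S0, K, r, sigma, T):
    """Black-Scholes closed form; N(x) is the standard normal CDF via erf."""
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S0 * N(d1) - K * math.exp(-r * T) * N(d2)

def mc_call(S0, K, r, sigma, T, n_paths=200_000, seed=42):
    """Monte Carlo: average discounted payoff over simulated GBM endpoints."""
    rng = random.Random(seed)
    payoff_sum = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        # terminal stock price under risk-neutral geometric Brownian motion
        ST = S0 * math.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * z)
        payoff_sum += max(ST - K, 0.0)
    return math.exp(-r * T) * payoff_sum / n_paths

print(round(black_scholes_call(S0, K, r, sigma, T), 4))  # 10.4506
print(round(mc_call(S0, K, r, sigma, T), 4))             # close to the above
```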

Sunday, July 22, 2007

Income inequality and Marginal Revolution

Tyler Cowen at Marginal Revolution discusses a recent demographic study of who, exactly, the top US wage earners are. We've discussed the problem of growing US income inequality here before.

To make the top 1 percent in AGI (IRS: Adjusted Gross Income), you needed to earn $309,160. To make it to the top 0.1 percent, you needed $1.4 million (2004 figures).

Here's a nice factoid:

...the top 25 hedge fund managers combined appear to have earned more than all 500 S&P 500 CEOs combined (both realized and estimated).

Somewhat misleading, as this includes returns on the hedgies' own capital invested as part of their funds. But, still, you get the picture of our gilded age :-)

One of the interesting conclusions from the study is that executives of non-financial public companies are a numerically rather small component of top earners, comprising no more than 6.5%. Financiers comprise a similar, but perhaps larger, subset. Who are the remaining top earners? The study can't tell. Obvious candidates are doctors in certain lucrative specialties, sports and entertainment stars, and owners of private businesses. The category which I think is quite significant, but largely ignored, is founders and employees of startups that have successful exits. Below is the comment I added to Tyler's blog:

The fact that C-level execs are not the numerically dominant subgroup is pretty obvious. The whole link between exec compensation and inequality is a red herring (except in that it symbolizes our acceptance of winner-take-all economics).

I suspect that founders and early employees of successful private companies (startups) that have a liquidity event (i.e., an IPO or acquisition) are a large subset of the top AGI group. Note, though, that this population does not make it into the top tier (i.e., top 1 or .1%) with regularity, but rather only in a very successful year (the one in which they get their "exit"). Any decent tech IPO launches hundreds of employees into the top 1 or even .1%.

It is very important to know what fraction of the top group are there each year (doctors, lawyers, financiers) versus those for whom it is a one-time event (sold the business they carefully built over many years). If it is predominantly the latter it's hard to attribute an increase in top percentile earnings to unhealthy inequality.


To be more quantitative: suppose there are 1M employees at private companies (not just in technology, but in other industries as well) who each have a 10% chance per year of participating in a liquidity event that raises their AGI to the top 1% threshold. That would add 100k additional top earners each year, and thereby raise the average income of that group. If there are 150M workers in the US then there are 1.5M in the top 1%, so this subset of "rare exit" or employee stock option beneficiaries would make up about 7% of the total each year (similar to the corporate exec number). But these people are clearly not part of the oligarchy, and if the increase in income inequality is due to their shareholder participation, why is that a bad thing?
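The back-of-envelope numbers above can be checked directly (all inputs are the assumptions stated in the paragraph, not measured data):

```python
# Rough check of the "rare exit" estimate; inputs are the text's assumptions.
equity_employees = 1_000_000   # private-company employees with equity
p_exit = 0.10                  # assumed chance per year of a liquidity event
us_workers = 150_000_000       # assumed size of the US workforce

new_top_earners = equity_employees * p_exit     # 100,000 per year
top_one_percent = 0.01 * us_workers             # 1.5 million people
print(f"{new_top_earners / top_one_percent:.1%}")  # 6.7%
```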

We reported earlier on the geographic distribution of income gains to the top 1 percent: they are concentrated in tech hotbeds like Silicon Valley, which supports our thesis that the payouts are not going to the same people every year.

Many Worlds: A brief guide for the perplexed

I added this to the earlier post 50 years of Many Worlds and thought I would make it into a stand alone post as well.

Many Worlds: A brief guide for the perplexed

In quantum mechanics, states can exist in superpositions, such as (for an electron spin)

(state)   =   (up)   +   (down)

When a measurement on this state is performed, the Copenhagen interpretation says that the state (wavefunction) "collapses" to one of the two possible outcomes:

(up)     or     (down),

with some probability for each outcome depending on the initial state (e.g., 1/2 and 1/2 of measuring up and down). One fundamental difference between quantum and classical mechanics is that even if we have specified the state above as precisely as is allowed by nature, we are still left with only a probabilistic prediction for what will happen next. In classical physics knowing the state (e.g., position and velocity of a particle) allows perfect future prediction.
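A toy calculation makes the probability rule concrete; this is just the squared-amplitude (Born) rule applied to the superposition above:

```python
import math

# Amplitudes for (up) and (down): the equal superposition in the text.
amp_up = 1 / math.sqrt(2)
amp_down = 1 / math.sqrt(2)

# Born rule: the probability of each outcome is the squared amplitude.
p_up, p_down = amp_up**2, amp_down**2
print(round(p_up, 3), round(p_down, 3))   # 0.5 0.5

# An unequal superposition gives unequal odds, still summing to 1:
amp_up, amp_down = math.sqrt(3) / 2, 1 / 2
print(round(amp_up**2, 3), round(amp_down**2, 3))   # 0.75 0.25
```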

There is no satisfactory understanding of how or exactly when the Copenhagen wavefunction "collapse" proceeds. Indeed, collapse introduces confusing issues like consciousness: what, exactly, constitutes an "observer", capable of causing the collapse?

Everett suggested we simply remove wavefunction collapse from the theory. Then the state evolves in time always according to the Schrodinger equation. In fact, the whole universe can be described by a "universal wave function" which evolves according to the Schrodinger equation and never undergoes Copenhagen collapse.

Suppose we follow our electron state through a device which measures its spin. For example: by deflecting the electron using a magnetic field and recording the spin-dependent path of the deflected electron using a detector which amplifies the result. The result is recorded in some macroscopic way: e.g., a red or green bulb lights up depending on whether deflection was up or down. The whole process is described by the Schrodinger equation, with the final state being

(state)   =   (up) (device recorded up)   +   (down) (device recorded down)

Here "device" could, but does not necessarily, refer to the human or robot brain which saw the detector bulb flash. What matters is that the device is macroscopic and has a large (e.g., Avogadro's number) number of degrees of freedom. In that case, as noted by Everett, the two sub-states of the world (or device) after the measurement are effectively orthogonal (have zero overlap). In other words, the quantum state describing a huge number of emitted red photons and zero emitted green photons is orthogonal to the complementary state.

If a robot or human brain is watching the experiment, it perceives a unique outcome just as predicted by Copenhagen. That is, any macroscopic information processing device ends up in one of the possible macroscopic states (red light vs green light flash). The amplitude for those macroscopically different states to interfere is exponentially small, hence they can be treated thereafter as completely independent "branches" of the wavefunction.
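The exponential suppression can be illustrated with a toy model: suppose each of N microscopic degrees of freedom contributes a factor c < 1 to the overlap between the two branches, so the total overlap is c^N. (The per-mode overlap of 0.99 is an arbitrary assumption, not a real decoherence calculation.)

```python
import math

c = 0.99   # assumed per-mode overlap between the "red" and "green" branches
for n_modes in (10, 1_000, 100_000):
    log10_overlap = n_modes * math.log10(c)   # total overlap = c**n_modes
    print(n_modes, log10_overlap)
# 10 modes: overlap c**10 ~ 0.90, still appreciable.
# 100,000 modes: overlap ~ 10**(-436); utterly negligible, so the
# branches never interfere in practice.
```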

Success! The experimental outcome is predicted by a simpler (sans collapse) version of the theory. The tricky part: there are now necessarily parts of the final state (wavefunction) describing both the up and down outcomes (I saw red vs I saw green). These are the many worlds of the Everett interpretation.

Personally, I prefer to call it No Collapse instead of Many Worlds -- why not emphasize the advantageous rather than the confusing part of the interpretation?

Some eminent physicists who (as far as I can tell) believe(d) in MW: Feynman, Gell-Mann, Hawking, Steve Weinberg, Bryce DeWitt, David Deutsch, Sidney Coleman ... In fact, I was told that Feynman and Gell-Mann each claim(ed) to have independently invented MW, without any knowledge of Everett!

Saturday, July 21, 2007

Man vs machine: poker

It looks like we will soon add poker to the list of games (chess, checkers, backgammon) at which machines have surpassed humans. Note that we're talking about heads-up play here. I imagine machines are not as good at playing tournaments -- i.e., at picking out and exploiting weak players at the table.

How long until computers can play a decent game of Go?

Associated Press: ...Computers have gotten a lot better at poker in recent years; they're good enough now to challenge top professionals like Laak, who won the World Poker Tour invitational in 2004.

But it's only a matter of time before the machines take a commanding lead in the war for poker supremacy. Just as they already have in backgammon, checkers and chess, computers are expected to surpass even the best human poker players within a decade. They can already beat virtually any amateur player.

"This match is extremely important, because it's the first time there's going to be a man-machine event where there's going to be a scientific component," said University of Alberta computing science professor Jonathan Schaeffer.

The Canadian university's games research group is considered the best of its kind in the world. After defeating an Alberta-designed program several years ago, Laak was so impressed that he estimated his edge at a mere 5 percent. He figures he would have lost if the researchers hadn't let him examine the programming code and practice against the machine ahead of time.

"This robot is going to do just fine," Laak predicted.

The Alberta researchers have endowed the $50,000 contest with an ingenious design, making this the first man-machine contest to eliminate the luck of the draw as much as possible.

Laak will play with a partner, fellow pro Ali Eslami. The two will be in separate rooms, and their games will be mirror images of one another, with Eslami getting the cards that the computer received in its hands against Laak, and vice versa.

That way, a lousy hand for one human player will result in a correspondingly strong hand for his partner in the other room. At the end of the tournament the chips of both humans will be added together and compared to the computer's.
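A toy simulation shows why the duplicate design works: model each hand's winnings as a small skill edge plus a large luck term from the deal; the mirrored hands cancel the luck term. (The numbers below are illustrative assumptions, not data from the match.)

```python
import random
import statistics

random.seed(0)

def hand_result(skill_edge, luck):
    # One hand's winnings: a small skill term plus a big luck term.
    return skill_edge + luck

EDGE = 0.05                      # assumed per-hand skill edge of the pros
plain, duplicate = [], []
for _ in range(10_000):
    luck = random.gauss(0.0, 1.0)          # luck of the deal
    plain.append(hand_result(EDGE, luck))
    # Duplicate design: the partner plays the computer's cards (-luck),
    # and the two human scores are summed.
    duplicate.append(hand_result(EDGE, luck) + hand_result(EDGE, -luck))

print(statistics.stdev(plain))      # ~1: results swamped by the cards
print(statistics.stdev(duplicate))  # 0: only the skill signal is left
```

In the real contest the cancellation is imperfect, since the two matches unfold differently after the deal, but the luck of the cards themselves is removed.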

The two-day contest, beginning Monday, takes place not at a casino, but at the annual conference of the Association for the Advancement of Artificial Intelligence in Vancouver, British Columbia. Researchers in the field have taken an increasing interest in poker over the past few years because one of the biggest problems they face is how to deal with uncertainty and incomplete information.

"You don't have perfect information about what state the game is in, and particularly what cards your opponent has in his hand," said Dana S. Nau, a professor of computer science at the University of Maryland in College Park. "That means when an opponent does something, you can't be sure why."

As a result, it is much harder for computer programmers to teach computers to play poker than other games. In chess, checkers and backgammon, every contest starts the same way, then evolves through an enormous, but finite, number of possible states according to a consistent set of rules. With enough computing power, a computer could simply build a tree with a branch representing every possible future move in the game, then choose the one that leads most directly to victory.

...The game-tree approach doesn't work in poker because in many situations there is no one best move. There isn't even a best strategy. A top-notch player adapts his play over time, exploiting his opponent's behavior. He bluffs against the timid and proceeds cautiously when players who only raise on the strongest hands are betting the limit. He learns how to vary his own strategy so others can't take advantage of him.

That kind of insight is very hard to program into a computer. You can't just give the machine some rules to follow, because any reasonably competent human player will quickly intuit what the computer is going to do in various situations.

"What makes poker interesting is that there is not a magic recipe," Schaeffer said.

In fact, the simplest poker-playing programs fail because they are just a recipe, a set of rules telling the computer what to do based on the strength of its hand. A savvy opponent can soon gauge what cards the computer is holding based on how aggressively it is betting.

That's how Laak was able to defeat a program called Poker Probot in a contest two years ago in Las Vegas. As the match progressed Laak correctly intuited that the computer was playing a consistently aggressive game, and capitalized on that observation by adapting his own play.

Programmers can eliminate some of that weakness with game theory, a branch of mathematics pioneered by John von Neumann, who also helped develop the hydrogen bomb. In 1950 mathematician John Nash, whose life inspired the movie "A Beautiful Mind," showed that in certain games there is a set of strategies such that every player's return is maximized and no player would benefit from switching to a different strategy.

In the simple game "Rock, Paper, Scissors," for example, the best strategy is to randomly select each of the options an equal proportion of the time. If any player deviated from that strategy by following a pattern or favoring one option over the others, opponents would soon notice and adapt their own play to take advantage of it.
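The equilibrium property of the uniform strategy is easy to verify numerically; against a uniformly random player, every counter-strategy breaks exactly even:

```python
# Payoff to player 1 in rock-paper-scissors: +1 win, -1 loss, 0 tie.
payoff = {('R', 'R'): 0, ('R', 'P'): -1, ('R', 'S'): 1,
          ('P', 'R'): 1, ('P', 'P'): 0, ('P', 'S'): -1,
          ('S', 'R'): -1, ('S', 'P'): 1, ('S', 'S'): 0}

uniform = {'R': 1/3, 'P': 1/3, 'S': 1/3}   # the Nash equilibrium mix

def expected_payoff(p1_strategy, p2_move):
    # Player 1's expected payoff when player 2 plays a fixed pure move.
    return sum(p * payoff[(m, p2_move)] for m, p in p1_strategy.items())

# No pure response gains anything against the uniform mix, so no mixed
# response can either: the opponent can at best break even.
for move in 'RPS':
    print(move, expected_payoff(uniform, move))  # all 0.0
```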

Texas Hold 'em is a little more complicated than "Rock, Paper, Scissors," but Nash's math still applies. With game theory, computers know to vary their play so an opponent has a hard time figuring out whether they are bluffing or employing some other strategy.

But game theory has inherent limits. In Nash equilibrium terms, success doesn't mean winning — it means not losing.

``You basically compute a formula that can at least break even in the long run, no matter what your opponent does,'' said Darse Billings, a member of the Alberta team.

That's about where the best poker programs are today. Though the best game theory-based programs can usually hold their own against world-class human poker players, they aren't good enough to win big consistently.

Squeezing that extra bit of performance out of a computer requires combining the sheer mathematical power of game theory with the ability to observe an opponent's play and adapt to it. Many legendary poker players do that by being experts of human nature. They quickly learn the tics, gestures and other "tells" that reveal exactly what another player is up to.

A computer can't detect those, but it can keep track of how an opponent plays the game. It can observe how often an opponent tries to bluff with a weak hand, and how often she folds. Then the computer can take that information and incorporate it into the calculations that guide its own game.
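A minimal sketch of that kind of opponent modeling (the class and numbers are hypothetical illustrations, not the Alberta program's actual design): track an opponent's fold frequency and fold it into a bluffing decision.

```python
class OpponentModel:
    """Frequency-count model of how often an opponent folds to a bet."""
    def __init__(self):
        self.bets_faced = 0
        self.folds = 0

    def observe(self, folded):
        self.bets_faced += 1
        self.folds += folded

    def fold_rate(self):
        # Laplace smoothing so early estimates aren't 0 or 1.
        return (self.folds + 1) / (self.bets_faced + 2)

def bluff_ev(fold_rate, pot=10, bet=4):
    # EV of bluffing with a hopeless hand: win the pot when they fold,
    # lose the bet when they call.
    return fold_rate * pot - (1 - fold_rate) * bet

model = OpponentModel()
for folded in [True, True, False, True, True]:   # a timid opponent
    model.observe(folded)
print(model.fold_rate())                 # ~0.71
print(bluff_ev(model.fold_rate()) > 0)   # True: bluff the timid
```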

"The notion of forming some sort of model of what another player is like ... is a really important problem," Nau said.

Computer scientists are only just beginning to incorporate that ability into their programs; days before their contest with Laak and Eslami, the University of Alberta researchers are still trying to tweak their program's adaptive elements. Billings will say only this about what the humans have in store: "They will be guaranteed to be seeing a lot of different styles."

Thursday, July 19, 2007

Visit to Redmond

No startup odyssey is complete without a trip to Microsoft!

I'm told there are 35k employees on their sprawling campus. Average age a bit higher than at Google, atmosphere a bit more serious and corporate, but still signs of geekery and techno wizardry.

Fortunately for me, no one complained when I used a Mac Powerbook for my presentation :-)







Monday, July 16, 2007

50 years of Many Worlds

Max Tegmark has a nice essay in Nature on the Many Worlds (MW) interpretation of quantum mechanics.

Previous discussion of Hugh Everett III and MW on this blog.



Personally, I find MW more appealing than the conventional Copenhagen interpretation, which is certainly incomplete. This point of view is increasingly common among those who have to think about the QM of isolated, closed systems: quantum cosmologists, quantum information theorists, etc. Tegmark correctly points out in the essay below that progress in our understanding of decoherence in no way takes the place of MW in clarifying the problems with measurement and wavefunction collapse, although this is a common misconception.

However, I believe there is a fundamental problem with deriving Born's rule for probability of outcomes in the MW context. See research paper here and talk given at Caltech IQI here.

A brief guide for the perplexed:
In quantum mechanics, states can exist in superpositions, such as (for an electron spin)

(state)   =   (up)   +   (down)

When a measurement on this state is performed, the Copenhagen interpretation says that the state (wavefunction) "collapses" to one of the two possible outcomes:

(up)     or     (down),

with some probability for each outcome depending on the initial state (e.g., 1/2 and 1/2 of measuring up and down). One fundamental difference between quantum and classical mechanics is that even though we have specified the state above as precisely as is allowed by nature, we are still left with a probabilistic prediction for what will happen next. In classical physics knowing the state (e.g., position and velocity of a particle) allows perfect future prediction.

There is no satisfactory understanding of how or exactly when the Copenhagen wavefunction "collapse" proceeds. Indeed, collapse introduces confusing issues like consciousness: what, exactly, constitutes an "observer", capable of causing the collapse?

Everett suggested we simply remove wavefunction collapse from the theory. Then the state evolves in time always according to the Schrodinger equation. Suppose we follow our electron state through a device which measures its spin. For example: by deflecting the electron using a magnetic field and recording the spin-dependent path of the deflected electron using a detector which amplifies the result. The result is recorded in some macroscopic way: e.g., a red or green bulb lights up depending on whether deflection was up or down. The whole process is described by the Schrodinger equation, with the final state being

(state)   =   (up) (device recorded up)   +   (down) (device recorded down)

Here "device" could, but does not necessarily, refer to the human or robot brain which saw the detector bulb flash. What matters is that the device is macroscopic and has a large (e.g., Avogadro's number) number of degrees of freedom. In that case, as noted by Everett, the two states of the world (or device) after the measurement are effectively orthogonal (have zero overlap). In other words, the quantum state describing a huge number of emitted red photons and zero emitted green photons is orthogonal to the complementary state.

If a robot or human brain is watching the experiment, it perceives a unique outcome just as predicted by Copenhagen. Success! The experimental outcome is predicted by a simpler (sans collapse) version of the theory. The tricky part: there are now necessarily parts of the final state (wavefunction) describing both the up and down outcomes (I saw red vs I saw green). These are the many worlds of the Everett interpretation.

Personally, I prefer to call it No Collapse instead of Many Worlds -- why not emphasize the advantageous rather than the confusing part of the interpretation?

Do the other worlds exist? Can we interact with them? These are the tricky questions remaining...

Some eminent physicists who (as far as I can tell) believe in MW: Feynman, Gell-Mann, Hawking, Steve Weinberg, Bryce DeWitt, David Deutsch, ... In fact, I was told that Feynman and Gell-Mann each claim(ed) to have independently invented MW, without any knowledge of Everett!

Many lives in many worlds

Max Tegmark, Nature

Almost all of my colleagues have an opinion about it, but almost none of them have read it. The first draft of Hugh Everett's PhD thesis, the shortened official version of which celebrates its 50th birthday this year, is buried in the out-of-print book The Many-Worlds Interpretation of Quantum Mechanics. I remember my excitement on finding it in a small Berkeley book store back in grad school, and still view it as one of the most brilliant texts I've ever read.

By the time Everett started his graduate work with John Archibald Wheeler at Princeton University in New Jersey quantum mechanics had chalked up stunning successes in explaining the atomic realm, yet debate raged on as to what its mathematical formalism really meant. I was fortunate to get to discuss quantum mechanics with Wheeler during my postdoctorate years in Princeton, but never had the chance to meet Everett.

Quantum mechanics specifies the state of the Universe not in classical terms, such as the positions and velocities of all particles, but in terms of a mathematical object called a wavefunction. According to the Schrödinger equation, this wavefunction evolves over time in a deterministic fashion that mathematicians term 'unitary'. Although quantum mechanics is often described as inherently random and uncertain, there is nothing random or uncertain about the way the wavefunction evolves.

The sticky part is how to connect this wavefunction with what we observe. Many legitimate wavefunctions correspond to counterintuitive situations, such as Schrödinger's cat being dead and alive at the same time in a 'superposition' of states. In the 1920s, physicists explained away this weirdness by postulating that the wavefunction 'collapsed' into some random but definite classical outcome whenever someone made an observation. This add-on had the virtue of explaining observations, but rendered the theory incomplete, because there was no mathematics specifying what constituted an observation — that is, when the wavefunction was supposed to collapse.

Everett's theory is simple to state but has complex consequences, including parallel universes. The theory can be summed up by saying that the Schrödinger equation applies at all times; in other words, that the wavefunction of the Universe never collapses. That's it — no mention of parallel universes or splitting worlds, which are implications of the theory rather than postulates. His brilliant insight was that this collapse-free quantum theory is, in fact, consistent with observation. Although it predicts that a wavefunction describing one classical reality gradually evolves into a wavefunction describing a superposition of many such realities — the many worlds — observers subjectively experience this splitting merely as a slight randomness (see 'Not so random'), with probabilities consistent with those calculated using the wavefunction-collapse recipe.

Gaining acceptance

It is often said that important scientific discoveries go through three phases: first they are completely ignored, then they are violently attacked, and finally they are brushed aside as well known. Everett's discovery was no exception: it took more than a decade before it started getting noticed. But it was too late for Everett, who left academia disillusioned.

Everett's no-collapse idea is not yet at stage three, but after being widely dismissed as too crazy during the 1970s and 1980s, it has gradually gained more acceptance. At an informal poll taken at a conference on the foundations of quantum theory in 1999, physicists rated the idea more highly than the alternatives, although many more physicists were still 'undecided'. I believe the upward trend is clear.

Why the change? I think there are several reasons. Predictions of other types of parallel universes from cosmological inflation and string theory have increased tolerance for weird-sounding ideas. New experiments have demonstrated quantum weirdness in ever larger systems. Finally, the discovery of a process known as decoherence has answered crucial questions that Everett's work had left dangling.

For example, if these parallel universes exist, why don't we perceive them? Quantum superpositions cannot be confined, as most quantum experiments are, to the microworld. Because you are made of atoms, if atoms can be in two places at once in superposition, so can you.

The breakthrough came in 1970 with a seminal paper by H. Dieter Zeh, who showed that the Schrödinger equation itself gives rise to a type of censorship. This effect became known as 'decoherence', and was worked out in great detail by Wojciech Zurek, Zeh and others over the following decades. Quantum superpositions were found to remain observable only as long as they were kept secret from the rest of the world. The quantum card in our example (see 'Not so random') is constantly bumping into air molecules, photons and so on, which thereby find out whether it has fallen to the left or to the right, destroying the coherence of the superposition and making it unobservable. Decoherence also explains why states resembling classical physics have special status: they are the most robust to decoherence.

Science or philosophy?

The main motivation for introducing the notion of random wavefunction collapse into quantum physics had been to explain why we perceive probabilities and not strange macroscopic superpositions. After Everett had shown that things would appear random anyway (see 'Not so random') and decoherence had been found to explain why we never perceive anything strange, much of this motivation was gone. Even though the wavefunction technically never collapses in the Everett view, it is generally agreed that decoherence produces an effect that looks like a collapse and smells like a collapse.

In my opinion, it is time to update the many quantum textbooks that introduce wavefunction collapse as a fundamental postulate of quantum mechanics. The idea of collapse still has utility as a calculational recipe, but students should be told that it is probably not a fundamental process violating the Schrödinger equation so as to avoid any subsequent confusion. If you are considering a quantum textbook that does not mention Everett and decoherence in the index, I recommend buying a more modern one.

After 50 years we can celebrate the fact that Everett's interpretation is still consistent with quantum observations, but we face another pressing question: is it science or mere philosophy? The key point is that parallel universes are not a theory in themselves, but a prediction of certain theories. For a theory to be falsifiable, we need not observe and test all its predictions — one will do.

Because Einstein's general theory of relativity has successfully predicted many things we can observe, we also take seriously its predictions for things we cannot, such as the internal structure of black holes. Analogously, successful predictions by unitary quantum mechanics have made scientists take more seriously its other predictions, including parallel universes.

Moreover, Everett's theory is falsifiable by future lab experiments: no matter how large a system they probe, it says, they will not observe the wavefunction collapsing. Indeed, collapse-free superpositions have been demonstrated in systems with many atoms, such as carbon-60 molecules. Several groups are now attempting to create quantum superpositions of objects involving 10^17 atoms or more, tantalizingly close to our human macroscopic scale. There is also a global effort to build quantum computers which, if successful, will be able to factor numbers exponentially faster than classical computers, effectively performing parallel computations in Everett's parallel worlds.

The bird perspective

So Everett's theory is testable and so far agrees with observation. But should you really believe it? When thinking about the ultimate nature of reality, I find it useful to distinguish between two ways of viewing a physical theory: the outside view of a physicist studying its mathematical equations, like a bird surveying a landscape from high above, and the inside view of an observer living in the world described by the equations, like a frog being watched by the bird.

From the bird perspective, Everett's multiverse is simple. There is only one wavefunction, and it evolves smoothly and deterministically over time without any kind of splitting or parallelism. The abstract quantum world described by this evolving wavefunction contains within it a vast number of classical parallel storylines (worlds), continuously splitting and merging, as well as a number of quantum phenomena that lack a classical description. From their frog perspective, observers perceive only a tiny fraction of this full reality, and they perceive the splitting of classical storylines as quantum randomness.

What is more fundamental — the frog perspective or the bird perspective? In other words, what is more basic to you: human language or mathematical language? If you opt for the former, you would probably prefer a 'many words' interpretation of quantum mechanics, where mathematical simplicity is sacrificed to collapse the wavefunction and eliminate parallel universes.

But if you prefer a simple and purely mathematical theory, then you — like me — are stuck with the many-worlds interpretation. If you struggle with this you are in good company: in general, it has proved extremely difficult to formulate a mathematical theory that predicts everything we can observe and nothing else — and not just for quantum physics.

Moreover, we should expect quantum mechanics to feel counterintuitive, because evolution endowed us with intuition only for those aspects of physics that had survival value for our distant ancestors, such as the trajectories of flying rocks.

The choice is yours. But I worry that if we dismiss theories such as Everett's because we can't observe everything or because they seem weird, we risk missing true breakthroughs, perpetuating our instinctive reluctance to expand our horizons. To modern ears the Shapley–Curtis debate of 1920 about whether there was really a multitude of galaxies (parallel universes by the standards of the time) sounds positively quaint.

Everett asked us to acknowledge that our physical world is grander than we had imagined, a humble suggestion that is probably easier to accept after the recent breakthroughs in cosmology than it was 50 years ago. I think Everett's only mistake was to be born ahead of his time. In another 50 years, I believe we will be more used to the weird ways of our cosmos, and even find its strangeness to be part of its charm.

Saturday, July 14, 2007

Behavioral economics

I found this overview and intellectual history of behavioral economics via a link from Economist's View.

By now I think anyone who has looked at the data knows that the agents -- i.e., humans -- participating in markets are limited in many ways. (Only a mathematics-fetishizing autistic, completely disconnected from empiricism, could have thought otherwise.) If the agents aren't reliable or even particularly good processors of information, how does the system find its neoclassical equilibrium? (Can one even define the equilibrium if there are not individual and aggregate utility functions?)

The next stage of the argument is whether the market magically aggregates the decisions of the individual agents in such a way that their errors cancel. In some simple cases (see Wisdom of Crowds for examples) this may be the case, but in more complicated markets I suspect (and the data apparently show; see below) that cancellation does not occur and outcomes are suboptimal. Where does this leave neoclassical economics? You be the judge!

Related posts here (Mirowski) and here (irrational voters and rational agents?).

The paper (PDF) is here. Some excerpts below.

Opening quote from Samuelson and Conclusions:

I wonder how much economic theory would be changed if [..] found to be empirically untrue. I suspect, very little.
--Paul Samuelson

Conclusions

Samuelson’s claim at the beginning of this paper that a falsification would have little effect on his economics remains largely an open question. On the basis of the overview provided in this paper, however, two developments can be observed. With respect to the first branch of behavioral economics, Samuelson is probably right. Although the first branch proposes some radical changes to traditional economics, it protects Samuelson’s economics by labeling it a normative theory. Kahneman, Tversky, and Thaler propose a research agenda that sets economics off in a different direction, but at the same time saves traditional economics as the objective anchor by which to stay on course.

The second branch in behavioral economics is potentially much more destructive. It rejects Samuelson's economics both as a positive and as a normative theory. By doubting the validity of the exogeneity of preference assumption, introducing the social environment as an explanatory factor, and promoting neuroscience as a basis for economics, it offers a range of alternatives to traditional economics. With game theory it furthermore possesses a powerful tool that is increasingly used in a number of other related sciences. ...

Kahneman and Tversky:

Over the past ten years Kahneman has gone one step beyond showing how traditional economics descriptively fails. Especially prominent, both in the number of publications Kahneman devotes to it and in the attention it receives, is his reinterpretation of the notion of utility. For Kahneman, the main reason that people do not make their decisions in accordance with the normative theory is that their valuation and perception of the factors of these choices systematically differ from the objective valuation of these factors. This is what, among many other articles, Kahneman and Tversky (1979) shows. People's subjective perception of probabilities and their subjective valuation of utility differ from their objective values. A theory that attempts to describe people's decision behavior in the real world should thus start by measuring these subjective values of utility and probability. ...

Thaler:

Thaler distinguishes his work, and behavioral economics generally, from the experimental economics of, for instance, Vernon Smith and Charles Plott. Although Thaler's remarks in this respect are scattered and mostly made in passing, two recurring arguments can be observed. First, Thaler rejects experimental economics' suggestion that the market (institutions) will correct the quasi-rational behavior of the individual. Simply put, if one extends the coffee-mug experiment described above with an (experimental) market in which subjects can trade their mugs, the endowment effect doesn't change one single bit. Furthermore, there is no way in which a rational individual could use the market system to exploit quasi-rational individuals in the case of this endowment effect. The implication is that quasi-rational behavior can survive. As rational agents cannot exploit quasi-rational behavior, and as there seems in most cases to be no 'survival penalty' on quasi-rational behavior, the evolutionary argument doesn't work either.

Second, experimental economics’ market experiments are not convincing, according to Thaler; they rest on two wrong assumptions. First, they assume that individuals will quickly learn from their mistakes and discover the right solution. Thaler recounts how this has been falsified in numerous experiments. On the contrary, it is often the case that even when the correct solution has been repeatedly explained to them, individuals still persist in making the wrong decision. A second false assumption of experimental economics is to suppose that in the real world there exists ample opportunity to learn. This is labeled the Groundhog Day argument, in reference to the well-known movie starring Bill Murray. ... Subjects in (market) experiments who have to play the exact same game for tens or hundreds of rounds may perhaps be observed to (slowly) adjust to the rational solution. But real life is more like a constant sequence of the first few rounds of an experiment. The learning assumption of experimental economics is thus not valid.

Loewenstein:

But perhaps even more destructive for economics is the fact that individuals’ intertemporal choices can be shown to be fundamentally inconsistent. People who prefer A now over B now also prefer A in one month over B in two months. However, at the same time they also prefer B in one month and A in two months over A in one month and B in two months.
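This reversal falls out of hyperbolic discounting. A minimal sketch (amounts and the discount parameters are my own illustrative choices, not from the text): hyperbolic discounting flips the preference as both options recede into the future, while exponential discounting never does.

```python
# Hyperbolic discounting: v = x / (1 + k*t). Exponential: v = x * d**t.
# Parameters below are illustrative, chosen to exhibit the reversal.

def hyperbolic(x, t, k=1.0):
    return x / (1.0 + k * t)

def exponential(x, t, d=0.9):
    return x * d ** t

A, B = 100, 110   # smaller-sooner reward A vs larger-later reward B
delay = 1         # B arrives one period after A

# Immediate choice: A now vs B in one period -> prefer A
prefer_A_now = hyperbolic(A, 0) > hyperbolic(B, delay)

# Same pair shifted 12 periods into the future -> prefer B (reversal)
prefer_A_later = hyperbolic(A, 12) > hyperbolic(B, 12 + delay)

print(prefer_A_now, prefer_A_later)   # True False

# Exponential discounting is time-consistent: shifting both options by
# the same amount rescales both values by the same factor, so the
# preference never flips.
print(exponential(A, 0) > exponential(B, delay),
      exponential(A, 12) > exponential(B, 12 + delay))   # True True
```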

Camerer:

The ultimatum game (player one proposes a division of a fixed sum of money; player two either accepts, in which case the money is divided according to the proposed division, or rejects, in which case both players get nothing) has been played all over the world and always leads to the result that individuals do not play the ‘optimum’ (player one proposes the smallest amount possible to player two and player two accepts), but typically divide the money about half and half. The phenomenon is remarkably stable around the globe. However, the experiments had only been done with university students in advanced capitalist economies. The question is thus whether the results hold when tested in other environments.

The surprising result is not so much that the average proposed and accepted divisions in the small-scale societies differ from those of university students, but how they differ. Roughly, the average proposed and accepted divisions go from [80%,20%] to [40%,60%]. The members of the different societies thus show a remarkable difference in the division they propose and accept.

...“preferences over economic choices are not exogenous as the canonical model would have it, but rather are shaped by the economic and social interactions of everyday life. ..."
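The payoff rule of the ultimatum game described above is simple enough to sketch in a few lines (the acceptance threshold below is my own illustrative stand-in for the responder's fairness norm; it is not from the text):

```python
# Ultimatum game: proposer offers a cut of the pot; responder accepts
# (payoffs realized) or rejects (both get nothing).

def ultimatum(pot, offer, responder_threshold):
    """Return (proposer, responder) payoffs given the responder's
    minimum acceptable offer."""
    if offer >= responder_threshold:
        return pot - offer, offer
    return 0, 0

# Subgame-perfect 'optimum': tiny offer to a purely selfish responder.
print(ultimatum(100, 1, 1))    # (99, 1)

# Observed behavior: low offers get rejected, near-even splits do not.
print(ultimatum(100, 10, 30))  # (0, 0)
print(ultimatum(100, 50, 30))  # (50, 50)
```

The cross-cultural finding is then a statement about where both the offers and the thresholds sit in different societies, not about the rule itself.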

Camerer’s critique is similar to Loewenstein’s and can perhaps best be summed up with the conclusion that for Camerer there is no invisible hand. That is, for Camerer nothing mysterious happens between the behavior of the individual and the behavior of the market. If you know the behavior of the individuals, you can add up these behaviors to obtain the behavior of the market. In Anderson and Camerer (2000), for instance, it is shown that even when one allows learning to take place, a key issue for experimental economics, the game does not necessarily go to the global optimum, but as a result of path-dependency may easily get stuck in a sub-optimum. Camerer (1987) shows that, contrary to the common belief in experimental economics, decision biases persist in markets. In a laboratory experiment Camerer finds that a market institution does not reduce biases but may even increase them. ...

Finally,

The second branch of behavioral economics is organized around Camerer, Loewenstein, and Laibson. It considers the uncertainty of decision behavior to be of an endogenous or strategic nature. That is, the uncertainty depends upon the fact that, like the individual, the rest of the world also tries to make the best decision. The most important theory for investigating individual decision behavior under endogenous uncertainty is game theory. The second branch of behavioral economics draws less on Kahneman and Tversky. What it takes from them is the idea that traditional Samuelson economics is plainly false. It argues, however, that traditional economics is both positively/descriptively and normatively wrong. Except for a few special cases, it neither tells how individuals behave, nor how they should behave. The main project of the second branch is hence to build new positive theories of rational individual economic behavior under endogenous uncertainty. And here the race is basically still open.

Friday, July 13, 2007

Made in China

In an earlier post I linked to Bunnie Huang's blog, which describes (among other things) the manufacturing of his startup's hi-tech Chumby gadget in Shenzhen. At Foo Camp he and I ran a panel on the Future of China. In the audience, among others, were Jimmy Wales, the founder of Wikipedia, and Guido van Rossum, the creator of Python. Jimmy was typing on his laptop the whole time, but Guido asked a bunch of questions and recommended a book to me.

Bunnie has some more posts up (including video) giving his impressions of manufacturing in China. Highly recommended!

Made in China: Scale, Skill, Dedication, Feeding the factory.


Below: Bunnie on the line, debugging what turns out to be a firmware problem with the Chumby. Look at those MIT wire boys go! :-)

Wednesday, July 11, 2007

Hedge funds or market makers?

To what extent are Citadel, DE Shaw and Renaissance really just big market makers? The essay excerpted below is by Harry Kat, a finance professor and former trader who was profiled in the New Yorker recently.

First, from the New Yorker piece:

It is notoriously difficult to distinguish between genuine investment skill and random variation. But firms like Renaissance Technologies, Citadel Investment Group, and D. E. Shaw appear to generate consistently high returns and low volatility. Shaw’s main equity fund has posted average annual returns, after fees, of twenty-one per cent since 1989; Renaissance has reportedly produced even higher returns. (Most of the top-performing hedge funds are closed to new investors.) Kat questioned whether such firms, which trade in huge volumes on a daily basis, ought to be categorized as hedge funds at all. “Basically, they are the largest market-making firms in the world, but they call themselves hedge funds because it sells better,” Kat said. “The average horizon on a trade for these guys is something like five seconds. They earn the spread. It’s very smart, but their skill is in technology. It’s in sucking up tick-by-tick data, processing all those data, and converting them into second-by-second positions in thousands of spreads worldwide. It’s just algorithmic market-making.”

Next, the essay from Kat's academic web site. I suspect Kat exaggerates, but he does make an interesting point. Could a market maker really deliver such huge alpha? Only if it knows exactly where and when to take a position!

Of Market Makers and Hedge Funds

David and Ken both work for a large market-making firm and both have the same dream: to start their own company. One day, David decides to quit his job and start a traditional market-making company. He puts in $10m of his own money and finds 9 others who are willing to do the same. The result: a company with $100m in equity, divided equally over 10 shareholders, meaning that each shareholder will share equally in the company's operating costs and P&L. David will manage the company and will receive an annual salary of $1m for doing so.

Ken decides to quit as well. He is going to do things differently, though. Instead of packaging his market-making activities in the traditional corporate form, he is going to start a hedge fund. Like David, he puts in $10m of his own money. Like David, he finds 9 others willing to do the same. They are not called shareholders, however. They are investors in a hedge fund with a net asset value of $100m. Just like David, Ken has a double function. Apart from being one of the 10 investors in the fund, he will also be the fund's manager. As manager, he is entitled to 20% of the profit (over a 5% hurdle rate), the average incentive fee in the hedge fund industry.

At first sight, it looks like David and Ken have accomplished the same thing. Both have a market-making operation with $100m in capital and 9 others to share the benefits with. There is, however, one big difference. Suppose David and Ken both made a net $100m. In David's company this would be shared equally between the shareholders, meaning that, including his salary, David received $11m. In Ken's hedge fund things are different. As the manager of the fund, he takes 20% of the profit, which, taking into account the $5m hurdle, would leave $81m to be divided among the 10 investors. Since he is also one of those 10 investors, this means that Ken would pocket a whopping $27.1m in total. Now suppose that both David and Ken lost $100m. In that case David would lose $9m, but Ken would still only lose $10m, since as the fund's manager Ken gets 20% of the profit but does not participate in any losses.
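Kat's arithmetic is easy to check. A minimal sketch using only the figures from the essay ($100m of capital, 10 equal stakes, a $1m salary for David, a 20% fee over a 5% hurdle for Ken, all in $m):

```python
# Payoffs for the two structures in Kat's example, in $m.

def david_payoff(pnl, stake_frac=0.10, salary=1.0):
    """Shareholder-manager: equal share of P&L plus salary."""
    return stake_frac * pnl + salary

def ken_payoff(pnl, aum=100.0, stake_frac=0.10, fee=0.20, hurdle=0.05):
    """Investor-manager: incentive fee on profit above the hurdle,
    plus an investor's share of what remains; no fee on losses."""
    incentive = fee * max(pnl - hurdle * aum, 0.0)
    return incentive + stake_frac * (pnl - incentive)

print(david_payoff(100))   # 11.0
print(ken_payoff(100))     # 19 + 8.1 = 27.1
print(david_payoff(-100))  # -9.0
print(ken_payoff(-100))    # -10.0
```

The asymmetry is the whole point: on the upside the hedge-fund wrapper nearly triples the founder's take, while on the downside it costs him only $1m more.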

So if you wanted to be a market maker, how would you set yourself up? Of course, we are not the first to think of this. Some of the largest market-making firms in the world disguise themselves as hedge funds these days. Their activities are typically classified under fancy hedge fund names such as ‘statistical arbitrage’ or ‘managed futures’, but basically these funds are market makers. This includes some of the most admired names in the hedge fund business, such as D.E. Shaw, Renaissance, Citadel, and AHL, all of which are, not surprisingly, notorious for the sheer size of their daily trading volumes and their fairly consistent alpha.

The above observation leads to a number of fascinating questions. The most interesting of these is of course how much of the profits of these market-making hedge funds stems from old-fashioned market making and how much is due to truly special insights and skill? Is the bulk of what these funds do very similar to what traditional market-making firms do, or are they responsible for major innovations and/or have they embedded major empirical discoveries in their market making? They tend to employ lots of PhDs and make a lot of fuss about only hiring the best, etc. However, how much of that is window-dressing and how much is really adding value?

Another question is whether market-making hedge funds get treated differently than traditional market makers when they go out to borrow money or securities. Given prime brokers' eagerness to service hedge funds these days, one might argue that in this respect market-making hedge funds are again better off than traditional market makers.

So what is the conclusion? First of all, given the returns posted by the funds mentioned, it appears that high volume multi-market market making is a very good business to be in. Second, it looks like there could be a trade-off going on. Market-making hedge funds take a bigger slice of the pie, but the pie might be significantly bigger as well. Obviously, all of this could do with quite a bit more research. See if I can put a PhD on it.


HMK
04-02-2007

Monday, July 09, 2007

Theorists in diaspora

Passing the time, two former theoretical physicists analyze a research article which only just appeared on the web. Between them, they manage over a billion dollars in hedge fund assets. While their computers process data in the background, vacuuming up nickels from the trading ether, the two discuss color magnetic flux, quark gluon plasma and acausal correlations.

For fun, one of the two emails the paper to a former colleague, a humble professor still struggling with esoteric research...

Quark-gluon plasma paradox

D. Miskowiec

Gesellschaft fur Schwerionenforschung mbH, Planckstr. 1, 64291 Darmstadt

http://xxx.lanl.gov/abs/0707.0923

Based on simple physics arguments it is shown that the concept of quark-gluon plasma, a state of matter consisting of uncorrelated quarks, antiquarks, and gluons, has a fundamental problem.




The result? The following email message.

Dear Dr. Miskowiec,

I read your interesting preprint on a possible QGP paradox. My
comments are below.

Best regards,

Stephen Hsu


In the paper it seems you are discussing a caricature of QGP, indeed a straw man. I don't know whether belief in this straw man is widespread among nuclear theorists; perhaps it is. But QGP is, after all, merely the high temperature phase of QCD.

There *are* correlations (dynamics) that lead to preferential clustering of quarks into color neutral objects. These effects are absent at length scales much smaller than a fermi, due to asymptotic freedom. It is only on these short length scales that one can treat QCD as a (nearly) free gas of quarks and gluons. On sufficiently long length scales (i.e., much larger than a fermi) the system would still prefer to be color neutral. While it is true that at high temperatures the *linear* (confining) potential between color charges is no longer present, there is still an energetic cost for unscreened charge.

It's a standard result in finite temperature QCD that, even at high temperatures, there are still infrared (long distance) nonperturbative effects. These are associated with a scale related to the magnetic screening length of gluons. The resulting dynamics are never fully perturbative, although thermodynamic quantities such as entropy density, pressure, etc. are close to those of a free gas of quarks and gluons. The limit to our ability to compute these thermodynamic quantities beyond a certain level in perturbation theory arises from the nonperturbative effects I mention.

Consider the torus of QGP you discuss in your paper. Suppose I make a single "cut" in the torus, possibly separating quarks from each other in a way that leaves some uncancelled color charge. Once I pull the two faces apart by more than some distance (probably a few fermis), effects such as preferential hadronization into color neutral, integer baryon number, objects come into play. The energy required to make the cut and pull the faces apart is more than enough to create q-qbar pairs from the vacuum that can color neutralize each face. Note this is a *local* phenomenon taking place on fermi lengthscales.

I believe the solution to your paradox is the third possibility you list. See below, taken from the paper, bottom of column 1 p.3. I only disagree with the last sentence: high temperature QCD is *not* best described as a gas of hadrons, but *does* prefer color neutrality. No rigorous calculation ever claimed a lack of correlations except at very short distances (due to asymptotic freedom).

...The third possibility is that local correlations between quarks make some cutting surfaces more probable than the others when it comes to cutting the ring and starting the hadronization. Obviously, in absence of such correlations the QGP ring basically looks like in Fig. 3 and no preferred breaking points can be recognized. If, however, some kind of interactions lead to clustering of quarks and gluons into (white) objects of integer baryon numbers like in Fig. 4 then starting hadronization from several points of the ring at the same time will not lead to any problem. However, this kind of matter would be hadron resonance matter rather than the QGP.

Cooking the books: US News college rankings

I found this amusing article from Slate. It turns out the dirty scoundrels at US News need a "logarithmic adjustor" (fudge factor) to keep Caltech from coming out ahead of HYP (Harvard-Yale-Princeton). Note the article is from back in 2000. The earlier Gottlieb article mentioned below discussing the 1999 rankings (where Caltech came out number 1) is here.

For revealed preferences rankings of universities (i.e., where do students really choose to go when they are admitted to more than one school), see here.

Cooking the School Books (Yet Again)
The U.S. News college rankings get phonier and phonier.

By Nicholas Thompson
Posted Friday, Sept. 15, 2000, at 3:00 AM ET

This year, according to U.S. News & World Report, Princeton is the best university in the country and Caltech is No. 4. This represents a pretty big switcheroo—last year, Caltech was the best and Princeton the fourth.

Of course, it's not as though Caltech degenerated or Princeton improved over the past 12 months. As Bruce Gottlieb explained last year in Slate, changes like this come about mainly because U.S. News fiddles with the rules. Caltech catapulted up in 1999 because U.S. News changed the way it compares per-student spending; Caltech dropped back this year because the magazine decided to pretty much undo what it did last year.

But I think Gottlieb wasn't quite right when he said that U.S. News makes changes in its formula just so that colleges will bounce around and give the annual rankings some phony drama. The magazine's motives are more devious than that. U.S. News changed the scores last year because a new team of editors and statisticians decided that the books had been cooked to ensure that Harvard, Yale, or Princeton (HYP) ended up on top. U.S. News changed the rankings back because those editors and statisticians are now gone and the magazine wanted HYP back on top. Just before the latest scores came out, I wrote an article in the Washington Monthly suggesting that this might happen. Even so, the fancy footwork was a little shocking.

The story of how the rankings were cooked goes back to 1987, when the magazine's first attempt at a formula put a school in first that longtime editor Mel Elfin says he can't even remember, except that it wasn't HYP. So Elfin threw away that formula and brought in a statistician named Robert Morse who produced a new one. This one puts HYP on top, and Elfin frankly defends his use of this result to vindicate the process. He told me, "When you're picking the most valuable player in baseball and a utility player hitting .220 comes up as the MVP, it's not right."

For the next decade, Elfin and Morse essentially ran the rankings as their own fiefdom, and no one else at the magazine really knew how the numbers worked. But during a series of recent leadership changes, Morse and Elfin moved out of their leadership roles and a new team came in. What they found, they say, was a bizarre statistical measure that discounted major differences in spending, for what seemed to be the sole purpose of keeping HYP at the top. So, last year, as U.S. News itself wrote, the magazine "brought [its] methodology into line with standard statistical procedure." With these new rankings, Caltech shot up and HYP was displaced for the first time ever.

But the credibility of rankings like these depends on two semiconflicting rules. First, the system must be complicated enough to seem scientific. And second, the results must match, more or less, people's nonscientific prejudices. Last year's rankings failed the second test. There aren't many Techie graduates in the top ranks of U.S. News, and I'd be surprised if The New Yorker has published a story written by a Caltech grad, or even by someone married to one, in the last five years. Go out on the streets of Georgetown by the U.S. News offices and ask someone about the best college in the country. She probably won't start to talk about those hallowed labs in Pasadena.

So, Morse was given back his job as director of data research, and the formula was juiced to put HYP back on top. According to the magazine: "[W]e adjusted each school's research spending according to the ratio of its undergraduates to graduate students ... [and] we applied a logarithmic adjuster to all spending values." If you're not up on your logarithms, here's a translation: If a school spends tons and tons of money building machines for its students, they only get a little bit of credit. They got lots last year—but that was a mistake. Amazingly, the only categories where U.S. News applies this logarithmic adjuster are also the only categories where Caltech has a huge lead over HYP.
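The effect of such an adjuster is easy to illustrate. A toy sketch (the spending figures are invented for illustration, not taken from U.S. News data): a 4x per-student spending lead nearly disappears once scores are compared on a log scale.

```python
# Illustrative only: how a logarithmic adjuster compresses a large
# per-student spending advantage. Numbers are made up.
import math

spend_a = 200_000   # hypothetical big-science school, per student
spend_b = 50_000    # hypothetical HYP-style school, per student

linear_ratio = spend_a / spend_b                    # raw advantage
log_ratio = math.log(spend_a) / math.log(spend_b)   # after the adjuster

print(round(linear_ratio, 2))  # 4.0
print(round(log_ratio, 2))     # 1.13 -- the lead nearly vanishes
```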

The fact that the formulas had to be rearranged to get HYP back on top doesn't mean that those three aren't the best schools in the country, whatever that means. After all, who knows whether last year's methodology was better than this year's? Is a school's quality more accurately measured by multiplying its spending per student by 0.15 or by taking a logarithmic adjuster to that value? A case could also be made for taking the square root.

But the logical flaw in U.S. News' methodology should be obvious—at least to any Caltech graduate. If the test of a mathematical formula's validity is how closely the results it produces accord with pre-existing prejudices, then the formula adds nothing to the validity of the prejudice. It's just for show. And if you fiddle constantly with the formula to produce the result you want, it's not even good for that.

U.S. News really only has one justification for its rankings: They must be right because the schools we know are the best come out on top. Last year, that logic fell apart. This year, the magazine has straightened it all out and HYP's back in charge—with the help of a logarithmic adjuster.

Nicholas Thompson is a senior editor at Legal Affairs.

Saturday, July 07, 2007

Myth of the Rational Voter

The New Yorker has an excellent discussion by Louis Menand of Bryan Caplan's recent book The Myth of the Rational Voter.

Best sentence in the article (I suppose this applies to physicists as well):

Caplan is the sort of economist (are there other sorts? there must be) who engages with the views of non-economists in the way a bulldozer would engage with a picket fence if a bulldozer could express glee.

Short summary (obvious to anyone who has thought about democracy): voters are clueless, and the resulting policies and outcomes are suboptimal, but allowing everyone to have their say lends stability and legitimacy to the system. Democracy is a tradeoff, of course! While a wise and effective dictator (e.g., Lee Kuan Yew of Singapore, or, in Caplan's mind, a board of economic "experts") might outperform the electorate over a short period of time, the more common kind of dictator (stupid, egomaniacal) is capable of much, much worse. Without democracy, what keeps a corrupt and stupid dictator from succeeding the efficient and benevolent one?

The analogous point for markets is that, for a short time (classic example: during a war), good central planning might be more effective for certain goals than market mechanisms. But over the long haul distributing the decisions over many participants will give a better outcome, both because of the complexity of economic decision making (e.g., how many bagels does NYC need each day? can a committee figure this out?) and because of the eventuality of bad central planning. When discussing free markets, people on the left always assume the alternative is good central planning, while those on the right always assume the opposite.

Returning to Caplan, his view isn't just that voters are uninformed or stupid. He attacks an apparently widely believed feel-good story that says although most voters are clueless their mistakes are random and magically cancel out when aggregated, leaving the outcome in the hands of the wise fraction of the electorate. What a wonderfully fine-tuned dynamical system! (That is how markets are supposed to work, except when they don't, and instead horribly misprice things.) Caplan points out several common irrationalities of voters that do not cancel out, but rather tend to bias government in particular directions.
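Caplan's point about non-canceling errors is worth a toy simulation (numbers invented for the sketch): zero-mean individual errors do average away as the electorate grows, but a shared bias survives aggregation at any size.

```python
# Miracle of Aggregation vs shared bias: individual errors are large
# in both cases; only the zero-mean errors cancel in the average.
import random

random.seed(0)
truth = 50.0       # the "correct" policy position, arbitrary units
n = 100_000        # electorate size

# Unbiased voters: noisy individually, accurate in aggregate.
unbiased = [truth + random.gauss(0, 20) for _ in range(n)]
print(abs(sum(unbiased) / n - truth) < 1.0)        # True

# Voters sharing a systematic bias of +10: the average inherits it.
biased = [truth + 10 + random.gauss(0, 20) for _ in range(n)]
print(abs(sum(biased) / n - (truth + 10)) < 1.0)   # True: mean near 60
```

The same mechanism applies to markets: diversification kills idiosyncratic error, not correlated error.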

Any data or argument supporting the irrationality of voters and the suboptimality of democratic outcomes can be applied just as well to agents in markets. (What Menand calls "shortcuts" below others call heuristics or bounded cognition.) The claim that people make better decisions in market situations (e.g., buying a house or choosing a career) because they are directly affected by the outcome is only marginally convincing to me. Evaluating the optimality of many economic decisions is about as hard as figuring out whether a particular vote or policy decision was optimal. Did your vote for Nader lead to G.W. Bush and the Iraq disaster? Did your votes for Reagan help end the cold war safely and in our favor? Would you have a higher net worth if you had bought a smaller house and invested the rest of your down payment in equities? Would the extra money in the bank compensate you for the reduced living space?

Do typical people sit down and figure these things out? Do they come to correct conclusions, or just fool themselves? I doubt most people could even agree as to Reagan's effect on the cold war, over 20 years ago!

I don't want to sound too negative. Let me clarify, before one of those little bulldozers engages with me :-) I regard markets as I regard democracy: flawed and suboptimal, but the best practical mechanisms we have for economic distribution and governance, respectively. My main dispute is with academics who really believe that woefully limited agents are capable of finding global optima.

The average voter is not held in much esteem by economists and political scientists, and Caplan rehearses some of the reasons for this. The argument of his book, though, is that economists and political scientists have misunderstood the problem. They think that most voters are ignorant about political issues; Caplan thinks that most voters are wrong about the issues, which is a different matter, and that their wrong ideas lead to policies that make society as a whole worse off. We tend to assume that if the government enacts bad policies, it’s because the system isn’t working properly—and it isn’t working properly because voters are poorly informed, or they’re subject to demagoguery, or special interests thwart the public’s interest. Caplan thinks that these conditions are endemic to democracy. They are not distortions of the process; they are what you would expect to find in a system designed to serve the wishes of the people. “Democracy fails,” he says, “because it does what voters want.” It is sometimes said that the best cure for the ills of democracy is more democracy. Caplan thinks that the best cure is less democracy. He doesn’t quite say that the world ought to be run by economists, but he comes pretty close.

The political knowledge of the average voter has been tested repeatedly, and the scores are impressively low. In polls taken since 1945, a majority of Americans have been unable to name a single branch of government, define the terms “liberal” and “conservative,” and explain what the Bill of Rights is. More than two-thirds have reported that they do not know the substance of Roe v. Wade and what the Food and Drug Administration does. Nearly half do not know that states have two senators and three-quarters do not know the length of a Senate term. More than fifty per cent of Americans cannot name their congressman; forty per cent cannot name either of their senators. Voters’ notions of government spending are wildly distorted: the public believes that foreign aid consumes twenty-four per cent of the federal budget, for example, though it actually consumes about one per cent.

Even apart from ignorance of the basic facts, most people simply do not think politically. They cannot see, for example, that the opinion that taxes should be lower is incompatible with the opinion that there should be more government programs. Their grasp of terms such as “affirmative action” and “welfare” is perilously uncertain: if you ask people whether they favor spending more on welfare, most say no; if you ask whether they favor spending more on assistance to the poor, most say yes. And, over time, individuals give different answers to the same questions about their political opinions. People simply do not spend much time learning about political issues or thinking through their own positions. They may have opinions—if asked whether they are in favor of capital punishment or free-trade agreements, most people will give an answer—but the opinions are not based on information or derived from a coherent political philosophy. They are largely attitudinal and ad hoc.

For fifty years, it has been standard to explain voter ignorance in economic terms. Caplan cites Anthony Downs’s “An Economic Theory of Democracy” (1957): “It is irrational to be politically well-informed because the low returns from data simply do not justify their cost in time and other resources.” In other words, it isn’t worth my while to spend time and energy acquiring information about candidates and issues, because my vote can’t change the outcome. I would not buy a car or a house without doing due diligence, because I pay a price if I make the wrong choice. But if I had voted for the candidate I did not prefer in every Presidential election since I began voting, it would have made no difference to me (or to anyone else). It would have made no difference if I had not voted at all. This doesn’t mean that I won’t vote, or that, when I do vote, I won’t care about the outcome. It only means that I have no incentive to learn more about the candidates or the issues, because the price of my ignorance is essentially zero. According to this economic model, people aren’t ignorant about politics because they’re stupid; they’re ignorant because they’re rational. If everyone doesn’t vote, then the system doesn’t work. But if I don’t vote, the system works just fine. So I find more productive ways to spend my time.

Political scientists have proposed various theories aimed at salvaging some dignity for the democratic process. One is that elections are decided by the ten per cent or so of the electorate who are informed and have coherent political views. In this theory, the votes of the uninformed cancel each other out, since their choices are effectively random: they are flipping a coin. So candidates pitch their appeals to the informed voters, who decide on the merits, and this makes the outcome of an election politically meaningful. Another argument is that the average voter uses “shortcuts” to reach a decision about which candidate to vote for. The political party is an obvious shortcut: if you have decided that you prefer Democrats, you don’t really need more information to cast your ballot. Shortcuts can take other forms as well: the comments of a co-worker or a relative with a reputation for political wisdom, or a news item or photograph (John Kerry windsurfing) that can be used to make a quick-and-dirty calculation about whether the candidate is someone you should support. (People argue about how valid these shortcuts are as substitutes for fuller information, of course.)

There is also the theory of what Caplan calls the Miracle of Aggregation. As James Surowiecki illustrates in “The Wisdom of Crowds” (2004), a large number of people with partial information and varying degrees of intelligence and expertise will collectively reach better or more accurate results than will a small number of like-minded, highly intelligent experts. Stock prices work this way, but so can many other things, such as determining the odds in sports gambling, guessing the number of jelly beans in a jar, and analyzing intelligence. An individual voter has limited amounts of information and political sense, but a hundred million voters, each with a different amount of information and political sense, will produce the “right” result. Then, there is the theory that people vote the same way that they act in the marketplace: they pursue their self-interest. In the market, selfish behavior conduces to the general good, and the same should be true for elections.

Caplan thinks that democracy as it is now practiced cannot be salvaged, and his position is based on a simple observation: “Democracy is a commons, not a market.” A commons is an unregulated public resource—in the classic example, in Garrett Hardin’s essay “The Tragedy of the Commons” (1968), it is literally a commons, a public pasture on which anyone may graze his cattle. It is in the interest of each herdsman to graze as many of his own cattle as he can, since the resource is free, but too many cattle will result in overgrazing and the destruction of the pasture. So the pursuit of individual self-interest leads to a loss for everyone. (The subject Hardin was addressing was population growth: someone may be concerned about overpopulation but still decide to have another child, since the cost to the individual of adding one more person to the planet is much less than the benefit of having the child.)

Caplan rejects the assumption that voters pay no attention to politics and have no real views. He thinks that voters do have views, and that they are, basically, prejudices. He calls these views “irrational,” because, once they are translated into policy, they make everyone worse off. People not only hold irrational views, he thinks; they like their irrational views. In the language of economics, they have “demand for irrationality” curves: they will give up y amount of wealth in order to consume x amount of irrationality. Since voting carries no cost, people are free to be as irrational as they like. They can ignore the consequences, just as the herdsman can ignore the consequences of putting one more cow on the public pasture. “Voting is not a slight variation on shopping,” as Caplan puts it. “Shoppers have incentives to be rational. Voters do not.”

...But, as Caplan certainly knows, though he does not give sufficient weight to it, the problem, if it is a problem, is more deeply rooted. It’s not a matter of information, or the lack of it; it’s a matter of psychology. Most people do not think politically, and they do not think like economists, either. People exaggerate the risk of loss; they like the status quo and tend to regard it as a norm; they overreact to sensational but unrepresentative information (the shark-attack phenomenon); they will pay extravagantly to punish cheaters, even when there is no benefit to themselves; and they often rank fairness and reciprocity ahead of self-interest. Most people, even if you explained to them what the economically rational choice was, would be reluctant to make it, because they value other things—in particular, they want to protect themselves from the downside of change. They would rather feel good about themselves than maximize (even legitimately) their profit, and they would rather not have more of something than run the risk, even if the risk is small by actuarial standards, of having significantly less.

People are less modern than the times in which they live, in other words, and the failure to comprehend this is what can make economists seem like happy bulldozers. ...
