Saturday, February 11, 2006

Taleb podcast: What do we know?

Here is an excellent talk by Nassim Taleb, hedge fund manager and author of the book Fooled by Randomness, which I highly recommend. Taleb addresses the prediction problem: how do you evaluate your knowledge of the world, other than by testing your ability to make predictions about what will happen next? (Post-diction is too easy: one can always construct post-hoc stories consistent with the data. Sorry, historians :-) He then notes that in certain fields, like finance, economics and social science, the accuracy of predictions, when carefully studied, is dismal. (See my earlier discussion of Tetlock's research, which confirms this quantitatively. Tetlock had to work hard at this, since he looked at softer, non-quantitative predictions, as in foreign affairs. If you stick to quantitative predictions, such as equity or commodity prices, it is much easier to see that prognosticators are terrible.)

Feynman once said, holding up his fist and rotating it as if it were a charged sphere or something, "Physics is about answering the question: if I do this, what happens next?" I think this is very much in the spirit of Taleb's viewpoint.

One nice experiment Taleb describes shows how overconfident we are in our ability to predict the future.

Ask a group of people to make a prediction -- for example, how many Corollas did Toyota sell last year (or will it sell next year)? We're not interested in the central value of each prediction; we're interested in how well each person understands its accuracy. So we say: give me a range of how many Corollas were sold last year, such that the real value lies in that range at 98 percent confidence. Even if you know nothing about the auto industry, you can incorporate that ignorance into your guess by choosing a very large range (e.g., between 10,000 and 10 million).

However, people are systematically overconfident in the quality of their predictions -- by at least an order of magnitude, says Taleb. Typically, their 98 percent confidence interval is more like a 60 percent confidence interval. In other words, if you tried this experiment with 100 students who correctly understood their own state of knowledge, you would expect only about 2 of them to choose ranges that don't include the actual value. Instead, what you find is that 40 or so of the ranges will not contain the correct number! (Their implied error rate of 2% is a gross underestimate.)
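To make the calibration test concrete, here is a minimal Python sketch (my own illustration, not from the talk; the class size, ranges and sales figure are all invented) that compares a group's stated confidence with its actual hit rate:

    import random

    def calibration_check(intervals, true_value, stated_confidence=0.98):
        """Compare a stated confidence level with the observed hit rate."""
        hits = sum(1 for low, high in intervals if low <= true_value <= high)
        hit_rate = hits / len(intervals)
        print(f"stated confidence: {stated_confidence:.0%}")
        print(f"observed hit rate:  {hit_rate:.0%}")

    # Hypothetical classroom data: 100 respondents, roughly 40 of whom
    # give ranges that are far too narrow, as in the result Taleb describes.
    random.seed(0)
    true_sales = 300_000  # illustrative figure, not the real Corolla number
    intervals = []
    for _ in range(100):
        if random.random() < 0.6:   # honest-about-ignorance respondent
            intervals.append((10_000, 10_000_000))
        else:                       # overconfident respondent: narrow range
            center = random.uniform(50_000, 900_000)
            intervals.append((0.95 * center, 1.05 * center))

    calibration_check(intervals, true_sales)

Run on real classroom answers instead of simulated ones, the gap between the two printed numbers is exactly the overconfidence Taleb is talking about.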

Taleb claims that the worst-performing groups on this kind of exercise (regardless of the prediction requested) are stock analysts and economists, probably because the two groups are selected for a systematic bias toward overconfidence in dealing with noisy data. I wonder how physicists would do? I often stress that in communicating some information to a colleague (e.g., "A neutrino with those properties is ruled out by LEP data"), it is useful to include a confidence level as well ("I have thought carefully about the loopholes, have looked at the LEP analysis, and am 99% confident that what I just said to you is true"). Rather than transmitting a single statement, it is better to transmit the statement plus a confidence estimate: the utility of the pair is dramatically greater than that of the statement alone.

My feeling is that when it comes to discussing the implications of a particular experiment, physicists are trained to accurately understand the confidence intervals. However, when it comes to a question like "How likely is it that supersymmetry solves the hierarchy problem?" I suspect we are as overconfident as any other group in the accuracy of our predictions.

7 comments:

Anonymous said...

I'll bet (with only 50% confidence) that physicists are actually even worse than economists and stock analysts. I mean, if there is anything that physicists are known for it is going into other fields and thinking they can be just as good at that field as they were in physics! Lack of confidence is not allowed in physics ;)

Another interesting question is how well repeated exposure to random events helps your ability to produce confidence levels. It would be interesting to see how poker players, who have repeated exposure to calculating odds and then seeing how these calculations pay off in practice, do in assigning confidence intervals.

Steve Hsu said...

Physicists as a population might exhibit selection bias for *general* overconfidence, but perhaps not specifically for overconfidence in dealing with noisy data, the way a stock analyst might be. In fact, it may be the other way around: if a physicist is caught over-generalizing from some statistical data, they will endure a good thrashing from their peers.

I think it's clear that good poker players have trained themselves to understand confidence levels when dealing with cards, just as scientists have trained themselves to do so with data from lab experiments. How well people then generalize that training is a good question. Jim Simons, Ed Thorp and Claude Shannon would likely claim that scientists are *good* at extending that discipline to financial markets. (At least, they have an advantage over other people without similar training.)

Seth said...

Thanks for the link. Taleb has a good point and is a fun speaker.

Anonymous said...

Funny.... Taleb is so confident that his fund will make money :)

Anonymous said...

I thought he made a rather bold assertion: securities analysts have the worst forecasting record. What other occupations/fields/problems were tested? How were they tested? I find it improbable that he is referencing a valid study, since the assertion is so poorly circumscribed. It is probably just a snide remark about an easy target (rich Wall Streeters!).

Further, the assertion that amateurs are overconfident in their knowledge of trivia (how many Corollas were sold last year) is quite different from the assertion that experts are consistently overconfident, or that users of forecasts are unaware of the relevant standard errors.

Carson C. Chow said...

Instead of confidence intervals, an even better approach would be to relay information in a Bayesian way: you give your prior and compute your posterior probability. That way, your assumptions are on full display.
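A minimal sketch of this Bayesian style of reporting (an illustration added here with invented numbers, not part of the original comment): state the prior and the strength of the evidence, and Bayes' rule gives the posterior.

    def posterior(prior, p_evidence_if_true, p_evidence_if_false):
        """Bayes' rule for a yes/no claim: P(claim | evidence)."""
        num = prior * p_evidence_if_true
        return num / (num + (1 - prior) * p_evidence_if_false)

    # e.g. "ruled out by LEP data": declare the prior, then update on
    # how strongly the analysis favors the claim (numbers invented).
    p = posterior(prior=0.5, p_evidence_if_true=0.99, p_evidence_if_false=0.05)
    print(f"posterior = {p:.2f}")  # ~0.95, with the assumptions explicit

Change the stated prior and the posterior changes with it, which is the point: the assumptions are visible rather than buried in a bare confidence number.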

Anonymous said...

Poker outcomes follow a normal distribution; market returns do not. Good poker players take advantage of overbets by other players and of pot odds. Security analysts are salesmen: witness GS calling for 180-dollar oil in July and then reversing to seventy-dollar oil in October. Physicists, for the most part, change hypotheses based on data. Your ignorance is funny; you probably believe in global warming because you observed a hot summer. The ludic fallacy is modern man's new religion.

Be sure to take your thoughts with a grain of salt.
