Wednesday, December 07, 2005

Expert predictions

How good are "experts" at making accurate predictions? Much worse than you think, says psychology professor Philip Tetlock (Haas School of Business at UC Berkeley) in his new book Expert Political Judgment: How Good Is It? How Can We Know? (See New Yorker review.) In detailed studies, in which "experts" were asked to make forecasts about the future (predicting which of three possible futures would occur), it was found that the "experts" did no better than well-informed non-experts! As Tetlock says, “We reach the point of diminishing marginal predictive returns for knowledge disconcertingly quickly,” he reports. “In this age of academic hyperspecialization, there is no reason for supposing that contributors to top journals—distinguished political scientists, area study specialists, economists, and so on—are any better than journalists or attentive readers of the New York Times in ‘reading’ emerging situations.”

Now, I expect the performance of scientific experts to be somewhat better. Questions like "How hot will that spacecraft get while in orbit around Mercury?" or "How many CPU cycles will it take to compute that integral?" are ones where the predictions of real experts will far outperform those of lay people. Perhaps there is something fundamentally different about scientific versus non-scientific expertise. The last bit below about predicting freshman academic performance is amazing (but not unexpected). Let a simple one- or two-parameter model pick your freshman class :-)

Finally, what type of "expert" would you trust to run your money (make investment predictions)? As Jim Simons said: "The advantage scientists bring into the game is less their mathematical or computational skills than their ability to think scientifically. They are less likely to accept an apparent winning strategy that might be a mere statistical fluke." In other words, they know when they know something, while others might just be fooling themselves ;-)
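
Simons's point about statistical flukes is easy to see numerically. Below is a minimal toy simulation (my own made-up numbers, nothing to do with Renaissance's actual methods): generate a large number of purely random trading strategies and look only at the best performer.

```python
# Toy demonstration of a "winning" strategy that is pure luck: among many
# coin-flip strategies, the best one looks skilled. All numbers are made up.
import numpy as np

rng = np.random.default_rng(0)
n_strategies, n_days = 1000, 252           # one year of daily returns

# each strategy's daily return: mean zero, 1% volatility -- pure noise
returns = rng.normal(loc=0.0, scale=0.01, size=(n_strategies, n_days))
annual = returns.sum(axis=1)               # cumulative annual return per strategy

print(f"best of {n_strategies} coin-flip strategies: {annual.max():+.1%}")
```

The best of a thousand such strategies typically shows an annual return around +50%, which would look like genius to anyone who never saw the other 999 losers. Asking "how many strategies were tried before this one?" is the kind of scientific thinking Simons has in mind.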

New Yorker: Tetlock is a psychologist—he teaches at Berkeley—and his conclusions are based on a long-term study that he began twenty years ago. He picked two hundred and eighty-four people who made their living “commenting or offering advice on political and economic trends,” and he started asking them to assess the probability that various things would or would not come to pass, both in the areas of the world in which they specialized and in areas about which they were not expert. Would there be a nonviolent end to apartheid in South Africa? Would Gorbachev be ousted in a coup? Would the United States go to war in the Persian Gulf? Would Canada disintegrate? (Many experts believed that it would, on the ground that Quebec would succeed in seceding.) And so on. By the end of the study, in 2003, the experts had made 82,361 forecasts.

...Tetlock got a statistical handle on his task by putting most of the forecasting questions into a “three possible futures” form. The respondents were asked to rate the probability of three alternative outcomes: the persistence of the status quo, more of something (political freedom, economic growth), or less of something (repression, recession). And he measured his experts on two dimensions: how good they were at guessing probabilities (did all the things they said had an x per cent chance of happening happen x per cent of the time?), and how accurate they were at predicting specific outcomes. The results were unimpressive. On the first scale, the experts performed worse than they would have if they had simply assigned an equal probability to all three outcomes—if they had given each possible future a thirty-three-per-cent chance of occurring. Human beings who spend their lives studying the state of the world, in other words, are poorer forecasters than dart-throwing monkeys, who would have distributed their picks evenly over the three choices.
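
An aside on the scorekeeping before the quote continues: the standard way to grade probabilistic forecasts like these is the Brier score, and the dart-throwing baseline is simply the uniform forecast of 1/3 for each outcome. Here is a minimal sketch with made-up forecasts (Tetlock's actual methodology is more elaborate than this):

```python
# Brier scoring of three-outcome forecasts vs. the uniform 1/3 baseline.
# The forecasts below are invented for illustration.
import numpy as np

def brier(probs, outcome):
    """probs: length-3 probability forecast; outcome: index of what happened."""
    actual = np.zeros(3)
    actual[outcome] = 1.0
    return float(np.sum((np.asarray(probs) - actual) ** 2))

# forecasts over (status quo, more of X, less of X), paired with the
# index of the outcome that actually occurred
forecasts = [([0.6, 0.3, 0.1], 2),   # confident expert, wrong
             ([0.2, 0.5, 0.3], 1),   # right
             ([0.7, 0.2, 0.1], 1)]   # confident, wrong again

expert = np.mean([brier(p, o) for p, o in forecasts])
dart = np.mean([brier([1/3, 1/3, 1/3], o) for _, o in forecasts])
print(f"expert: {expert:.3f}   uniform baseline: {dart:.3f}")   # lower is better
```

Lower is better, and a confidently wrong expert loses badly to the uniform baseline, which is exactly the pattern Tetlock reports.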

Tetlock also found that specialists are not significantly more reliable than non-specialists in guessing what is going to happen in the region they study.

...“Expert Political Judgment” is just one of more than a hundred studies that have pitted experts against statistical or actuarial formulas, and in almost all of those studies the people either do no better than the formulas or do worse. In one study, college counsellors were given information about a group of high-school students and asked to predict their freshman grades in college. The counsellors had access to test scores, grades, the results of personality and vocational tests, and personal statements from the students, whom they were also permitted to interview. Predictions that were produced by a formula using just test scores and grades were more accurate.
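
To make concrete what "a formula using just test scores and grades" might look like, here is a minimal sketch: an ordinary least-squares fit of freshman GPA on two inputs. The training data are invented for illustration; the point of these studies is that something this simple beats the counsellors.

```python
# A simple linear model for freshman GPA from two inputs. All data invented.
import numpy as np

# (test-score percentile, high-school GPA) -> observed freshman GPA
X = np.array([[85, 3.9], [70, 3.2], [95, 3.8], [60, 2.9],
              [90, 3.5], [75, 3.6], [55, 2.7], [80, 3.3]], dtype=float)
y = np.array([3.6, 2.9, 3.7, 2.5, 3.3, 3.2, 2.4, 3.0])

# ordinary least squares with an intercept column
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

applicant = np.array([88.0, 3.7, 1.0])     # a new applicant's numbers (+ intercept)
print(f"predicted freshman GPA: {applicant @ coef:.2f}")
```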

9 comments:

  1. Anonymous 9:34 AM

    One thing that puzzles me is the following.

    As you have noted, Simons has been extremely successful in consistently doing well in the market. So his success is no fluke.

    Now, models of finance all assume that financial instruments evolve according to random laws, SDEs to be precise. How can one find "patterns" in such a situation? This, to me, would be a classic case where I would expect to see patterns where there are none.

    Or is it that the financial market is deterministic on short time scales, while appearing to be "random" on longer time scales? This would be the opposite of the usual case: short-term randomness, but simple, phenomenological deterministic laws in the long term (like Ohm's law, for instance).

    MFA

  2. As Bohr once remarked: predictions are difficult, especially about the future.

  3. Anonymous 1:33 PM

    I have been told that at Caltech they used to predict, when you came in, what your GPA would be. Then, when you were a Junior or Senior, you could go to the admissions office and check what they predicted for you.

  4. MFA: the usual efficient market story is that any pattern is eventually erased as enough arbitrageurs become aware of the strategy and "arb it away" :-)

    How long does "eventually" take? What if the patterns are discovered only through very sophisticated and computationally intensive means? Also, Renaissance are presumably discovering new correlations all the time, and relinquishing old ones as they get arbed away, or go away for other reasons.

    DBacon: wow, I wish I had done that. I bet I more or less performed according to prediction :-)

  5. Anonymous 1:43 PM

    Steve,

    Many thanks for the nice and clear explanation!

    MFA

  6. Anonymous 4:48 PM

    High school guidance counselors, stock analysts, college admissions officers, realtors... these are all "value subtracted" professions that derive their value from gatekeeper roles and have no incentive to provide accurate forecasts.

  7. Anonymous 7:36 PM

    For those who would like to delve more deeply into the psychology literature on this subject, a good place to start would be the work of Robyn Dawes (Social and Decision Sciences, Carnegie Mellon).

  8. Anonymous 8:45 AM

    Steve wrote, "In detailed studies in which 'experts' were asked to make forecasts about the future (predicting which of three possible futures would occur), the 'experts' did no better than well-informed non-experts!"

    Yeah, there was a large experiment on this. It was called "the invasion of Iraq."

    Some of the so-called "experts" failed drastically. Of course, it didn't seem to hurt their careers too much.
