Here is an excellent talk by Nassim Taleb, hedge fund manager and author of the book
Fooled by Randomness, which I highly recommend. Taleb addresses the
prediction problem: how do you evaluate your knowledge of the world, other than by testing your ability to make predictions about what will happen next? (Post-diction is too easy - one can always construct post-hoc stories which are consistent with the data. Sorry, historians :-) He then notes that in certain fields like finance, economics and social science, the accuracy of predictions, when carefully studied, is dismal. (See my earlier
discussion of Tetlock's research, which confirms this quantitatively. Tetlock had to work hard at it, since he studied softer, non-quantitative predictions, such as those in foreign affairs. If you stick to quantitative predictions, such as forecasts of equity or commodity prices, it is much easier to see that prognosticators are terrible.)
Feynman once said, holding up his fist and rotating it as if it were a charged sphere or something, "Physics is about answering the question: if I do this, what happens next?" I think this is very much in the spirit of Taleb's viewpoint.
One nice experiment Taleb describes shows how
overconfident we are in our ability to predict the future.
Ask a group of people to make a prediction -- for example, how many Corollas will Toyota sell next (or last) year? We're not interested in the central values of their predictions. We're more interested in their
understanding of the accuracy of those predictions. So we say: give me a range that covers the 98 percent confidence interval. That is, give me a range of how many Corollas were sold last year, such that the real value lies somewhere in that range with 98 percent confidence. Even if you know nothing about the auto industry, you can fold that ignorance into your answer by choosing a very wide range (e.g., between 10,000 and 10 million).
However, people are systematically overconfident in the quality of their predictions - by at least an order of magnitude, says Taleb. Typically, their 98 percent confidence level prediction is more like a 60 percent confidence level prediction. In other words, if you try this experiment with 100 students
who correctly understand their own state of knowledge, you would expect only about 2 students to choose ranges which don't include the actual value. Instead, what you find is that 40 or so of the ranges will not contain the correct number! (The 2 percent error rate they implicitly claim is a gross underestimate of their actual error rate.)
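To make the arithmetic concrete, here is a minimal Monte Carlo sketch of my own (it is not from Taleb's talk, and the function name and parameters are just illustrative). It simulates a room of respondents whose stated "98 percent" intervals actually contain the true value with some lower probability, and counts how many intervals miss.

```python
# Illustrative sketch, not from the talk: if stated "98%" intervals really
# only capture the truth ~60% of the time, then out of 100 respondents
# roughly 40 intervals will miss, versus the ~2 misses you'd expect from
# well-calibrated respondents.
import random

def count_misses(n_people, true_coverage, trials=10_000):
    """Average number of intervals (out of n_people) that miss the true value,
    when each interval independently contains it with probability true_coverage."""
    total = 0
    for _ in range(trials):
        # Each respondent's interval misses with probability 1 - true_coverage.
        misses = sum(1 for _ in range(n_people) if random.random() > true_coverage)
        total += misses
    return total / trials

print("Calibrated (98% coverage):   ", count_misses(100, 0.98))  # about 2 misses
print("Overconfident (60% coverage):", count_misses(100, 0.60))  # about 40 misses
```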
Taleb claims that the worst performing groups on this kind of exercise (regardless of the prediction requested) are stock analysts and economists, probably because the two groups are selected for a systematic bias toward overconfidence in dealing with noisy data. I wonder how physicists would do? I often stress that in communicating some information to a colleague (e.g., "A neutrino with those properties is ruled out by LEP data"), it is useful to also include a confidence level ("I have thought carefully about the loopholes and have looked at the LEP analysis and am 99% confident what I just said to you is true"). Thus, rather than transmitting a single statement, it is better to transmit the statement plus a confidence estimate. The utility of the pair is dramatically greater than that of the statement alone.
My feeling is that when it comes to discussing the implications of a particular experiment, physicists are trained to accurately understand the confidence intervals. However, when it comes to a question like "How likely is it that supersymmetry solves the hierarchy problem?" I suspect we are as overconfident as any other group in the accuracy of our predictions.