Sunday, January 9, 2011

The uselessness of expert forecasts

[Jerker] Denrell and [Christina] Fang took predictions from July 2002 to July 2005, and calculated which economists had the best record of correctly predicting “extreme” outcomes, defined for the study as either 20 percent higher or 20 percent lower than the average prediction. They compared those to figures on the economists’ overall accuracy. What they found was striking. Economists who had a better record at calling extreme events had a worse record in general. “The analyst with the largest number as well as the highest proportion of accurate and extreme forecasts,” they wrote, “had, by far, the worst forecasting record.” ...

Their work is the latest in a long line of research dismantling the notion that predictions are really worth anything. The most notable work in the field is “Expert Political Judgment” by Philip Tetlock of the University of Pennsylvania. Tetlock analyzed more than 80,000 political predictions ventured by supposed experts over two decades to see how well they fared as a group. The answer: badly. The experts did about as well as chance. And the more in-demand the expert, the bolder, and thus the less accurate, the predictions. ...

There’s no great, complex explanation for why people who get one big thing right get most everything else wrong, argues Denrell. It’s simple: Those who correctly predict extreme events tend to have a greater tendency to make extreme predictions; and those who make extreme predictions tend to spend most of the time being wrong — on account of most of their predictions being, well, pretty extreme.
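Denrell's selection effect can be illustrated with a toy Monte Carlo simulation (my construction, not the study's data; the distributions and the ±0.1 "hit" tolerance are arbitrary assumptions). A forecaster who habitually makes extreme predictions catches more extreme outcomes, yet is wrong far more often overall:

```python
import random

random.seed(0)

def simulate(n_rounds=10000, extreme_cutoff=1.2, tolerance=0.1):
    """Compare a moderate forecaster against a habitually extreme one.

    The true outcome is usually near 1.0 and occasionally extreme.
    The 'moderate' forecaster predicts close to the mean; the
    'extreme' forecaster scatters predictions widely around it.
    """
    results = {"moderate": {"hits": 0, "extreme_hits": 0},
               "extreme":  {"hits": 0, "extreme_hits": 0}}
    for _ in range(n_rounds):
        truth = random.gauss(1.0, 0.15)
        preds = {"moderate": random.gauss(1.0, 0.1),   # cautious, near consensus
                 "extreme":  random.gauss(1.0, 0.5)}   # bold, far from consensus
        for style, pred in preds.items():
            hit = abs(pred - truth) < tolerance
            results[style]["hits"] += hit
            # Credit for correctly calling an extreme outcome
            if truth > extreme_cutoff and hit:
                results[style]["extreme_hits"] += 1
    return results

r = simulate()
# The extreme forecaster calls more extreme events correctly...
print(r["extreme"]["extreme_hits"], ">", r["moderate"]["extreme_hits"])
# ...but has a much worse overall record.
print(r["moderate"]["hits"], ">", r["extreme"]["hits"])
```

No skill is modeled at all: both forecasters draw blindly from the same mean. The extreme forecaster's edge on extreme events is purely a consequence of making extreme calls constantly, which is exactly the mechanism Denrell describes.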
--Joe Keohane, Boston Globe, on why we should ignore soothsayers
