The Value of Forecasting

“The trouble with the world is that the stupid are cocksure and the intelligent full of doubt.”
– Bertrand Russell

Jonah Lehrer recently asked: Do Political Experts Know What They’re Talking About? He talks with Dr. Philip Tetlock, who is well known for his work assessing the forecasting abilities of professionals paid to predict the future.

After Tetlock tallied up the data, the predictive failures of the pundits became obvious. Although they were paid for their keen insights into world affairs, they often performed worse than random chance. Most of Tetlock’s questions had three possible answers; the pundits, on average, selected the right answer less than 33 percent of the time. In other words, a dart-throwing chimp would have beaten the vast majority of professionals.
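To make that baseline concrete: random guessing on three-option questions converges on roughly one-in-three accuracy, which is the bar the pundits failed to clear. Here is a minimal simulation sketch (my own illustration with arbitrary numbers, not from the article or Tetlock’s data):

```python
import random

# Minimal sketch: simulate a "dart-throwing chimp" guessing at random
# on questions that each have three possible answers.
random.seed(0)

NUM_QUESTIONS = 1000  # arbitrary number of questions for the illustration

# Answer 0 is arbitrarily designated the "correct" one for every question.
correct = sum(1 for _ in range(NUM_QUESTIONS) if random.randrange(3) == 0)

print(f"Random-guess accuracy: {correct / NUM_QUESTIONS:.1%}")  # roughly 33%
```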

Nassim Taleb (of The Black Swan) addresses some of the same questions, and he concludes that accurate prediction is fundamentally hard, if not impossible, which makes him one of the radical skeptics mentioned in the Lehrer interview. Very smart people can be “fooled by randomness” into seeing predictable patterns where none exist. And the laws of statistics guarantee that someone, somewhere, will get lucky simply by chance.
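The “someone will get lucky” point is easy to see with a small simulation. The sketch below (again my own illustration, with arbitrary counts of forecasters and questions) scores many purely random forecasters on three-option questions: the average hovers near chance, but the best of them looks impressively skilled.

```python
import random

# Minimal sketch: among many forecasters who guess at random on three-option
# questions, the top scorer can look skilled purely by luck.
random.seed(0)

NUM_FORECASTERS = 1000  # arbitrary
NUM_QUESTIONS = 50      # arbitrary

def random_score() -> float:
    """Fraction correct for one forecaster guessing uniformly at random."""
    return sum(random.randrange(3) == 0 for _ in range(NUM_QUESTIONS)) / NUM_QUESTIONS

scores = [random_score() for _ in range(NUM_FORECASTERS)]
print(f"Average accuracy: {sum(scores) / len(scores):.1%}")  # close to 33%
print(f"Best accuracy:    {max(scores):.1%}")                # often 50% or more
```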

Theory plays a disproportionate role in social science, and it’s arguable whether a social science theory can even be disproved. It is vanishingly unlikely that geocentrism will make a comeback for astrophysicists, but can the same be said of mercantilism for policymakers?

This points to a problem inherent in social science. Whereas the hard sciences have the benefit of laboratory experimentation, history gives us a sample size of one. We cannot go back and run counterfactuals to verify what works and what doesn’t. This is why people are still arguing about what caused the financial crash three years ago, the Great Depression seventy years ago, and even the fall of Rome, let alone about the best methods to avoid a relapse.

Taleb argues for a society that does not base its decisions on such predictions and that makes itself resilient to the once-in-a-hundred-years events that tend to pop up every decade. What use is a fifty-year projection that must be revised yearly?

Dr. Tetlock’s decades of work suggest that forecasters who rely on a top-down, theory-driven approach are less accurate than those who are self-critical and work bottom-up. Whether forecasting can be learned or improved remains an open question. I’m part of the four-year IARPA study mentioned in the article, the most comprehensive experiment on forecasting yet. It will be interesting to see the results.
