How do we judge the political pundits and economic forecasters who roll up to give us their predictions, only to be proven completely and utterly wrong? A Nobel Prize-winning scientist is calling them to account.
The hottest thing in the social sciences -- the replicability debate -- is bad news for economic forecasters and worse news for the pundits who peddle their analyses through the media.
The debate got started when the economics Nobel Prize winner Daniel Kahneman posted a note on September 26 to a wide range of his colleagues on "questions about priming effects" -- the study of how subtle cues can unconsciously influence thoughts and behaviour. Kahneman (with Amos Tversky) is one of the pioneers of behavioural economics and demonstrated that many of the assumptions underlying economic models of behaviour and efficiency were just plain wrong.
His book Thinking, Fast and Slow ought to be the primer for PR people, political advisers and anyone in the business of forecasting or of trying to change behaviours. In a related vein, Ross Gittins' latest book Gittins' Gospel collects his writings on some of these behavioural economics issues in an Australian context.
Kahneman's note is, as Ed Yong writes in Nature, a "strongly worded call to one group of psychologists to restore the credibility of their field by creating a replication ring to check each other's results". The November issue of Perspectives on Psychological Science asks whether the replicability issue is a crisis of confidence for the discipline and carries a number of contributions examining the implications.
But this debate is not merely academic (well, it is, in the proper sense of the word): it has implications for how we judge the stream of comment on politics, business, economics and other subjects to which we are constantly subjected.
Essentially the basis of science is replicability. You have a theory (or discover something) and publish your results and/or proof. Then others set out to replicate the results or evaluate the proof. If the work is demonstrated to be right, you get academic advancement, Nobel prizes or some references in the literature. Very occasionally you might even make some money.
The social sciences, however, often find it hard to replicate results. Indeed, in some fields such as economics or investment theory the only thing that usually gets replicated is a long history of getting it wrong. Take any annual media round-up of economic forecasts and compare the predictions with the results in 12 months' time and in the majority of cases the only totally predictable thing is that most of the forecasts are wrong.
Pundits as a whole are not much better. As I've previously pointed out, the most far-reaching study of expert opinion was Philip Tetlock's 2005 study Expert Political Judgment: How Good Is It? How Can We Know? Tetlock gathered some 80,000 predictions, each based on rating the probabilities of three alternative outcomes. Analysing the predictions against the actual outcomes, Tetlock found the experts' predictions were no better than if they had simply chosen randomly across the options.
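Tetlock's "no better than chance" finding can be made concrete with a toy simulation (this is illustrative only, not Tetlock's data or method). Forecasts are scored here with the Brier score -- mean squared error between the stated probabilities and what actually happened -- and a confident guesser is compared against a baseline that spreads probability evenly over the three outcomes.

```python
# Toy illustration (not Tetlock's data): compare the Brier score of a
# confident "expert" guesser against a chance baseline that assigns
# equal probability to each of three possible outcomes.
import random

random.seed(42)

def brier(forecast, outcome):
    """Mean squared error between a probability vector and the
    one-hot vector for the outcome that actually occurred."""
    return sum((p - (1 if i == outcome else 0)) ** 2
               for i, p in enumerate(forecast)) / len(forecast)

n = 10_000
chance = [1/3, 1/3, 1/3]   # the "dart-throwing" baseline
chance_score = 0.0
expert_score = 0.0
for _ in range(n):
    outcome = random.randrange(3)      # which outcome actually happened
    pick = random.randrange(3)         # the "expert" confidently backs one
    expert = [0.8 if i == pick else 0.1 for i in range(3)]
    chance_score += brier(chance, outcome)
    expert_score += brier(expert, outcome)

print(f"chance baseline Brier (lower is better): {chance_score / n:.3f}")
print(f"confident guesser Brier: {expert_score / n:.3f}")
```

When the confident picks contain no real information, over-confidence is actively penalised: the guesser's average Brier score comes out worse than the even-spread baseline, which is the scoring logic behind Tetlock's comparison.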
What was also interesting was that "experts in demand were more over-confident than their colleagues who eked out existences far from the limelight". This finding was vividly replicated when Fox News switched from Karl Rove to its back-room number-crunching team to discuss the Ohio calls on election night. Ideological blinkers are a useful guide to replication: I think I could write a Charles Krauthammer op-ed in my sleep on any subject and accurately replicate most of his probable predictions and opinions. He was, of course, another Fox News pundit who got it wrong in his analysis of the election outcome -- but got it wrong predictably. Ironically, Krauthammer is a psychiatrist (a field where replicability of results and theories is often controversial) as well as being a neocon pundit.
Our media, however, tends to be full not of news about replicable results but of pundits full of themselves or of some ideological viewpoint (Come in Spinner recognises that some may criticise him on those grounds too). Some of the pundits are columnists and commentators, but many are academics, business economists, think tank researchers and others whose PR people promote their views to the media. On any subject you can get an instant expert to give you an instant opinion or a convenient op-ed.
In the US at present the debate is also hotting up about the "contest" between the nerds and the pundits in predicting the presidential election outcome, with much focus on Nate Silver, who "got it right" in "contrast" to the pundits. What is interesting about Silver's predictions and performance is that they could have been replicated by simple arithmetic analysis of the polling data in each state, although his book The Signal and the Noise is an excellent discussion of how you can increase the probability of being right -- with the US housing bubble as a good real-life example -- by avoiding the mistakes in thinking (see Kahneman also) that lead you to wrong conclusions.
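The "simple arithmetic" referred to above can be sketched in a few lines: average each state's recent polls and call the state for whichever candidate leads the average. The poll numbers below are made up for illustration, not real 2012 data.

```python
# Hypothetical poll figures for illustration only -- not real polling data.
# The simple state-by-state arithmetic: average the recent polls in each
# state and call the state for whoever leads the average.
state_polls = {
    # state: list of (candidate_A_share, candidate_B_share) from recent polls
    "Ohio":     [(50, 47), (49, 48), (51, 46)],
    "Florida":  [(49, 49), (48, 50), (50, 49)],
    "Virginia": [(51, 47), (50, 48), (49, 48)],
}

def call_state(polls):
    """Return the leading candidate and their average margin."""
    avg_a = sum(a for a, _ in polls) / len(polls)
    avg_b = sum(b for _, b in polls) / len(polls)
    margin = avg_a - avg_b
    return ("A" if margin > 0 else "B"), margin

for state, polls in state_polls.items():
    winner, margin = call_state(polls)
    print(f"{state}: candidate {winner} leads by {margin:+.1f} points on average")
```

Silver's actual model is considerably more sophisticated (poll weighting, house effects, simulation of uncertainty), but the point stands that a plain polling average in each state already gets you most of the way there.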
And his book highlights the most important thing about punditry and forecasting: we are not really dealing in predictions but in assessing probabilities, all carefully explained to us by Thomas Bayes more than 250 years ago.
*Acknowledgement: the author is grateful for the advice and information provided by John Spitzer -- but all errors are the author's responsibility.