Yesterday’s Financial Review has a report about research to be
presented at the Economic Society of Australia’s conference advancing
the claim that betting markets can predict election results. As the
opening sentence of the Fin’s report has it, “Polls are useless
in predicting the outcomes of Australian elections but it would appear
the bookies are on the money.” (The Fin’s writer has evidently
never been on a racecourse; if betting markets are accurate that shows
the punters are on the money, not the bookies.)
An earlier version of the paper, by Andrew Leigh and Justin Wolfers, is available here and yesterday’s report appears to add nothing new. I’m not quite such a sceptic about this as Peter Brent at Mumble,
who recently referred to “those silly articles about betting punters
always getting it right,” but I do think Leigh and Wolfers are
considerably overstating their claims.
Punters do get most elections right. But that’s not difficult; most
election results are pretty obvious. Neither of the Howard government’s
last two victories, which provide Leigh and Wolfers with most of their
data, could be described as unexpected.
At the micro level, the betting
market’s performance was less impressive; out of 33 individual seats on
which Centrebet gave odds last year, the final prices forecast the
winner correctly in only 24. And in the recent New Zealand election,
Labour was returned even though Centrebet’s final odds had the
National Party as favourite.
Comparing the betting market with the opinion polls is a tricky exercise. As John Quiggin
has pointed out, it’s misleading to treat it as a contest between the
two, since punters have the benefit of the poll results before they
bet. More significantly, no-one knows how to account for the margins of
error in the opinion polls.
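The sensitivity matters because a small change in the assumed margin of error produces a large change in the implied probability of victory. A minimal sketch under a normal model of poll error (the poll figures and error sizes here are hypothetical illustrations, not numbers from Leigh and Wolfers):

```python
from math import erf, sqrt

def implied_win_probability(poll_share, std_error):
    """P(true two-party-preferred vote > 50%), treating the poll as a
    normally distributed estimate with the given standard error."""
    z = (poll_share - 0.50) / std_error
    return 0.5 * (1 + erf(z / sqrt(2)))  # standard normal CDF at z

# Sampling error alone: a poll of ~1000 voters has a standard error
# of roughly 1.6 percentage points, so a 53% reading looks near-certain.
sampling_only = implied_win_probability(0.53, 0.016)

# Allowing for bias and voter movement as well (say ~4 points of total
# error), the same 53% reading implies a much more modest probability.
total_error = implied_win_probability(0.53, 0.04)

print(round(sampling_only, 3), round(total_error, 3))
```

The same poll figure yields a near-certain forecast under the sampling-error-only assumption but a far more equivocal one once other error sources are admitted, which is the mechanism behind the "extreme volatility" discussed below.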
Leigh and Wolfers refer to “the extreme
volatility of the implied probability of a Coalition victory suggested
by the polls” (ranging, amazingly, from 0.7% to 98.3%), but they get
that result only by assuming that the only inaccuracy in the polls is
sampling error. In fact we know that other factors, such as bias and
voter movement, increase margins of error and therefore reduce the
volatility of these implied probabilities.
Leigh and Wolfers say pollsters should allow for this by reporting
their results as probabilities, but it is not clear that this is what
readers want. For example: if Poll A puts the government on 50% and
Poll B puts it on 60%, and the government is then returned with a
one-seat majority, we can say Poll A did a better job.
But if Poll A said there
was a 50% chance of the government being returned, and Poll B said a
60% chance, how do we know which was more accurate? (Mathematically
there is a way of answering this, but only by making the same
restrictive assumption about sampling error.)
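One standard way of scoring probability forecasts against 0/1 outcomes (whether or not it is the method alluded to here) is a proper scoring rule such as the Brier score. A minimal sketch using the hypothetical Poll A and Poll B above:

```python
def brier_score(forecast_prob, outcome):
    """Squared error between a probability forecast and the actual
    outcome (1 = government returned, 0 = not). Lower is better."""
    return (forecast_prob - outcome) ** 2

# The government is returned, so the outcome is 1.
poll_a = brier_score(0.50, 1)  # = 0.25
poll_b = brier_score(0.60, 1)  # ~0.16, the better score
```

On this single election Poll B scores better, but one observation says little about which forecaster is more accurate in general; ranking them reliably requires many elections, which is exactly the data the Australian record is short of.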
Now that the betting markets are established, it would be foolish to
ignore them when forecasting elections. But it would be equally foolish
to put too much faith in them.