Polling will definitely influence the election outcome on September 14 — but not necessarily in the way you think. Evidence that polling directly influences how people vote is mixed — especially in Australia where nobody wants to be the front-runner.
Opinion polling is a key tool in political campaigning. But it’s also attacked as the bane of media coverage of politics and even a threat to democracy.
Implicit in this criticism is the idea that polling influences the behaviour of politicians and how voters respond to them, and it degrades media coverage. Polling is demonised as a substitute for integrity and core values in politicians, and rigorous coverage of policy by the media. The demonisation is usually stronger among proponents of the side of politics trailing in the polls.
While both public and private polling shapes political strategy and campaigning, does polling influence voters themselves? Will it influence the outcome on September 14? Yes it does, and it will — but not necessarily in the way you might think.
Evidence that polling directly influences how people vote is mixed. One theory — often called the bandwagon effect, or the “contagion” effect in other countries — suggests voters are more likely to vote for a party leading in the polls either because they want to support a winner or because they assume other voters have made an effort to assess competing parties that they themselves haven’t made. It’s a neat theory — and it remains by and large a theory, because evidence to support it isn’t especially strong, particularly in Australia where our compulsory voting system means results from overseas studies don’t necessarily apply.
There’s also a long Australian tradition of politicians desperately seeking to appear as underdogs in elections no matter how big their polling lead; former NSW premier Bob Carr famously insisted on the eve of the 1999 NSW election that it was so close the result wouldn’t be known on the night of the poll, before winning in a landslide with a two-party preferred swing of over 7%.
Moreover, it’s unclear what proportion of the electorate actually pays close attention to polls and is thereby susceptible to whatever influence they might have, although there is some evidence polls can affect strategic voting, where voters support a party other than their first choice because their preferred party is unlikely to win.
What does seem clearer is that polling affects media coverage. In fact, polls and the media are mutually dependent. A polling company without a media outlet struggles to match the influence or profile of polls linked to national media: compare the influence of the prolific Roy Morgan, which makes a virtue of being “the only Australian-owned independent polling company that is not owned by a media organisation”, with that of the far less frequent, Fairfax-carried Nielsen poll. For media companies, which invest in the costly process of polling either by owning a pollster or contracting with one, a poll provides influence and precious column inches for their political journalists.
Polls also assist the media in framing political coverage entirely in the context of who is winning and losing, generating a winning-losing narrative more appealing to readers and audiences than more nuanced public policy reporting. And having invested in polls, media outlets feel obliged to get their money’s worth by dramatising the results, regardless of what they are. Further, there is evidence from Canada that polling directly influences the tone of coverage even at outlets without an editorial agenda. In particular, journalists feel compelled to explain even small changes in polls, including those within a poll’s margin of error, as arising from specific political events or a change in tactics by a party, establishing a narrative even where none exists. Thus rises or falls in polls, even those purely resulting from statistical noise, generate their own positive or negative coverage.
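The scale of that noise is easy to demonstrate. As a rough sketch (the vote share and sample size below are illustrative assumptions, not figures from any actual poll), simulating a series of polls of an electorate whose opinion never changes still produces poll-to-poll movement of a point or two:

```python
import random

random.seed(42)
TRUE_SHARE = 0.50   # assume the "real" two-party vote never moves
N = 1000            # a typical published poll's sample size

# Simulate five fortnightly polls of an unchanging electorate.
polls = []
for _ in range(5):
    sample = sum(random.random() < TRUE_SHARE for _ in range(N))
    polls.append(100 * sample / N)

# Successive results differ even though nothing has changed —
# exactly the kind of "movement" a journalist feels obliged to explain.
print([round(p, 1) for p in polls])
```

Any seed produces the same lesson: with samples of this size, swings of a point or so between consecutive polls are entirely consistent with a static electorate.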
And there are more sources of statistical noise in polls than usually acknowledged. Most political tragics would be aware of sampling error — around 3% for a poll with a sample of around 1000, and 2.2% for polls of around 2000. But Essential Research’s pollster Andrew Bunn points out “there are sources of error which are probably more significant than the theoretical sampling error, including question wording, question order, weighting, late swing (for election forecasts), response errors (or false reporting by respondents), treatment of the ‘don’t knows’ (we assume they give answers similar to those who respond)”.
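Those headline sampling-error figures come from the standard 95% confidence formula for a simple random sample, z·√(p(1−p)/n), evaluated at the worst case p = 0.5. A minimal sketch (the function name is mine, for illustration only):

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error for a simple random sample of size n,
    at proportion p (p = 0.5 is the worst case)."""
    return z * math.sqrt(p * (1 - p) / n)

print(round(margin_of_error(1000) * 100, 1))  # prints 3.1 — the "around 3%" figure
print(round(margin_of_error(2000) * 100, 1))  # prints 2.2
```

Note this is only the theoretical sampling error; as Bunn says, question wording, weighting and the other sources of error he lists sit on top of it and are not captured by any formula.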
In Australia, the relationship between the media and pollsters is even closer than it is elsewhere. Newspoll is half-owned by News Limited (the other owner is global marketing firm Millward Brown). While News Ltd tabloids tend to use Galaxy polls (established by former Newspoll executive David Briggs), Newspoll is without doubt the most influential poll, partly because of its accuracy (it almost perfectly predicted the 2010 two-party preferred result, for example), partly because other media, and especially the ABC, accord it influence, but mostly because of the effort invested by The Australian in attributing significance to its results even when the movement is negligible. Newspoll allows an outlet openly committed to promoting right-wing political agendas to significantly influence the media cycle and national agenda.
Newspoll is also fortnightly, while the AC Nielsen poll carried by Fairfax (Nielsen is a global marketing firm) appears at best monthly. Newspoll thus provides a better assessment of trends; Fairfax journalists are often required to explain apparently dramatic shifts in voter sentiment that have as much to do with the gaps between polls as with actual voter behaviour. However, because of the media platform to which it is linked, Nielsen, too, is influential, particularly given Fairfax will run the results across The Sydney Morning Herald, The Age and The Australian Financial Review, meaning the same poll is dissected repeatedly by different journalists.
In comparison, Essential Research, which Crikey carries, conducts a weekly online poll (Newspoll and Nielsen are phone polls), but averages its voting intention results over two weeks, giving it around 2000 respondents and further smoothing out trends; Essential was the most accurate poll in predicting party primary votes in 2010. Roy Morgan conducts both weekly phone polls and face-to-face polls, and keeps track of them separately; both Morgan and, especially, Newspoll provide excellent interactive online tools for looking at poll results over an extended period.
“Australian polls have an excellent record on elections,” Bunn told The Power Index. “Polling is certainly much easier in Australia due to compulsory voting. We don’t have to try to estimate the voter turnout, which can be very difficult to estimate — it can be significantly affected by weather, for example.”
He offers some simple advice to keep polls in perspective. “Sample size is still important. You’d have to be careful of samples of only a few hundred … Also, who has sponsored the poll — if it was done for a political party or other interest group, you’d have to be a bit wary. The other thing I’m a bit wary of is large short-term shifts between polls — given that a shift of 4% (for example) means that about half a million people have changed their mind, which over a few weeks is not really credible,” he said.
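Bunn’s arithmetic checks out on the back of an envelope. Assuming an enrolled electorate of roughly 13 million (a round figure used here purely for illustration, not an official enrolment count):

```python
# Illustrative only: the electorate size is an assumed round figure,
# not an official enrolment count.
electorate = 13_000_000  # assumed number of enrolled voters
swing = 0.04             # a 4% shift between consecutive polls

voters_changing = int(electorate * swing)
print(f"{voters_changing:,}")  # prints 520,000 — "about half a million"
```

On any plausible figure for the electorate, a genuine 4% shift would require hundreds of thousands of people to change their minds within weeks, which is why such movements are usually better read as noise.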
Ultimately, the impact of polls is closely linked to the media that carry them. It’s not polls and pollsters that wield direct influence over voters; it’s the media outlets that use them.