In the slippery world of political statistics, there are polls, slops and worms.

Polls are conducted using an established method in which all voters theoretically have an equal chance of being interviewed, their participation is a matter of random chance, and they get to give their opinions only once per poll.

This is the way the big polls operate, and their results, 95 per cent of the time, will lie within about two or three percentage points of what you would get if you surveyed the entire population of voters. This spread is called sampling error.
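That two-to-three-point claim can be checked with the standard formula for the margin of error of a sample proportion; here is a minimal sketch (the function name `margin_of_error` is just illustrative):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    # Half-width of a 95% confidence interval for a sample proportion.
    # z = 1.96 is the 95% quantile of the normal distribution;
    # p = 0.5 is the worst case, giving the widest interval a poll
    # of size n can produce.
    return z * math.sqrt(p * (1 - p) / n)

# A typical national poll of about 1,100 respondents:
print(round(margin_of_error(1100) * 100, 1))  # about 3.0 points
```

With samples of 1,000 to 2,000 voters, the worst-case margin falls between roughly two and three points, which is where the "two or three per cent" rule of thumb comes from.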

These polls are a known quantity and in Australia have a generally good record of accuracy.

Slops get their name from an acronym – Self-selecting Listener Opinion Polls – invented when radio talkback programs began to invite listeners to lodge an opinion. These are entirely useless for anything other than entertainment.

Political parties and other lobby groups quickly cottoned on to the fact that they could arrange phone-ins by their supporters, thus distorting the results completely. Your typical slop has no controls over this, so you don’t know whether your 1000 responses are from one person 1000 times or 1000 people once each or something in between.

Online “surveys” run by the big media companies are in this category – though not so grossly uncontrolled as the original slops used to be. They do have rudimentary controls, typically by ensuring that only one response can be received from any one IP address in any one survey.

But they have no way of stopping the old problem of organisations gingering up their members to vote en masse.

Again you really don’t know anything about the sample – except that it is almost certainly not representative of the voting population.

Once more, these so-called surveys exist for entertainment purposes. They should not be taken seriously.

For example, we see extraordinary differences in the answer to ‘Who won the Great Debate?’ The figures were as follows:



[Table: Rudd’s percentage of the “Who won the Great Debate?” vote as reported by the Daily Telegraph, the Herald Sun, The Age and the Australian (Core Data); the figures themselves are missing here.]
Two probable sources of distortion are obvious: the political predispositions of the readers, and their socio-economic status. Among readers of the two conservative blue-collar papers – the Telegraph and the Herald Sun – only 40-something per cent have Rudd winning. Among readers of the white-collar papers, of varying political hues, more than 70 per cent have Rudd winning.

It is all nonsense, but it adds colour and movement. The effect of those two sources of distortion can only be guessed at – and the existence of the distortions is itself only a hypothesis.

The media organisations that run these things don’t take them seriously and don’t expect others to. They are just another way of engaging with the audience.

As for the worm, the idea is that the audience is drawn at random from the population of swinging voters. The term “swinging voters” can be defined in many ways, but the one we have used is:

“A voter who has given his or her first preference vote to different parties in the past two House of Representatives elections.” By definition, this excludes first-time and second-time voters.

If the audience of swinging voters – however defined – has been assembled by a random process, then the results shown by the worm have some of the features of a poll, and therefore can be taken more seriously than a slop.

The big limitation is that the audience is so small – perhaps about 100. The sampling error for a random sample of 100 is plus or minus 10.3 per cent.

This time the worm gave the debate to Rudd by a whopping 65 to 27 per cent – well outside even a big sampling error.
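The arithmetic is easy to verify with the same normal-approximation formula. The simple textbook version gives roughly plus or minus 10 points for a sample of 100 (slightly more conservative conventions land near the 10.3 quoted above), and the 65–27 result clears it comfortably:

```python
import math

n, p = 100, 0.5                      # worst case: an even 50-50 split
moe = 1.96 * math.sqrt(p * (1 - p) / n)
print(round(moe * 100, 1))           # roughly 9.8 points

# Rudd's 65 per cent sits 15 points above an even split,
# well beyond that sampling error.
print(65 - 50 > moe * 100)           # True
```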

But there are other problems with the worm, not the least of which are lapses of concentration by the respondents and the opportunity for people to influence one another’s responses.

Footnote: The Adelaide Advertiser today features a poll article showing a landslide win for Labor in Hindmarsh. It says the poll was conducted on Monday evening among 714 voters in Hindmarsh, and tells us nothing more: not who conducted it, nor what questions were asked.

Not nearly good enough.