You might have seen the health debate yesterday. You may even have been tragic enough to have both Channel Seven and Channel Nine broadcasts on at the same time to compare their respective real-time audience response tracking of the debate – The Pollie Graph for Seven and The Worm for Nine. The first thing you may have noticed, apart from how truly tragic you were for doing such a thing, was how the two tracking lines behaved very, very differently. Channel Nine’s Worm had very little volatility; it was very inertial in its behaviour. When audience reactions changed from positive to negative, they did so in a relatively gradual manner.

Channel Seven’s Pollie Graph, on the other hand, reacted to every sentence Abbott and Rudd uttered. It was more volatile and far more responsive to the moment.

The differences in behaviour between the two tracking systems can be explained by the differences in both the technology and the samples used by each channel.

Channel Nine’s Worm used market research firm Ekas to source its participants. Ekas runs a large online panel from which self-identified undecided voters were selected to man the Worm handsets – with each participant paid $50 to attend the shindig. The actual audience response technology, however, was provided by a different company, IML Australia.

Channel Seven, on the other hand, used Roy Morgan not only to source participants, but to provide the Roy Morgan Reactor technology that did the audience response tracking. The people Morgan selected to participate were a cross-section of all voters (not just the undecided voters Channel Nine used) that approximately reflected the current state of voting intentions. These folks too were paid $50 to participate.

The first difference between the two audience response systems that helps explain their differing behaviour during the debate is the sample – undecided voters vs. a partisan-weighted cross-section of all voters.

The second difference is even larger, and goes to the technology involved – particularly the design of the handsets used to track the responses of participants.

Morgan Reactor uses a handset with a dial on it. You just hold the handset, watch the debate and rotate the dial clockwise when your reaction is positive and anti-clockwise when it’s negative. The further you rotate the dial in either direction, the stronger the magnitude of your positive or negative response. Not only is it idiot-proof, but if you imagine using such a thing for a second, you can probably picture how turning the dial becomes a natural extension of what you’re doing – you don’t really have to think about it, it just happens in the background.

The IML technology utilised by Channel Nine, on the other hand, is button-based. Each handset carries nine buttons, each representing a different strength of positive or negative reaction. It is much less intuitive to use, and more attention needs to be paid to the handset to ensure that the right button is being pressed at any given time.

As a consequence, the dial technology is much more responsive in terms of immediacy of reaction (a quick twist of the dial when something grabs your attention), while the button technology is more inertial in registering changes, since you only press a button after you’ve found it – and only once your opinion has changed enough to bother.
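That behavioural difference can be sketched with a toy simulation. To be clear, this is purely illustrative and assumes nothing about either vendor’s actual hardware or aggregation: here dial participants are modelled as re-reading their reaction every second, while button participants only press when their opinion has drifted far enough from their last press to be worth reaching for a button – and even then, only when they happen to glance down at the handset.

```python
import random

random.seed(42)

def simulate(seconds=300, participants=50):
    # A shared underlying sentiment signal: a random walk clipped to [-1, 1].
    sentiment, s = [], 0.0
    for _ in range(seconds):
        s = max(-1.0, min(1.0, s + random.uniform(-0.1, 0.1)))
        sentiment.append(s)

    # Dial participants: re-read the signal every second (plus a little
    # personal noise), so the aggregate line follows every twitch of opinion.
    dial_trace = []
    for s in sentiment:
        readings = [s + random.gauss(0, 0.1) for _ in range(participants)]
        dial_trace.append(sum(readings) / participants)

    # Button participants: each holds their last press, glances at the
    # handset only occasionally, and presses only when the gap between
    # opinion and last press exceeds a threshold - so the aggregate line
    # drifts with inertia, catching up in trickles.
    held = [0.0] * participants
    button_trace = []
    for s in sentiment:
        for i in range(participants):
            if random.random() < 0.2 and abs(s - held[i]) > 0.3:
                held[i] = round(s * 4) / 4  # snap to one of nine levels
        button_trace.append(sum(held) / participants)
    return dial_trace, button_trace

def volatility(trace):
    # Mean absolute second-to-second change in the aggregate line.
    return sum(abs(b - a) for a, b in zip(trace, trace[1:])) / (len(trace) - 1)

dial, button = simulate()
print(f"dial volatility:   {volatility(dial):.4f}")
print(f"button volatility: {volatility(button):.4f}")
```

Run it and the dial line comes out consistently jumpier than the button line – the same Pollie-Graph-vs-Worm contrast, generated from nothing but the input mechanics.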

What does this mean for the debate?

Firstly, if the Ekas-provided sample of undecided voters for Channel Nine was a good estimate of the true nature of undecided voters around the country, Tony Abbott is in deep shit. Initial responses to Kevin Rudd were much more positive than they were for Abbott, and general audience responses across time were much more positive for Rudd than they were for Abbott, regardless of what each leader happened to be talking about at the time.

Negative turning points for Abbott were also much sharper than they were for Rudd, suggesting that even with the gradualism of the button technology, when each leader said something that the audience didn’t like, they tended to give Rudd the benefit of the doubt until they heard him out (with trickles of negative button presses coming in as Rudd’s answer progressed). When Abbott said something the audience didn’t like, they all pressed their negative buttons early and en masse.

That suggests that undecided voters have a relatively positive predisposition to Rudd and a very short tolerance for Abbott.

More importantly, on the Roy Morgan Reactor results (which I think came from the superior piece of kit for measuring political reaction), the immediacy of the responses told us a few interesting things.

  • The public doesn’t like Abbott’s jokes and theatrics. Whenever he tried to crack a joke, the audience response fell in a ditch regardless of the level it was at before the joke.
  • When Rudd talks about the boring detail of process, far from turning the public off as some journos opine, the public reaction is actually positive – and not just a little bit positive, but substantially positive.
  • When Rudd went negative on Abbott, he usually wasn’t punished for it in terms of audience response. However, when Abbott went negative on Rudd, Abbott nearly always elicited a strong, negative reaction from the audience.
  • Rudd has much more generic goodwill from the public than does Abbott. As soon as Rudd started answering any question, the audience response started out in net positive territory. When Abbott started answering any question, the audience response started out around zero – sometimes a little positive, sometimes a little negative.

One of the most important things it demonstrated – and something that the polling has been suggesting for a while now – is that Abbott has very little political room to move and his support appears to be generally soft.

Roy Morgan Research will soon put up on their site all sorts of goodies about this health debate, including the real-time reactions by voting intention cross-tabs and, hopefully, by gender. When they turn up, we’ll have a good look at them – there’ll be some interesting stuff in there to chew on.


Andrew Bunn, the Research Director for Essential Media, has chimed in with something interesting from their own polling that’s particularly relevant here, especially to the Channel Nine Worm and its undecided-voter sample.

We have found in polls that “undecided” voters (which actually covers a range of positions – from engaged “swinging” voters to those who pay no attention to politics and simply don’t care who they vote for) are also much more likely to give “don’t know” responses to other questions. So it may be that they are also less likely to give strong responses with the meters regardless of the technology.


Found a pic of the IML handset (via Sky):