This year’s state elections seem to have brought the misinformation and “fake news” playbook to Australian politics.
In Tasmania, we saw allegations (denied by gaming lobby Love Your Local) that pro-pokies campaigners were creating fake social media accounts to troll poker machine-free venues with misleading negative comments about food and service. Meanwhile a Liberal adviser was forced to resign after it was discovered she was targeting opponents through a fake Facebook account.
Political messages — whether pro-pokies (Jobs! Services!), or anti-Labor — from unknown users were proliferating within community Facebook groups, and posts and comments shared between family and friends. Their fundamental localism meant they were precisely the sort of posts that the site’s algorithm is programmed to prioritise.
Most of this kind of fakery remains out of sight. The comments, shares and likes simply vanish into individual news feeds. But every now and then, we get a glimpse like this — and it raises some big questions about what’s going on behind the scenes.
Just how this information ecosystem works is explained in a definitive MIT study released this month, published in Science as The Spread of True and False News Online. It confirms Jonathan Swift’s aphorism: “Falsehood flies and truth comes limping after.”
After examining 126,000 stories, the Science paper concludes:
Falsehood diffused significantly farther, faster, deeper, and more broadly than the truth in all categories of information, and the effects were more pronounced for false political news than for false news about terrorism, natural disasters, science, urban legends, or financial information.
And, the study says, it’s people — not bots, Russian or otherwise — who do the spreading. This compounds the damage, because people trust news they receive from friends or family much more than they do news from institutional media.
There’s an old journalists’ joke: “Never let the facts get in the way of a good story”. It seems people have taken this to heart when deciding what to share.
Bots aren’t totally off the hook, however. They are, after all, programmed to mimic us — often the worst of us. As Zeynep Tufekci wrote of YouTube in The New York Times last week: “Its algorithm seems to have concluded that people are drawn to content that is more extreme than what they started with — or to incendiary content in general.” So you start with Donald Trump rallies and end up with the Ku Klux Klan.
The Tasmanian Integrity Commission has said it intends to investigate at least some aspects of social media use during the state election. It’s not quite the Mueller inquiry into the US election, but that sort of deeply local look may tell us a lot, particularly if the commission can secure the active cooperation of the social media platforms. Hello, Facebook!
Commercialised fake news also made a brief appearance in South Australia last month, when two candidates — one Liberal, one SA Best — became the apparently random subjects of a site that drops a candidate into an unrelated event to generate a fake story with just enough veneer of truth to be plausible, entertaining and, most importantly, shareable.
The site makes money through programmatic advertising drawn in by the hits and shares the false story generates.
So what effect does all this have? Premier Hodgman assures Tasmanians that the pokies were irrelevant in his election — it was the economy, stupid! We can’t be so sure.
Eighteen months into the Trump presidency, we still don’t know the impact of fake news on the US election. It seems that even Facebook — the primary vector — doesn’t know. But the growing body of academic and journalistic research tells us that the power of social media is weaponised through the fake news playbook for a simple reason: it works.