There’s real news about fake news this week as regulators around the world come to grips with information pollution on major digital platforms, while big tech in turn tries to slough off its responsibilities.
This week The Washington Post pinned down what’s getting in the way of meaningful action: it’s not (just) bad actors and cranky uncles. It’s the outrage factor that powers the Facebook algorithm as it rewards “engagement”.
It’s not just the algorithmic ghost in the machine. The Wall Street Journal reports that Facebook management has been putting its thumb on the news scale. The company’s 2017 pivot away from news particularly targeted progressive sites (like Mother Jones) while exempting right-wing voices (like The Daily Wire). The WSJ suggests this was a deliberate design choice to survive the Trump era.
Right now, the company seems to be getting its algorithm ready for a US Democratic administration as it bans Holocaust denial and conspiracy groups like QAnon, restricts political ads and throttles access to the Hunter Biden “scandal” promoted by News Corp.
Expect this to be canvassed when Facebook CEO Mark Zuckerberg goes back before the US Congress tonight (Australia time) to answer charges of anti-conservative bias in social media algorithms.
Amid all this drama, Australian regulators and big tech’s local representatives have been quietly ploughing through the process of “public consultation” for an Australian code of practice for the platforms. (Last summer’s bushfire fake news spread demonstrates why an Australian perspective matters.)
Along with its big tech partners Google and Twitter, Facebook has formed the Digital Industry Group Inc (“DIGI”, get it?) and released a draft code on how to minimise disinformation and signal credibility for professional news content.
The idea was recommended in the 2019 digital platforms inquiry by the Australian Competition and Consumer Commission. In response, the federal government asked the platforms to develop a voluntary code. (Light-touch regulation through codes of practice has been popular with both governments and tech since the European Commission adopted its code in 2018.)
In June, the Australian Communications and Media Authority (ACMA) released a position paper, and DIGI has now returned with its take for public consultation, with a plan to implement the code by the end of this year.
There may yet be a wrinkle. The DIGI draft seems to fall well short of the expectations in the ACMA paper. No surprise: the regulator may be responding to public concerns about misinformation, but the platforms are more meta, responding to concerns about the concerns.
“We’re just a platform for our users,” say the social media platforms, digging in on the hill of freedom of expression (as they do again in this draft code). They’re triangulating around three points: limiting any obligations to the barest of legal necessities; demonstrating enough activity to hold off government regulation; managing individual crises of public disquiet.
There’s broad agreement that disinformation is not just “wrong”, it’s “inauthentic”. Drawing on the work of the Berkman Klein Center at Harvard, that’s defined as mis- or mal-information driven by manipulative actors (states, political groups, politically aligned media) using deceptive behaviour (bots, troll farms) that causes harm (to public health, confidence in democracy etc).
The DIGI code stresses that the platforms should not be compelled to remove content “solely on the basis of perceived falsity if the content would not otherwise be unlawful”.
In other words, don’t expect the internet to be fact-checked.
In the draft code, the platforms commit to keep doing what they’re doing now: use AI to track, tag or block disinformation, upgrading to human intervention (not necessarily in Australia) as the information goes viral; target fake accounts and automated bots that are designed to spread disinformation; turn off the ad money tap to fake news factories.
It’s silent on remedies when these measures fail or are too slow, although that was the core of the recommendation in the original ACCC report.
It’s another step in the tech-slog as Australia’s regulators try to cajole the platforms to do the one thing they’ve long been resisting: take responsibility for the information they distribute.