(Image: AAP/Darren England)

In the wake of the El Paso killings, Home Affairs Minister Peter Dutton gave a hint of the government’s response to the ACCC call for a new regulatory framework to cover both the new social media platforms and traditional broadcast and print and online media.

The good news? The government has come a long way from its heady rhetoric about “the right to be a bigot”. Now, Dutton is foreshadowing new powers to force the takedown of “hate speech” — an idea the right would once have eschewed as “18C on steroids”.

The bad? We’re likely to get more of the political same: more restrictions on content in the name of security, with the political theatre of the Labor Party being wedged on the way through.

Since the Facebook live-streaming of the Christchurch shooting in March, the Morrison Government has been focussed on violent videos. It rushed legislation through the Parliament before the election, introducing criminal penalties for failure to take down abhorrent violent material. Morrison took a proposal for global restraints to this year’s G20 meeting.

Last month, the ACCC report on digital platforms recommended the government develop a “platform-neutral regulatory framework” providing consistent oversight of content production or delivery, covering media businesses, publishers, broadcasters and digital platforms.

The only previous indication of government intent was the suggestion by Communications Minister Paul Fletcher that minimum Australian content quotas be extended to streaming services like Netflix or YouTube.

Dutton used this ACCC recommendation to justify restraints, saying to Seven’s Sunrise: “They are involved in spreading these hate messages and we need to stop it. The depth of hatred … it’s proliferated on the internet because the media platforms are operating on different rules than traditional media companies have.”

Regulating racism — and distinguishing it from “racially charged” speech — is complicated by the willingness of politicians to embrace racial signifiers (“African gangs”) and by the media’s willing platforming of once-unacceptable views, such as Hanson’s contracted appearances on Sunrise, or the US extremists welcomed onto Q&A and, notoriously, Four Corners.

Gizmodo has identified 151 online companies that service white nationalism, although Cloudflare de-platformed 8chan after the El Paso killings. Regulation would require changes to the US “safe harbour” provisions that protect service providers. (These provisions are reflected in the Australia-US Free Trade Agreement.)

At least the tech platforms are trying — or trying to appear as though they’re trying. Facebook’s Civil Rights Audit estimates that about two-thirds of removed hate speech was detected by algorithm, and recommended further human intervention to remove “grey” matter.

In June, Google announced YouTube would bar “hateful and supremacist” content, likely to mean the removal of thousands of channels. YouTube is also adjusting its recommendation algorithm to reduce links to racist content, after the algorithm was found to be steering viewers toward extremist material by design.

Regulating hate speech online in the US is restrained by one big question: what about Trump? His anti-immigrant “invasion” rhetoric has been used in 2200 Facebook ads since May 2018. The trope inspired the El Paso killer, according to the (unlinked here) manifesto posted on the extremist message board 8chan.

At the same time, it’s become accepted wisdom on the US right that anti-conservative bias is baked into the platforms. Last week, US media reported that prolific tweeter Trump was drafting an executive order for regulation targeting that bias. As the weekend’s CPAC “send her back” chant showed, US talking points just about always end up here, so, expect that one as well.

In Australia, News Corp has been withdrawing from Twitter, with Sky News CEO Paul Whittaker announcing that it will no longer embed video in its tweets, relying instead on partnerships with YouTube, Microsoft News, Facebook and Taboola “to monetise our trusted news content”.

The government’s real focus is regulating encrypted content on US platforms such as WhatsApp. Although the US administration (and its security services) seem willing, legislative enthusiasm is weak.

Around the world, governments have used blunter tools: blocking individual providers such as Facebook, or slowing or shutting down the internet entirely. In 2018, such shutdowns happened 196 times, double the number from the year before, most commonly in India. WhatsApp is already blocked in many international airports.

That’s something the Australian government can do. Expect that, too, to creep into the debate.