(Image: ABC)

The claim

The federal government put social media companies on notice after the Christchurch massacre for failing to prevent the shooter’s actions from being broadcast and promoted online.

On March 20, Prime Minister Scott Morrison said in an interview on the Seven Network’s Sunrise program:

If they can geo-target an ad to you based on something you’ve looked at on Facebook within half a second — it is almost like they are reading your mind — then I’m sure they have the technological capability to write algorithms that screen out this type of violent and hideous material at a moment’s notice.

Do social media companies have the capability to write algorithms that can remove hate content within seconds?

RMIT ABC Fact Check investigates.

The verdict

Morrison’s claim is wishful thinking.

YouTube and Facebook already use algorithms to detect hate content, which is then reviewed by people.

But ensuring such material is removed “at a moment’s notice” requires a fully automated approach — something experts told Fact Check is not currently possible.

Experts also dismissed Morrison’s comparison of content-detection systems and targeted advertising, saying the two technologies were completely different.

Still, the data social media companies use to target their advertising can be used to identify, if not the content itself, then the people who share it.

Companies already do this by banning certain groups, although, at least in Facebook’s case, white nationalists have only been targeted since Christchurch.

Experts suggested companies could use methods other than algorithms to prevent harmful content being shared, such as banning, regulating or delaying live streaming.

The role of social media

Social media played a unique role in the Christchurch massacre, with the shooter using Twitter and 8chan to share links to his manifesto, and Facebook to broadcast the shooting in real time for 17 minutes.

Footage of the white-supremacist-inspired attack was then copied and shared across social media.

Speaking with ABC News Breakfast on March 26, Attorney-General Christian Porter said it appeared to have been “well over an hour until Facebook took anything that resembled reasonable steps to prevent replaying of that video”.

Facebook said it first learnt of the broadcast 12 minutes after it ended — or 29 minutes after it began — when a user reported it.

The company removed 1.5 million videos of the attack in the first 24 hours, catching 1.2 million before users saw them.
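Catching copies before anyone sees them generally relies on fingerprinting known videos so re-uploads can be matched automatically. The sketch below is a simplified illustration, not Facebook’s actual system: it uses exact SHA-256 hashing, whereas real matching systems use perceptual hashes that survive re-encoding and cropping.

```python
import hashlib

# Illustrative sketch of fingerprint-based re-upload blocking.
# Plain SHA-256 is a simplifying assumption: it only catches
# byte-identical copies, unlike production perceptual hashing.

known_fingerprints: set[str] = set()

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def register_banned(data: bytes) -> None:
    """Record the fingerprint of a known banned video."""
    known_fingerprints.add(fingerprint(data))

def is_banned(data: bytes) -> bool:
    """Check an upload against the banned-fingerprint set."""
    return fingerprint(data) in known_fingerprints
```

Once a video is registered, any byte-identical upload is rejected instantly; a re-encoded variant would slip past exact hashing, which is why perceptual hashing is used in practice.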

In the same period, YouTube also deleted tens of thousands of videos and suspended hundreds of accounts, a spokeswoman told Fact Check.

“The volume of related videos uploaded to YouTube in the 24 hours after the attack was unprecedented both in scale and speed, at times as fast as a new upload every second,” she said.

Who’s in trouble?

After Christchurch, the government demanded answers from the big three social media companies: Google (which also owns YouTube); Facebook (which also owns Instagram); and Twitter.

Both Facebook and YouTube offer live-streaming services.

Experts told Fact Check it made sense for the government to focus on these companies, as they offered the largest audiences.

The need for speed

Given the prime minister’s clear wish to catch content within seconds, Fact Check takes him as referring to fully automated content screening.

Morrison claimed social media companies should be able to screen content “at a moment’s notice” because they can target users with advertisements “within half a second”.

A day earlier, he justified his position on the premise that companies had the technology “to get targeted ads on your mobile within seconds”.

The government has since passed legislation that could see social media executives jailed and companies fined for failing to take down “abhorrent violent material expeditiously”.

What kind of content?

Fact Check also takes Morrison to be referring to more than just video.

While he referred to “this kind of violent and hideous material” in the Sunrise interview, in a letter to Japanese Prime Minister Shinzo Abe, tweeted the day before, he referred broadly to material by actors who “encourage, normalise, recruit, facilitate or commit terrorist and violent activities”.

On March 19, he said social media companies could write an algorithm to screen out “hate content”.

What were the companies already doing?

Screening content is generally a two-step process.

Material is first identified, or flagged, by machines, users or, in some cases, company employees.

Human reviewers then decide whether it breaks the platform’s rules.
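The two-step process described above can be sketched as a pipeline: an automated filter flags candidate posts, and only flagged posts reach a human review queue. This is a minimal illustration; the keyword blocklist and `Post` class are hypothetical, and real platforms use machine-learning classifiers rather than keyword matching.

```python
from dataclasses import dataclass

# Hypothetical blocklist for illustration only.
FLAGGED_TERMS = {"terror", "massacre"}

@dataclass
class Post:
    post_id: int
    text: str

def machine_flag(post: Post) -> bool:
    """Step 1: an automated filter flags candidate content."""
    return any(term in post.text.lower() for term in FLAGGED_TERMS)

def screen(posts: list[Post]) -> list[Post]:
    """Collect machine-flagged posts into a queue for step 2: human review."""
    return [p for p in posts if machine_flag(p)]
```

The human review step is the bottleneck the prime minister’s claim glosses over: the queue still has to be cleared by people.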

YouTube employs 10,000 reviewers for this, while Facebook employs 15,000.

A spokeswoman for Twitter told Fact Check that humans played a critical role in moderating tweets.

Before the attack, these companies already used algorithms to flag a variety of material that might be called hate content.

Professor Jean Burgess, director of Queensland University of Technology’s Digital Media Research Centre, said platforms had commercial incentives to do so, noting that in 2017 Google lost millions when advertisers discovered their products were being promoted alongside extremist content on YouTube.

YouTube prohibits hate speech and violent or graphic content, among other things, and in the three months to December 2018, the platform removed nearly 16,600 videos promoting violent extremism.

Of the nearly 9 million videos it removed in total over the quarter, 71 per cent were flagged by algorithms.

YouTube said that, thanks to machine learning, “well over 90 per cent of the videos uploaded in September 2018 and removed for violent extremism had fewer than 10 views”.

Facebook also bans hate speech, terrorism and violence.

In the three months to September 2018, it dealt with 15 million items of violent or graphic content, of which 97 per cent was computer-flagged.

Algorithms also flagged 99.5 per cent of content deemed to be promoting terrorism, though just 52 per cent of reported hate speech.

Twitter told Fact Check it also uses algorithms to flag video content based on hashtags, keywords, links and other metadata.
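Metadata-based flagging of the kind Twitter describes checks a video’s hashtags, keywords and links rather than the footage itself. A minimal sketch follows; the blocklists are hypothetical, since Twitter’s actual signals and rules are not public.

```python
# Hypothetical blocklists, for illustration only.
BLOCKED_HASHTAGS = {"#attackfootage"}
BLOCKED_DOMAINS = {"extremist-forum.example"}

def flag_by_metadata(hashtags: set[str], links: list[str]) -> bool:
    """Flag a video from its hashtags and links alone,
    without decoding a single frame."""
    if hashtags & BLOCKED_HASHTAGS:
        return True
    return any(domain in link
               for link in links
               for domain in BLOCKED_DOMAINS)
```

Checking metadata is cheap and fast, which is why it can run at upload time, but it misses content whose metadata gives nothing away.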

In the six months to June 2018, it suspended 205,000 accounts for promoting terrorism, of which 91 per cent were flagged by Twitter’s “internal, proprietary tools” …

Read the rest of this Fact Check over at the ABC

Principal researcher, David Campbell

[email protected]


© RMIT University 2019