The federal government put social media companies on notice after the Christchurch massacre for failing to prevent the shooter’s actions being broadcast and promoted online.
On March 20, Prime Minister Scott Morrison said in an interview on the Seven Network’s Sunrise program:
If they can geo-target an ad to you based on something you’ve looked at on Facebook within half a second — it is almost like they are reading your mind — then I’m sure they have the technological capability to write algorithms that screen out this type of violent and hideous material at a moment’s notice.
Do social media companies have the capability to write algorithms that can remove hate content within seconds?
RMIT ABC Fact Check investigates.
Morrison’s claim is wishful thinking.
YouTube and Facebook already use algorithms to detect hate content, which is then reviewed by people.
But ensuring such material is removed “at a moment’s notice” requires a fully automated approach — something experts told Fact Check is not currently possible.
Experts also dismissed Morrison’s comparison of content-detection systems and targeted advertising, saying the two technologies were completely different.
Still, the data social media companies use to target their advertising can be used to identify, if not the content itself, then the people who share it.
Companies already do this when banning certain groups, although, at least in Facebook's case, white nationalists have only been targeted since Christchurch.
Experts suggested companies could use methods other than algorithms to prevent harmful content being shared, such as banning, regulating or delaying live streaming.
The role of social media
Social media played a unique role in the Christchurch massacre, with the shooter using Twitter and 8chan to share links to his manifesto, and Facebook to broadcast the shooting in real time for 17 minutes.
Speaking with ABC News Breakfast on March 26, Attorney-General Christian Porter said it appeared to have been “well over an hour until Facebook took anything that resembled reasonable steps to prevent replaying of that video”.
Facebook said it first learnt of the broadcast 12 minutes after it ended — or 29 minutes after it began — when a user reported it.
The company removed 1.5 million videos of the attack in the first 24 hours, catching 1.2 million before users saw them.
In the same period, YouTube also deleted tens of thousands of videos and suspended hundreds of accounts, a spokeswoman told Fact Check.
“The volume of related videos uploaded to YouTube in the 24 hours after the attack was unprecedented both in scale and speed, at times as fast as a new upload every second,” she said.
Who’s in trouble?
After Christchurch, the government demanded answers from the big three social media companies: Google (which owns YouTube); Facebook (which owns Instagram); and Twitter.
Both Facebook and YouTube offer live-streaming services.
Experts told Fact Check it made sense for the government to focus on these companies, as they offered the largest audiences.
The need for speed
Morrison claimed social media companies should be able to screen content "at a moment's notice" because they can target users with advertisements "within half a second".
A day earlier, he had justified his position on the premise that companies had the technology "to get targeted ads on your mobile within seconds".
Given the prime minister's clear wish to catch content within seconds, Fact Check takes him as referring to fully automated content screening.
The government has since passed legislation that could see social media executives jailed and companies fined for failing to take down “abhorrent violent material expeditiously”.
What kind of content?
Fact Check also takes Morrison to be referring to more than just video.
While he referred to "this type of violent and hideous material" in the Sunrise interview, in a letter to Japanese Prime Minister Shinzo Abe, tweeted the day before, he referred broadly to material by actors who "encourage, normalise, recruit, facilitate or commit terrorist and violent activities".
What were the companies already doing?
Screening content is generally a two-step process.
Material is identified, or flagged, by machines or users, and in some cases company employees.
Human reviewers then decide whether it breaks the platform's rules.
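The two-step process described above can be sketched as a simple pipeline: an automated classifier scores each post, and anything above a threshold is held for human review rather than published. This is a hypothetical, highly simplified illustration; the scoring function, threshold and field names are assumptions, not any platform's actual system.

```python
# Hypothetical sketch of two-step content screening:
# step 1, a machine classifier flags posts; step 2, humans decide.

REVIEW_THRESHOLD = 0.8  # assumed cut-off, not a real platform value


def machine_flag(post, score_fn):
    """Step 1: an algorithm scores the post; high scores are flagged."""
    return score_fn(post["text"]) >= REVIEW_THRESHOLD


def screen(posts, score_fn):
    """Route machine-flagged posts to a human review queue."""
    review_queue, published = [], []
    for post in posts:
        if machine_flag(post, score_fn):
            review_queue.append(post)  # step 2: a human makes the call
        else:
            published.append(post)
    return review_queue, published


def toy_score(text):
    """Toy scorer standing in for a trained model: share of banned words."""
    banned = {"attack", "manifesto"}
    words = text.lower().split()
    return 3 * len(banned & set(words)) / max(len(words), 1)


posts = [
    {"id": 1, "text": "holiday photos from the beach"},
    {"id": 2, "text": "sharing the attack manifesto"},
]
queue, ok = screen(posts, toy_score)
```

In a real system the scorer would be a trained model and the review queue would feed the human moderation teams the companies describe; the point of the sketch is only that a human sits between the algorithm's flag and the final removal decision.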
A spokeswoman for Twitter told Fact Check that humans played a critical role in moderating tweets.
Before the attack, these companies already used algorithms to flag a variety of material that might be called hate content.
Professor Jean Burgess, director of Queensland University of Technology's Digital Media Research Centre, said platforms had commercial incentives to do so. She pointed to 2017, when Google lost millions after companies discovered their products were being promoted alongside extremist content on YouTube.
Of the nearly 9 million videos YouTube removed in total over the quarter, 71% were flagged by algorithms.
YouTube said that, thanks to machine learning, “well over 90 per cent of the videos uploaded in September 2018 and removed for violent extremism had fewer than 10 views”.
Facebook also bans hate speech, terrorism and violence.
In the three months to September 2018, it dealt with 15 million items of violent or graphic content, of which 97% was computer-flagged.
Algorithms also flagged 99.5% of content deemed to be promoting terrorism, though just 52% of reported hate speech.
Twitter told Fact Check it also uses algorithms to flag video content based on hashtags, keywords, links and other metadata.
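Metadata-based flagging of the kind Twitter describes can be sketched as matching a post's hashtags, link domains and keywords against known-bad lists. The lists, field names and values below are invented for illustration and do not reflect Twitter's actual tools.

```python
from urllib.parse import urlsplit

# Hypothetical sketch of metadata-based flagging: match a tweet's
# hashtags, link domains and keywords against known-bad lists.
# All lists and field names here are assumptions.

FLAGGED_HASHTAGS = {"#examplebadtag"}
FLAGGED_DOMAINS = {"badhost.example"}
FLAGGED_KEYWORDS = {"manifesto"}


def flag_by_metadata(tweet):
    """Return the metadata signals (if any) that trip a rule."""
    reasons = []
    if FLAGGED_HASHTAGS & set(tweet.get("hashtags", [])):
        reasons.append("hashtag")
    if FLAGGED_DOMAINS & {urlsplit(u).netloc for u in tweet.get("links", [])}:
        reasons.append("link")
    if FLAGGED_KEYWORDS & set(tweet.get("text", "").lower().split()):
        reasons.append("keyword")
    return reasons


tweet = {
    "text": "read the manifesto here",
    "hashtags": ["#news"],
    "links": ["https://badhost.example/video"],
}
```

Matching on metadata like this is cheap and fast, which is why it can flag uploads at scale, but it only catches content that reuses known signals; novel material still falls to classifiers and human review.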
In the six months to June 2018, it suspended 205,000 accounts for promoting terrorism, of which 91% were flagged by Twitter’s “internal, proprietary tools” …
Principal researcher, David Campbell
- Scott Morrison, Sunrise interview, March 20, 2019
- Scott Morrison, Facebook post, March 19, 2019
- Scott Morrison, Media conference, March 19, 2019
- Scott Morrison, Tweet of Letter to Japanese Prime Minister Shinzo Abe, March 19, 2019
- ABC, Interview with NZ privacy commissioner, March 27, 2019
- Facebook, Blog post: A Further Update on New Zealand Terrorist Attack, March 20, 2019
- Andreas Kaplan, The challenges and opportunities of Social Media, January 2010
- Christian Porter, Media Release, April 4, 2019
- YouTube, Transparency report, December 2018
- Facebook, Community standards enforcement report, September 2018
- Twitter, Transparency report, 13th edition
- Facebook, Standing against hate, March 27, 2019
- Facebook, Hard questions: how we counter terrorism, June 15, 2017
- Sheryl Sandberg, Op-ed in the New Zealand Herald, March 30, 2019
- Google’s senior vice president, Op-ed in the Financial Times, June 19, 2017
- Tarleton Gillespie, Custodians of the internet, June 2018
- Microsoft, Using PhotoDNA to fight child exploitation, September 12, 2018
- Facebook, Media release on terrorism, December 5, 2016
- YouTube, How Content ID works, accessed March 30, 2019
- YouTube, Expanding our work against abuse of our platform, December 4, 2017
- Mark Zuckerberg, The Internet needs new rules, March 30, 2019
- Parliament, Criminal Code Amendment (Sharing of Abhorrent Violent Material) Bill 2019
- Transcript of Mark Zuckerberg’s US Senate testimony, Washington Post, April 10, 2018
- Facebook, Responses to questions from US House committee, June 29, 2018
- Sidney Fussell, Why the New Zealand Shooting Video Keeps Circulating, March 21, 2019