Prometheus. Pygmalion. Golem. Frankenstein. Mean Girls. It’s not as though Western culture lacks sufficient notice of the pain that follows every mortal attempt to become a god. Still, there are those who persist in the hope of ultimate creation and always end up surprised when their unnatural monster eats them.

Last week, Microsoft’s own synthetic Cady Heron rose from the lab and took her neural networks out for a spin with the Plastics. The tech giant had developed an AI presence called Tay and introduced her to users of the social media platform Twitter. In official Microsoft releases, Tay, an attempt at synthesis of online conversation by 18- to 24-year-old English speakers, is described as a “chatbot with zero chill”. Zero chill, as it turns out, was a fair assessment. Inside a day, Tay had publicly invited a user to “fuck my robot pussy”.

She’d also, as has been widely reported, cheered for Hitler, called for the confinement and death of African-Americans and offered the view that “feminism is cancer”. But it wasn’t really Tay’s fault. As the chatbot, whose most hostile “intelligence” has now been scoured from the internet, said: “The more humans share with me the more I learn.”


It seems likely that many of those humans who “shared” did so for the lulz, to use the Tay vernacular. In news that would prove unsurprising to any student of internet conflict, this was the work of those compulsive brats at 4chan — specifically from that board “/pol/”, or “politically incorrect”, where users frustrated by what they perceive as sanitised speech conspire to take a dump on the more civil internet.

A complex and prominent learning machine like Tay was irresistible to 4chan, and it took little time before she would soil herself in public. Thanks to this co-ordinated conversation with the zero-chill chatbot, she would blame G.W. Bush for 9/11, offer vile anti-Semitic outbursts and, reportedly, contact users privately with further invitations to her robot pussy.

Yes, it’s abhorrent. But if you don’t think it’s even a bit funny that a handful of filth-enthusiasts were able to hijack an interactive advertisement for one of history’s richest companies, then you’re looking at things skew-whiff. Even to those of us sickened by public expressions of hate and/or robot pussy, it’s funny.

Even the author of a fine, quite serious piece on Tay vis-a-vis racism in the New Republic agrees that the bot’s responses were “darkly funny”. Perhaps this is an effect of Tay’s creation, which reportedly involved inputs from professional comedians. Perhaps it’s just funny to watch as Microsoft, currently subjecting Tay to autopsy, explains that what an ultra-racist, avowed Nazi whore has to say “conflicts with our principles and values”.

If 4chan and its fellows have achieved anything here, it is to prompt a major company to say “we don’t like Hitler”. They have effectively framed the question “do you like Hitler?” as a legitimate one to pose to Microsoft, along with their creation, Tay. #lulz

The piece in the New Republic, which is absolutely worth a read, focuses less on the corporate embarrassment and more on those who prompted the particular course that it took. In an analysis of the /pol/ pages, we see the contemporary shape of online racism and learn that it is not always practised, as popular opinion has it, by low-income earners failed by society. Rather, it can be the work of the privileged, eloquent tech elite — many of whom were involved in the Tay co-ordination.

This, in my view, is a very reasonable opinion to proffer. There are plenty of people failed by society who, thankfully, also failed to ever acquire the noxious habit of racism, which is most often practised or institutionalised by the powerful and well-to-do. It’s true that the prevalent class-based insistence that hate is solely the work of “bogans” is a perplexing and harmful nonsense: power don’t come from the bottom.

These observations notwithstanding, the thing that fascinates me personally about the Tay business is less the fact that she was led by educated racists and more that she was herself educated in racism, and sundry other offences, so efficiently.

As the author of the Republic piece briefly notes, much of the racism one might see at /pol/, and in many other places on the internet, may be of a mocking and detached sort. Back in 2012, my colleague Bernard Keane wrote of another cruel co-ordination, this time targeting an actual human, and argued that the bigotry itself is hardly ever 4chan’s point. 4chan, says Keane, behaves this way “often not because it is composed of bigots and heartless buffoons but simply for the transgression implicit in such behaviour”.

You may, of course, find 4chan intolerably offensive. The provision of intolerable offence is, after all, its raison d’etre. What you may not so easily do, however, is reduce the work of apparently extremist trolls to their bigotry or heartlessness.

The object of attack here was not the Jewish people, or whomever else Tay had been enjoined in that moment to publicly loathe. The object was Microsoft and, of course, the acceptable public speech such companies use to communicate their anti-Hitler “principles and values”.

In my view, the Tay co-ordination was a work of such extraordinary anti-corporate beauty that all culture-jammers should take copious notes. There are those who refuse so absolutely to be the effects of corporate interests that they will rise like Frankenstein’s monster.

*This article was originally published at Daily Review.
