For one week in January 2012, Facebook deliberately manipulated the emotional content of the news feeds seen by nearly 700,000 users, just to see how it would affect their moods. This secret mood manipulation experiment, as The Atlantic called it, is yet more proof that Silicon Valley and its poisonous culture need to be told that they don't get to decide the future of human society.
The question, though, is whether investors, legislators and regulators, bedazzled by multibillion-dollar company valuations but baffled by the technology behind it all, have either the clue or the spine to do something about it.
While Facebook's controversial experiment was conducted more than two years ago, the results were only published earlier this month in the prestigious Proceedings of the National Academy of Sciences, in a paper titled "Experimental evidence of massive-scale emotional contagion through social networks". The news crossed over into mainstream media over the weekend as people began to understand what Facebook had actually done.
The questions Facebook was investigating are simple enough. Are people's moods influenced by the moods expressed by their social media contacts? If so, to what extent? A typical Facebook user's friends, family and other contacts generate far more posts than can be shown in that user's news feed -- reportedly around 1500 items at any given time. Facebook already has processes for selecting the most "relevant" -- taking into account factors such as popularity, the closeness of personal connections and, presumably, commercial reasons.
So what happens when that selection process has an emotional bias? If users see a preponderance of happy messages, do they feel left out and get depressed? Or are good and bad moods contagious?
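In rough terms, that kind of selection bias is easy to picture. Facebook's actual ranking system is proprietary, so the following is purely an illustrative sketch: the word lists, post format and omission rates here are assumptions, not details from the paper, but the shape matches what the researchers describe — classify each candidate post's emotional valence, then randomly withhold posts of one valence from the feed.

```python
import random

# Purely illustrative: Facebook's real feed ranking is proprietary.
# The word lists and omission probability below are assumptions for
# the sake of the sketch, not figures from the published paper.

POSITIVE_WORDS = {"happy", "great", "love", "wonderful"}
NEGATIVE_WORDS = {"sad", "awful", "hate", "terrible"}

def valence(post_text):
    """Crudely classify a post as 'positive', 'negative' or 'neutral'."""
    words = set(post_text.lower().split())
    if words & POSITIVE_WORDS:
        return "positive"
    if words & NEGATIVE_WORDS:
        return "negative"
    return "neutral"

def biased_feed(posts, suppress, omit_probability, rng=random):
    """Return a feed with posts of one valence randomly withheld.

    In the published experiment, emotional posts were omitted from the
    ranked feed with a per-user probability; they were withheld, not
    deleted, and could still appear elsewhere on the site.
    """
    feed = []
    for post in posts:
        if valence(post) == suppress and rng.random() < omit_probability:
            continue  # withhold this post from the feed
        feed.append(post)
    return feed

posts = ["I love this wonderful day", "What an awful commute",
         "Meeting at 3pm", "So sad about the news"]
print(biased_feed(posts, suppress="negative", omit_probability=1.0))
```

With the omission probability set to 1.0, every negative post is filtered out and only the positive and neutral posts survive; in the real study the probability was well below that, producing a subtler statistical skew rather than an obvious one.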
It turns out the answer is yes. Or, as Facebook's research blandly puts it:
"When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred. These results indicate that emotions expressed by others on Facebook influence our own emotions, constituting experimental evidence for massive-scale contagion via social networks ...
"We also observed a withdrawal effect: People who were exposed to fewer emotional posts (of either valence) in their News Feed were less expressive overall on the following days, addressing the question about how emotional expression affects social engagement online."
Facebook thinks it has users' permission for this sort of emotional tinkering because, buried in its data use policy, part of its 9045-word terms of service, is the statement that "we may use the information we receive about you ... for internal operations, including troubleshooting, data analysis, testing, research and service improvement".
To many of the geeks who reacted to the story across the weekend, this isn't news. Facebook manipulates you all the time.
In one sense they're right. Advertising companies -- and that's what Facebook is -- have emotional manipulation as their primary mission. They hope we'll feel positive about their message, buy the product, or vote for the candidate. Naturally they'll conduct research to see what techniques sell most effectively.
But isn't there a difference between conducting research with a clear and specific commercial aim and poking at people's emotions to see what happens?
"Let's call the Facebook experiment what it is: a symptom of a much wider failure to think about ethics, power and consent on platforms," tweeted Kate Crawford, an Australian who researches the politics and ethics of data for Microsoft Research, the MIT Center for Civic Media, the Information Law Institute at NYU, and the University of New South Wales.
"Perhaps what bothers me most about the Facebook experiment: it's just one glimpse into an industry-wide game. We are A/B testing the world," she tweeted.
Crawford is right. This isn't just a Facebook thing. The entire Silicon Valley realm, what I sometimes call Startupland, is run by engineers who see us less as humans with our own needs, desires and fears, and more as data to be manipulated.
The core problem here is that for all its smarts, Startupland is populated by a very narrow segment of society: highly intelligent, well-educated software engineers and their associates. Most are from privileged backgrounds -- Stanford University is the main gateway on the United States west coast, Harvard on the east.
And most, it must be said, are white males. My experience of tech conferences in San Francisco and San Jose is that white men do the presentations, perhaps along with a smattering of middle-class Asian people. You might see Hispanics serving food and drink, while blacks might provide security muscle. It's a clearer, sharper racial stratification than you see in Australia.
By coincidence, last week Quartz published an essay by top-shelf software engineer Carlos Bueno saying that the next thing Silicon Valley needs to disrupt is its own culture:
"Silicon Valley has ... created a make-believe cult of objective meritocracy, a pseudo-scientific mythos to obscure and reinforce the belief that only people who look and talk like us are worth noticing. After making such a show of burning down the bad old rules of business, the new ones we've created seem pretty similar.
"It's even been stated [by Max Levchin, a founder of PayPal]: 'The notion that diversity in an early team is important or good is completely wrong. You should try to make the early team as non-diverse as possible.'"
Bueno is right. The geeks should not inherit the earth. Not this narrow little enclave of geeks, anyway.