What Facebook’s social experiments really say
Jul 04 2014
Research tells the social network how to facilitate user interaction
To be fair, The Wall Street Journal attempted an answer. It talked to a former data science team member, Andrew Ledvina, who said, “Anyone on the team could run a test. They’re always trying to alter people’s behaviour.”
The interesting part, though, is what other tests the team runs — apart from reducing the number of “positive” and “negative” posts in users’ news feeds to see how that changed the tone of their own posts, as the data science group’s Adam Kramer did in the now-infamous experiment.
In fact, Facebook has never tried to hide it was doing this kind of research. It just hasn’t advertised it. The best place to start looking for the data science team’s insights is, you guessed it, the unit’s Facebook page.
It links to a lot of innocuous research, like this little study of how mothers on Facebook relate to their children or this one proving that true rumours are more viral than false ones. There is, however, plenty of more sensitive material, like this study of how close Facebook friendships really are. The scientist behind it, Moira Burke, surveyed about 4,000 people about their relations with their friends and then matched the data to the server logs of the participants’ Facebook activity. Burke says in her description of the study that all those data were anonymised, but the approach still raises unpleasant possibilities.
And then there are behaviour-changing experiments like the 2012 emotion study. Another was run during the 2010 US congressional election, when 98 per cent of American users aged 18 and over were shown a “social message” at the top of their news feeds, encouraging them to vote and then report having done so by hitting a special button. Users could see which of their friends had done so. One per cent saw the same message, but without the pictures of their friends. The remaining one per cent saw nothing at all. The scientists found that the social messages worked best and, by matching their data to voting records, showed that the first message generated tens of thousands of votes.
The experiment was widely reported in 2012, after the team that ran it published the results in Nature. For some reason, nobody got mad about it, though one could imagine repressive regimes interested in 100 per cent voting turnouts using the technology to smoke out dissenters.
Much of the Facebook research is published. The easiest way to unearth it is to take the names of scientists from the data science team page and run a specialised Google Scholar search for them.
Kramer, for example, suggested the number of positive words in Facebook posts as a measure of “gross national happiness” and analysed Facebook users’ “self-censorship” — that is, last-minute edits made before publishing a post (he found that people do it more for group posts, when their audience is harder to define). In 2011, he also argued that Facebook “holds potential to influence health behaviours of individuals and improve public health.”
Facebook has no plans to stop this research activity. It has obvious commercial value because it tells the social network how to facilitate user interaction (like this study by Moira Burke, focusing on newcomers’ socialisation on Facebook). The academic community loves it, too: Facebook scientists’ research is widely quoted. Besides, it holds huge potential for governments seeking to gauge the public mood or influence it, as in the election experiment or in Kramer’s public health example. That may explain why Facebook chief operating officer Sheryl Sandberg pointedly refused to apologise for the 2012 experiment, saying only that the company “regrets” the fact that it was “communicated really badly.”
(Leonid Bershidsky is a Bloomberg View contributor and a Berlin-based writer, author of three novels and two nonfiction books)