Facebook Data Scientists Manipulated News Feed To Perform A Psychology Experiment On Nearly 700,000 Users

Facebook isn't free.

Aside from being the world's largest social network, Facebook is also a sociologist's dream. With 1.28 billion active users worldwide, the social network has created the most formidable data set ever assembled for studying human behavior.

Not one to let your data go to waste, the company employs a team of data scientists to run experiments on user data and behavior, as it did in a recent study first reported by New Scientist.

According to the study, Facebook manipulated the News Feeds of 689,003 users to study whether emotions can be contagious online. For one week, some users were shown News Feed posts containing a higher number of positive words, while others were shown posts with more negative sentiment. From the study:


When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred. These results indicate that emotions expressed by others on Facebook influence our own emotions, constituting experimental evidence for massive-scale contagion via social networks.

This sort of social engineering is nothing new for Facebook, which has been using user data to conduct scientific studies for years now. Back in 2012, MIT Technology Review reported that Mark Zuckerberg himself was using the social network's influence to conduct personal experiments:



Influenced in part by conversations over dinner with his med-student girlfriend (now his wife), Zuckerberg decided that he should use social influence within Facebook to increase organ donor registrations. Users were given an opportunity to click a box on their Timeline pages to signal that they were registered donors, which triggered a notification to their friends. The new feature started a cascade of social pressure, and organ donor enrollment increased by a factor of 23 across 44 states.

Given that Facebook has spent the better part of 2014 making very public gestures to assure users it will protect their privacy, experiments like this one, which treat unwitting users and their data as test subjects, threaten to damage the social network's already shaky reputation on privacy. And while Facebook insists its experiments are all designed to yield insights that will ultimately improve users' experience on the network, the study, which openly acknowledges emotionally manipulating its subjects, has already outraged privacy advocates and casual users alike.

Get off Facebook. Get your family off Facebook. If you work there, quit. They're fucking awful.

While nobody likes being emotionally manipulated, part of the outrage seems to stem from the fact that Facebook is technically in the right here. When you sign up for Facebook, you are, in fact, consenting to have your data and profile used in these kinds of experiments. And, as the study notes, because Facebook's data team used machine analysis, rather than human readers, to identify the positive and negative posts, the experiment didn't breach Facebook's privacy policy.
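That "machine analysis" was, per the published paper, automated word counting against lists of emotion words (the paper cites the LIWC text-analysis software). A minimal sketch of that kind of word-list classification, using made-up word lists rather than the real dictionaries, might look something like this:

```python
# Rough illustration of word-list "machine analysis" of post sentiment.
# The word lists below are hypothetical stand-ins, not actual LIWC dictionaries.

POSITIVE_WORDS = {"happy", "great", "love", "wonderful", "excited"}
NEGATIVE_WORDS = {"sad", "angry", "terrible", "awful", "hate"}

def classify_post(text: str) -> dict:
    """Flag whether a post contains at least one positive or negative emotion word."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return {
        "has_positive": bool(words & POSITIVE_WORDS),
        "has_negative": bool(words & NEGATIVE_WORDS),
    }

if __name__ == "__main__":
    print(classify_post("So excited for the weekend!"))  # {'has_positive': True, 'has_negative': False}
    print(classify_post("Traffic was awful today."))     # {'has_positive': False, 'has_negative': True}
```

The point of this approach, as Facebook argued, is that no human researcher ever reads an individual user's posts; software only tallies which posts contain emotion words.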

Though this sort of thing may be nothing new, it's a reminder that just because you don't have to pay to use Facebook doesn't mean admission to the social network is free.

Adam Kramer, the Facebook data scientist who co-authored the study, posted a response to the backlash against the study on his Facebook page, noting, "In hindsight, the research benefits of the paper may not have justified all of this anxiety."

Here's the text of the post in full:



OK so. A lot of people have asked me about my and Jamie and Jeff's recent study published in PNAS, and I wanted to give a brief public explanation. The reason we did this research is because we care about the emotional impact of Facebook and the people that use our product. We felt that it was important to investigate the common worry that seeing friends post positive content leads to people feeling negative or left out. At the same time, we were concerned that exposure to friends' negativity might lead people to avoid visiting Facebook. We didn't clearly state our motivations in the paper.

Regarding methodology, our research sought to investigate the above claim by very minimally deprioritizing a small percentage of content in News Feed (based on whether there was an emotional word in the post) for a group of people (about 0.04% of users, or 1 in 2500) for a short period (one week, in early 2012). Nobody's posts were "hidden," they just didn't show up on some loads of Feed. Those posts were always visible on friends' timelines, and could have shown up on subsequent News Feed loads. And we found the exact opposite to what was then the conventional wisdom: Seeing a certain kind of emotion (positive) encourages it rather than suppresses it.

And at the end of the day, the actual impact on people in the experiment was the minimal amount to statistically detect it -- the result was that people produced an average of one fewer emotional word, per thousand words, over the following week.

The goal of all of our research at Facebook is to learn how to provide a better service. Having written and designed this experiment myself, I can tell you that our goal was never to upset anyone. I can understand why some people have concerns about it, and my coauthors and I are very sorry for the way the paper described the research and any anxiety it caused. In hindsight, the research benefits of the paper may not have justified all of this anxiety.

While we’ve always considered what research we do carefully, we (not just me, several other researchers at Facebook) have been working on improving our internal review practices. The experiment in question was run in early 2012, and we have come a long way since then. Those review practices will also incorporate what we’ve learned from the reaction to this paper.
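Kramer's description of "deprioritizing" amounts to per-load random omission: for the affected accounts, a post containing an emotional word had some chance of being left out of that particular News Feed load, but it was never removed from the friend's timeline or from later loads. A toy illustration of that idea, with an invented omission probability and a hypothetical classifier like the one sketched above:

```python
import random

# Toy illustration of per-load "deprioritization": an eligible post is randomly
# omitted from one particular feed load rather than hidden permanently.
# OMIT_PROBABILITY is an invented value, not the study's actual parameter.
OMIT_PROBABILITY = 0.5

def build_feed_load(posts, classify):
    """Return the posts shown on a single News Feed load.

    `classify(post)` is a hypothetical helper that flags emotion words in a
    post's text (e.g. the word-list classifier sketched earlier) and returns
    a dict with a "has_positive" key.
    """
    shown = []
    for post in posts:
        if classify(post)["has_positive"] and random.random() < OMIT_PROBABILITY:
            continue  # skipped on this load only; it can reappear on the next load
        shown.append(post)
    return shown
```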
