Tech

Facebook Says Its Fake News Label Helps Reduce The Spread Of A Fake Story By 80%

BuzzFeed News obtained an email sent by a Facebook executive to its fact-checking partners that for the first time shared internal data about the program.


A news story that's been labeled false by Facebook's third-party fact-checking partners sees its future impressions on the platform drop by 80%, according to new data contained in an email sent by a Facebook executive and obtained by BuzzFeed News.

The message also said it typically takes "over three days" for the label to be applied to a false story, and that Facebook wants to work with its partners to speed the process.

The data about the effectiveness of Facebook's fact-checking partnership initiative was contained in a brief email sent today by Jason White, Facebook's manager of news partnerships, to the company's fact-checking partners.

"We have been closely analyzing data over several weeks and have learned that once we receive a false rating from one of our fact checking partners, we are able to reduce future impressions on Facebook by 80 percent," White wrote.

A Facebook spokesperson told BuzzFeed News that the system begins to "demote" a story in the News Feed after a single fact-checker finds it to be false. The label is then applied to a link once at least two checkers rate it false.

The statistic about the reduced spread of fact-checked stories (which was not accompanied by any explanation of how the figure was calculated) is the first internal data Facebook has shared about its checking program. White's email emphasized that the company wants to work with its partners "to surface these hoaxes sooner" because of the lag between a hoax being published and the label being applied.

"It commonly takes over 3 days, and we know most of the impressions typically happen in that initial time period," White wrote. "We also need to surface more of them, as we know we miss many."

Facebook has been working with external fact-checkers like PolitiFact and Snopes since December in an effort to reduce the spread of false stories on its platform. The checkers are given access to a special tool where they can view stories being shared on Facebook that are flagged as potentially worthy of a fact check. If two or more checkers deem a link to be false, Facebook adds a label to inform users that it has been flagged by fact-checkers.

From the moment of its launch, the efficacy of the disputed label has been questioned. In May, The Guardian reported that some publishers of false stories saw shares of their content increase after the disputed label was applied. Last month, Politico cited data from Yale researchers who found the label "has only a small impact on whether readers perceive their headlines as true."

White's email is the first time the company has provided its own data to back up public statements from executives that the fact checks and labels do help stop a story from being seen on the platform. It also gives the fact-checking partners the first tangible sense of the impact of their work.

Facebook has also long emphasized that data gathered from fact checks help inform decisions made by News Feed algorithms in terms of what content to surface for users, and that this ultimately has more effect than the public-facing label. But the checkers have been asking for data from Facebook since the early days of the program.


White's email also included a caution that the push to increase the speed and volume of fact checks should not come at the expense of free speech. This likely refers to Facebook's guidance to fact-checking partners to only focus on stories that are 100% false, and avoid any with shades of grey.

"Increasing our speed and efficacy is important, but it’s equally important we do this the right way, and don’t restrict legitimate speech," he said. "It’s a difficult tension, but we are confident we can improve our efforts."

Read the full email:

We'd like to provide an initial update on the progress we’re seeing in our shared efforts to reduce false news on Facebook. Thanks to your hard work and partnership, we have learned how to reduce distribution of news hoaxes.

We have been closely analyzing data over several weeks and have learned that once we receive a false rating from one of our fact checking partners, we are able to reduce future impressions on Facebook by 80 percent. While we are encouraged by the efficacy we’re seeing, we believe there is much more work to do. As a first priority, we are working to surface these hoaxes sooner. It commonly takes over 3 days, and we know most of the impressions typically happen in that initial time period. We also need to surface more of them, as we know we miss many.

Increasing our speed and efficacy is important, but it’s equally important we do this the right way, and don’t restrict legitimate speech. It’s a difficult tension, but we are confident we can improve our efforts.

Please know your partnership is valuable to us. We know we must also take additional steps to meet this challenge and are continuing to pursue a multi-pronged approach to stop the spread of false news on Facebook.

We plan to provide you additional updates about our efforts going forward, and appreciate all the work you continue to do.

Craig Silverman is Media Editor for BuzzFeed News and is based in Toronto.

Contact Craig Silverman at craig.silverman@buzzfeed.com.
