Facebook stumbles with early effort to stamp out fake news

Photo: The logo of the messaging application WhatsApp pictured on a smartphone screen in Taipei, Taiwan, 26 September 2017. EPA-EFE/Ritchie B. Tongo

Bloomberg

Facebook Inc.’s strategy to stamp out fake news is struggling.
The company outsources the process to third-party fact checkers who can only tackle a small fraction of the bogus news that floods the social network, according to interviews with people involved in the process. And screenshots obtained by Bloomberg reveal a process that some partners say is too cumbersome and inefficient to stop misinformation duplicating and spreading.
“There is no silver bullet,” Facebook said in a statement. “This is part of a multi-pronged approach to combating false news. We have seen real progress in our efforts so far, but are not nearly done yet.”
The flaws highlight a fundamental question that will be asked this week when internet companies testify in front of Congressional committees: How responsible should Facebook, Google and Twitter Inc. be for information others distribute through their systems?
Facebook started noticing fake stories trending on its network as early as the summer of 2016, but was slow to take responsibility. A few days after President Donald Trump’s November election win, Facebook Chief Executive Officer Mark Zuckerberg said it was “crazy” to think fake news had swayed voters. But as it became clear that some fake political stories garnered more traffic on Facebook than work from traditional outlets, criticism of Zuckerberg’s stance mounted. After reflecting on the problem, he said he would prioritise fixing it. His main solution has been the fact-checking effort.
In early 2017, Facebook contracted for one year with PolitiFact, Snopes, ABC News, factcheck.org and the Associated Press to sniff out fake news on its social network. The company argued that paying outside firms helped address the problem without making Facebook the arbiter of what is true or untrue. Some critics say the company wants to avoid this responsibility because that could make it subject to more regulation and potentially less profitable, like media firms.
A previous Facebook effort to hire people to curate articles was criticised as biased and the company’s artificial intelligence systems aren’t yet smart enough to determine what’s suspicious on their own. However, an inside look at Facebook’s fact-checking operation suggests that the small-scale, human approach is unlikely to control a problem that’s still growing and spreading globally.
When enough Facebook users say an article may be false, the story ends up on a dashboard accessible to the fact-checking staff at the five organisations, according to screenshots obtained by Bloomberg that showed a rash of bogus news. A list of questionable stories appears in Facebook’s signature dark blue font, accessible only after the organisations’ journalists log into their personal social-media accounts.
The fact-checking sites sometimes have to debunk the same story multiple times. There’s no room for nuance and it’s unclear how effectively they’re addressing the overall problem, workers for the fact-checking groups said in interviews. They only have time to tackle a small fraction of the articles in their Facebook lists, the people added. Once two of the fact-checking organisations mark an article as false, a “disputed” tag is added to the story in Facebook’s News Feed. That cuts the number of people seeing the piece by 80 percent, Facebook said recently. But the process typically takes more than three days, the company said.
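The mechanics described above can be modelled in a few lines of code. The sketch below is purely illustrative: the article confirms only that user flags queue a story for review, that two independent “false” rulings trigger a “disputed” tag, and that the tag cuts views by roughly 80 percent. The flag threshold, the Story class, and every other name here are assumptions, not Facebook’s actual system.

    # Illustrative model of the flag -> review -> "disputed" pipeline.
    # FLAG_THRESHOLD is an assumed value; Facebook has not disclosed one.
    from dataclasses import dataclass, field

    FLAG_THRESHOLD = 100   # assumed number of user flags before review
    RULINGS_NEEDED = 2     # per the article: two fact-checkers must rule "false"
    REACH_PENALTY = 0.8    # per the article: a "disputed" tag cuts views ~80%

    @dataclass
    class Story:
        url: str
        user_flags: int = 0
        false_rulings: set = field(default_factory=set)  # orgs ruling "false"

        @property
        def needs_review(self) -> bool:
            # Enough user flags: the story lands on the shared dashboard.
            return self.user_flags >= FLAG_THRESHOLD

        @property
        def disputed(self) -> bool:
            # Two independent organisations must mark the article false.
            return len(self.false_rulings) >= RULINGS_NEEDED

        def estimated_reach(self, baseline_views: int) -> int:
            # Apply the reach penalty only once the story is disputed.
            factor = (1 - REACH_PENALTY) if self.disputed else 1.0
            return int(baseline_views * factor)

    story = Story(url="example.com/bogus-article", user_flags=150)
    assert story.needs_review

    story.false_rulings.add("PolitiFact")
    story.false_rulings.add("Snopes")  # second independent ruling
    assert story.disputed
    print(story.estimated_reach(10_000))  # -> 2000, an 80 percent reduction

Even in this toy form, the weakness the fact-checkers describe is visible: nothing links duplicate copies of the same story, so each reposted version must accumulate flags and rulings from scratch.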
“It might be even longer, honestly,” said Aaron Sharockman, executive director of PolitiFact. “Everyone wishes for more transparency as to the impact of this tool.” The group has marked about 2,000 links on Facebook as false so far, but he said he’s never personally seen a “disputed” tag from this work on the social network.
Facebook expects this manual fact-checking work to help the company improve its algorithm over time, so it can get smarter at automatically spotting patterns and figuring out what stories might be worth showing to human partners, even before they’re flagged by users. Facebook also plans to extend its contracts beyond the first year. The deals currently offer about $100,000 annually to some sites, while others do it for free, according to a person familiar with the matter.
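The article describes that feedback loop only in general terms, but the idea is standard supervised learning: human rulings become training labels for a model that scores incoming stories. The sketch below is a minimal, hypothetical version using scikit-learn; the features, model choice, and example headlines are all assumptions, since nothing about Facebook’s actual classifier is public.

    # Hypothetical sketch: fact-checker rulings used as labels to train a
    # simple text classifier that prioritises the human review queue.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Invented training data: headlines with human fact-check rulings.
    headlines = [
        "Pope endorses candidate in shock reversal",
        "City council approves new transit budget",
        "Miracle cure doctors don't want you to know",
        "Quarterly jobs report shows modest growth",
    ]
    labels = [1, 0, 1, 0]  # 1 = ruled false by fact-checkers, 0 = not disputed

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(headlines, labels)

    # Score new stories so likely fakes reach human reviewers first,
    # even before users flag them.
    queue = ["Senator secretly a lizard, insiders say",
             "Local bridge reopens after repairs"]
    for headline, score in zip(queue, model.predict_proba(queue)[:, 1]):
        print(f"{score:.2f}  {headline}")

The economics matter as much as the model: with only about 2,000 labelled links from a partner like PolitiFact so far, the training signal the manual effort produces is still small relative to the volume of content on the network.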
