Bloomberg
Facebook Inc.’s chief security officer warned that the fake news problem is more complicated and dangerous to solve than the public thinks.
Alex Stamos, who’s handling the company’s investigation into Russia’s use of the social media platform ahead of the 2016 US presidential election, cautioned against hoping for technical solutions, which he said could carry the unintended consequence of ideological bias.
It’s very difficult to spot fake news and propaganda using just computer programs, Stamos said in a series of Twitter posts. “Nobody of substance at the big companies thinks of algorithms as neutral,” Stamos wrote, adding that the media is simplifying the matter. “Nobody is not aware of the risks.”
The easy technical solutions would boil down to silencing topics that Facebook is aware are being spread by bots, which should only be done “if you don’t worry about becoming the Ministry of Truth” with machine learning systems “trained on your personal biases,” he said.
Stamos’s comments shed light on why Facebook added 1,000 more people to review its advertising, rather than attempting an automated solution. The company sent a note to advertisers telling them it would start to manually review ads targeted to people based on politics, religion, ethnicity or social issues. The company is trying to figure out how to monitor use of its system without censoring ideas, after the Russian government used fake accounts to spread political discord in the US ahead of the election.
“A lot of people aren’t thinking hard about the world they are asking [Silicon Valley] to build,” Stamos wrote.