The advent of big data was supposed to usher in a more precise and rational world. Instead, it might be leading us into the swamp of “alternative facts.”
Data may not lie, but they can be interpreted in ways that have the same effect. Consider President Donald Trump’s persistent claim that millions voted fraudulently in the 2016 election. In a twisted way, it might be based on data: In 2012, a study found that some 2.75 million people were registered to vote in two states or more. Although there’s zero evidence that any of them actually voted twice, that doesn’t matter to Trump or his supporters.
This sleight of hand both illustrates and contributes to a bigger problem: We’re losing trust in numbers, especially statistics. Their sheer volume and variety can be overwhelming. In Politico’s recent roundup of Trump’s popularity figures, for example, the approval numbers among nine polls ranged from 36 percent to 54 percent. Add the hangover that many still suffer from the misleading presidential election predictions, and it’s not surprising that people are starting to tune out data altogether, or simply interpret them in ways that support their beliefs.
I don’t know whether this will lead to a full-blown crisis of democracy, but I think it’s already fair to place at least some of the blame on big data. Algorithms developed by companies such as Google parent Alphabet Inc. and Facebook Inc. enable partisan confirmation bias. They tailor our online environments not to the truth, but to the specific information we search for or click on. This can undermine our understanding of, and trust in, objective scientific and historical facts.
Here’s an extreme example: Dylann Roof claimed in his manifesto that it was a Google search for “black on white crime” that led him to massacre nine people in a Charleston, South Carolina church in 2015. Think about that search term. What kinds of texts will perfectly match “black on white crime,” as opposed to, say, “statistics on crime rates by race”? Naturally, Roof got links to racist web sites with their own alternative facts — just as a search for “who really killed JFK” will, more often than not, lead to conspiracy sites.
When I typed the phrase “Was the Hol” into Google, the search engine auto-completed to “Was the Holocaust real?” Of the top six search results, four were Holocaust-denying sites. That’s despite Google’s efforts to address the problem back in December, and I’m not the only one getting such results.
Such steering happens even when we’re not actively searching. The Wall Street Journal, in the lead-up to the election, kept track of the Facebook news feeds of people on the left and on the right. In one feed Clinton was evil, in the other Trump was. The Guardian experimented with placing Democratic voters in the right-wing feed and Republican voters on the left. Being in the liberal feed, one Republican voter said, was “like reading a book by a fool.” Different stories buttressed by different kinds of evidence, some of it very weak, have led to increasingly non-intersecting world views. Worse, checking what you saw on Facebook by going to Google might not help — it depends on how you phrase the question.
Despite these problems, big data companies try to maintain reputations as sources of reliable information. In a recent advertisement for Google Home, a father reads to his daughter and asks Google to look up facts about blue whales. It’s portrayed as a sort of handy encyclopedia, an extension of trustworthy household knowledge, a modern-day equivalent of the kitchen dictionary. Let’s just hope little sweetie doesn’t ask about the Holocaust.
Cathy O’Neil is a mathematician who has worked as a professor, hedge-fund analyst and data scientist. She founded ORCAA, an algorithmic auditing company, and is the author of “Weapons of Math Destruction.”