There’s a perfectly good reason to break open the secrets of social-media giants. Over the past decade, governments have watched helplessly as their democratic processes were disrupted by misinformation and hate speech on sites like Meta Platforms Inc’s Facebook, Alphabet Inc’s YouTube and Twitter Inc. Now some governments are gearing up for a comeuppance.
Over the next two years, the European Union and the UK will bring in laws to rein in the troublesome content that social-media firms have allowed to go viral. There has been much skepticism about regulators' ability to look under the hood of companies like Facebook. Regulators, after all, lack the technical expertise, manpower and salaries that Big Tech boasts. And there's another snag: the artificial-intelligence (AI) systems tech firms use are notoriously difficult to decipher.
But naysayers should keep an open mind. New techniques are developing that will make probing those systems easier. AI’s so-called black-box problem isn’t as impenetrable as many think.
AI powers most of the action we see on Facebook or YouTube and, in particular, the recommendation systems that line up which posts go into your newsfeed, or what videos you should watch next — all to keep you scrolling. Millions of pieces of data are used to train AI software, allowing it to make predictions in a way loosely analogous to human judgment. The hard part, for engineers, is understanding how the AI arrives at a decision in the first place. Hence the black-box concept.
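The ranking idea is simple to sketch. The toy below scores each candidate post by how well its topics match a user's inferred interests and surfaces the highest scorers first; real feeds use learned models over millions of signals, and every name and number here is invented for illustration, not any platform's actual system.

```python
# Toy engagement-driven ranking: score each candidate post by matching its
# topic features against a user's interest weights, then sort descending.

def rank_posts(user_interests, posts):
    """Return post ids ordered by predicted engagement, highest first."""
    def score(post):
        # Dot product of interest weights and the post's topic features.
        return sum(user_interests.get(topic, 0.0) * weight
                   for topic, weight in post["topics"].items())
    return [p["id"] for p in sorted(posts, key=score, reverse=True)]

user = {"politics": 0.9, "sports": 0.1}
posts = [
    {"id": "cat_video", "topics": {"pets": 1.0}},
    {"id": "match_recap", "topics": {"sports": 1.0}},
    {"id": "heated_debate", "topics": {"politics": 1.0}},
]
print(rank_posts(user, posts))  # ['heated_debate', 'match_recap', 'cat_video']
```

The point of the sketch is that the ranker optimizes a single proxy — predicted engagement — which is exactly why divisive material can float to the top.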
Machine-learning models present a paradox: they will often give the right answer, but their designers often can't explain how. That doesn't make them completely inscrutable. A small but growing industry is emerging that monitors how these systems work. Its most popular task is improving an AI model's performance. Companies that use these services also want to make sure their AI isn't making biased decisions when, for example, sifting through job applications or granting loans.
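One standard way such monitoring works is permutation importance: scramble one input feature at a time and watch how much the model's accuracy drops; large drops flag the features the model actually leans on. The tiny "model" and loan-style data below are invented to show the principle, and are not any vendor's tooling.

```python
# Permutation importance on a toy loan classifier: the average accuracy
# drop when a column's values are shuffled reveals which inputs matter.
from itertools import permutations

def model(income, zip_code):
    # Stand-in for an opaque trained classifier: approves when income > 50.
    return 1 if income > 50 else 0

# rows of (income, zip_code, true_label)
rows = [(30, 101, 0), (60, 102, 1), (80, 101, 1), (40, 103, 0)]

def accuracy(data):
    return sum(model(inc, z) == y for inc, z, y in data) / len(data)

def importance(col, data):
    """Average accuracy drop over every possible shuffle of column `col`."""
    base = accuracy(data)
    drops = []
    for perm in permutations(row[col] for row in data):
        shuffled = [tuple(perm[i] if j == col else v
                          for j, v in enumerate(row))
                    for i, row in enumerate(data)]
        drops.append(base - accuracy(shuffled))
    return sum(drops) / len(drops)

print(importance(0, rows))  # 0.5 -> the model leans heavily on income
print(importance(1, rows))  # 0.0 -> zip code is ignored entirely
```

Notice that the probe never reads the model's internals — it treats the model as a black box and still recovers which feature drives its decisions, which is exactly what an auditor hunting for bias needs.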
In a lot of ways, the black box's reputation for impenetrability has been exaggerated, according to Liran Hosan, chief executive officer of Aporia, one such AI-monitoring firm. With the right technology, you can even — potentially — unpick the ultra-complicated language models that underpin social-media firms, in part because in computing, even language can be represented by numerical code. Finding out how an algorithm might be spreading hate speech, or failing to tackle it, is certainly harder than spotting mistakes in the numerical data that represent loans, but it's possible. And European regulators are going to try.
According to a spokesman for the European Commission, the upcoming Digital Services Act will require online platforms to undergo audits once a year to assess how "risky" their algorithms are to citizens. That may sometimes force firms to provide unprecedented access to information that many consider trade secrets: code, training data and process logs. (The commission said its auditors would be bound by confidentiality rules.)
—Bloomberg