Can hiring algorithms be fixed now?

Some of the US’s largest employers, including General Motors, IBM and Meta, have formed a new venture with a laudable goal: ensuring that artificial intelligence doesn’t perpetuate or worsen discrimination in hiring. The mere existence of the Data & Trust Alliance, as it is called, is good news — but it needs to be a lot better.
Hiring has changed radically in the past couple of decades. Online recruiting allows many more people to apply to each opening, leaving employers to sort through far larger piles of applications. Companies initially coped by simply throwing out resumes that lacked certain keywords or skills, but now they’re increasingly using more mathematically sophisticated AI.
Typically, such algorithms are licensed from specialised software vendors, which advertise their products as unbiased, fair and objective.
In reality, AI tends to entrench discrimination — for example, by scoring candidates based on data from a tainted history, in which women and minorities have faced much greater obstacles to success. As a result, applicants with distinctively Black names remain less likely to get job interviews, and resume-audit studies consistently find that this gap has not narrowed over time.
To date, corporate America has largely sought to avoid the issue: If companies remained clueless about how their vendors’ algorithms actually worked, the logic went, they could maintain plausible deniability. But that’s not a viable long-term strategy. Eventually, regulators will get wise and start to enforce anti-discrimination laws.
The emergence of the Data & Trust Alliance suggests that big employers are changing tack, looking to get ahead of regulation. Its first initiative, “Algorithmic Bias Safeguards for Workforce,” includes a list of questions designed to help human-resource managers judge how well vendors are monitoring and addressing bias in their hiring algorithms.
I have my own bias here: I run an algorithm-auditing business, so I stand to gain if companies spend more money scrutinising the AI they use. That said, my experience working on hiring algorithms allows me to see a big flaw in the alliance’s approach: It sets no specific, binding standards.
Without clear thresholds to define discrimination, the so-called safeguards won’t really guard against anything.
Even where industry standards have emerged, they’re pretty awful. Consider the four-fifths rule. It says that the selection rate for protected groups, such as women and minorities, should be no less than four-fifths that of the reference group. If, for example, 5% of White candidates make it through a given hiring filter, then the rate for Black candidates should be no less than 4%. But that’s still a big difference: a group can be selected one-fifth less often than the reference group and the filter still counts as compliant. If algorithms hew close to that limit, as they tend to do in this age of “optimisation,” the hiring process will remain very racist.
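To make the arithmetic concrete, here is a minimal sketch of what a four-fifths-rule check looks like in code. The group names and counts are hypothetical, chosen to match the 5% and 4% figures above; this illustrates the rule’s logic, not any vendor’s actual audit.

```python
# A minimal sketch of a four-fifths-rule ("adverse impact") check.
# The groups and counts below are illustrative assumptions, not real data.
from fractions import Fraction

# (candidates selected, candidates screened) per group -- hypothetical numbers
funnel = {"White": (50, 1000), "Black": (40, 1000)}

# Exact selection rates, avoiding floating-point rounding right at the 4/5 boundary
rates = {group: Fraction(selected, total) for group, (selected, total) in funnel.items()}
reference = max(rates.values())  # the highest-selected group sets the bar

for group, rate in rates.items():
    ratio = rate / reference
    verdict = "passes" if ratio >= Fraction(4, 5) else "adverse impact"
    print(f"{group}: selection rate {float(rate):.0%}, ratio {float(ratio):.2f} -> {verdict}")

# Black candidates pass the rule with a ratio of exactly 0.80,
# even though they are selected 20% less often than White candidates.
```

As the output shows, a filter can sit exactly at the threshold and be deemed acceptable, which is precisely the problem with treating the rule as a standard.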
There’s a better way. It entails getting more of the people affected by hiring algorithms involved in setting standards — as, for example, the Worker Rights Consortium has done for sweatshops. Founded in 2000 by students outraged by working conditions in international supply chains, it creates binding human rights standards for companies to follow, focusing on the poorest workers in the poorest nations.
Imagine a similar organisation focused on algorithms, acting on behalf of workers around the world. It could foster a discussion between the employers that use the algorithms and advocates for the workers who tend to be harmed by bias. Together, they could produce binding standards that would be much fairer, and much more likely to obviate the need for further regulatory scrutiny, than the companies could come up with on their own.

—Bloomberg
