When algorithms come for our children


Consider the tragedy of a child killed by neglect and abuse. Now consider the tragedy of a child taken from parents who would not have criminally abused her. Which is worse?
Computer algorithms might soon help humans make such difficult decisions — but only if we recognize the myriad ways in which they can go wrong.
In countless cities across the nation, child welfare services make extremely tough calls every day. With limited resources and information, they must often rely on gut instinct in predicting who is most vulnerable. It’s not a perfect system, and horrifying examples of failure are common.
In an effort to make decisions more systematic and accurate, some are turning to algorithms. In Los Angeles County, the Department of Children and Family Services has recently been testing a system called Approach to Understanding Risk Assessment, or AURA. Some people have high hopes that it can help reduce human error, while others are worried about the prospect of giving an opaque and unaccountable computer program the power to separate parents from their children.
Notice that I didn’t say the system would be “more objective.” Algorithms are often presented as if they’re silver bullets, inherently fair and morally neutral thanks to their mathematical nature. But that’s not true. Rather, they should be seen as social constructs that have been formalized and automated.
The conclusions that algorithms draw depend crucially on the choice of target variable. Deaths are too rare to create discernible patterns, so modelers tend to rely on other indicators, such as neighbor complaints or hospital records of multiple broken bones, which are much more common and hence easier to use. Yet these could produce very different scores for the same family, making otherwise safe environments look dangerous.
The quality and availability of data also matter. A community where members are reluctant to report child abuse, imagining it as a stigma or as a personal matter, might look much safer than it is. By contrast, a community that is consistently monitored by the state — say, one whose inhabitants must provide information to obtain government benefits — might display a lot more “risk factors.”
AURA, for example, uses contextual information like mental health records and age of parents to predict a child’s vulnerability. It’s not hard to imagine that such factors are correlated with race and class, meaning that younger, poorer, and minority parents are more likely to get scored as higher-risk than older, richer parents, even if they’re treating their children similarly.
AURA’s proponents boast of its 76 percent accuracy in identifying the highest risk category, which means that it could be a lot better than humans at finding likely victims of abuse.
We should not dismiss that additional accuracy out of hand. At the very least, we need to compare the algorithm with the current system on its ability to avoid both false negatives (abused children who go unprotected) and false positives (families who are unfairly targeted).
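For concreteness, here is a minimal Python sketch of that comparison. The function name is mine, the counts are placeholders rather than real caseload data, and nothing here reflects AURA’s actual implementation; it only shows the two error rates any fair comparison would have to put side by side.

```python
# A minimal sketch of the comparison described above; the function name and
# all counts are illustrative, not drawn from AURA or any real caseload.

def screening_error_rates(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Error rates for any screening system, human or algorithmic.

    tp: abused children correctly flagged
    fp: safe families flagged anyway
    fn: abused children the system missed
    tn: safe families correctly left alone
    """
    return {
        # share of genuinely at-risk children the system fails to flag
        "false_negative_rate": fn / (fn + tp),
        # share of safe families the system subjects to scrutiny
        "false_positive_rate": fp / (fp + tn),
    }

# Hypothetical usage: fill in counts from the same historical caseload for
# each system, then compare them on both error types, not accuracy alone.
# algorithm_rates = screening_error_rates(tp=..., fp=..., fn=..., tn=...)
# caseworker_rates = screening_error_rates(tp=..., fp=..., fn=..., tn=...)
```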
Advocates of AURA tend to worry more about false negatives, while critics worry more about the false positives. The latter are troublingly prevalent: In a test run on historical data, AURA correctly identified 171 children at the highest risk while giving the highest score to 3,829 relatively safe families. In other words, nearly 96 percent of the cases it flagged at the highest risk level were false positives. That doesn’t mean all those families would have lost custody of their kids, but such scrutiny inevitably carries a human price, one that would probably be unevenly distributed.
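To make that figure concrete, here is the arithmetic as a short Python sketch. The only inputs are the two counts quoted from the historical test run; the variable names are mine, and the quantity computed is the share of flagged cases that were false alarms (one minus precision), which is what the "nearly 96 percent" refers to.

```python
# Arithmetic behind the figure above, using only the two counts reported
# for AURA's historical test run: 171 correctly flagged children and
# 3,829 highest-scored but relatively safe families.

true_positives = 171
false_positives = 3_829

flagged = true_positives + false_positives      # 4,000 highest-risk scores

false_alarm_share = false_positives / flagged   # ≈ 0.957
precision = true_positives / flagged            # ≈ 0.043

print(f"Cases flagged at highest risk: {flagged}")
print(f"Share of flagged cases that were false positives: {false_alarm_share:.1%}")
print(f"Share that were genuinely highest-risk: {precision:.1%}")
```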
The irony is that algorithms, typically introduced to make decisions cleaner and more consistent, end up obscuring important moral choices and embedding difficult questions of inequality, privacy and justice. As a result, they don’t spare us much of the hard work. If we as a society want to make the best use of them, we’ll have to grapple with the tough questions before applying algorithms — particularly in an area as sensitive as the lives of our children.
—Bloomberg
