Artificial Intelligence’s Emerging Threat to Human Rights

February 2, 2018

In the wake of the consequential 2016 election, during which artificial intelligence was potentially used to influence voters, deeper questions about AI present themselves. One of them: can AI threaten human rights? The answer is a resounding yes, because it already has.
To be clear, this is not a suggestion that robots are, on their own, making racist or sexist decisions; the fault still lies entirely with humans. In layman’s terms, AI technologies (which include everything from computers and software to Sophia the Robot) still come down to basic input-output systems, in which the input is a large amount of pre-existing information. Whatever bias exists in the input data will, by default, translate to the output.

Thus, our software is effectively programmed to “replicate the injustices of the past” unless something is done to counteract that effect.
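To make that concrete, here is a minimal, purely illustrative Python sketch (the activities, the group labels, and the counting-based “model” are all hypothetical, not any particular real system) of how software that learns by finding patterns in its training data simply echoes whatever skew those data contain:

```python
from collections import Counter

# Hypothetical training data: each record pairs an activity with the group
# it was labeled with in some pre-existing collection. The skew below is
# invented purely for illustration.
training_data = [
    ("cooking", "group A"), ("cooking", "group A"), ("cooking", "group B"),
    ("shooting", "group B"), ("shooting", "group B"), ("shooting", "group A"),
]

# "Training" here is nothing more than counting which group most often
# appears alongside each activity in the input data.
counts = {}
for activity, group in training_data:
    counts.setdefault(activity, Counter())[group] += 1

def predict(activity):
    """Return the group most often associated with this activity in training."""
    return counts[activity].most_common(1)[0][0]

print(predict("cooking"))   # "group A" -- the skew in the input, echoed back
print(predict("shooting"))  # "group B"
```

Nothing in that code is malicious; it simply has no source of information other than the patterns it was handed.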

In one case, image-recognition software was producing sexist results. The software was trained on research image collections and began associating activities such as cooking and cleaning with women, while associating activities like shooting and playing sports with men. When the researchers investigated further, they discovered that the software had found a pattern of bias that already existed in the image collections, and that the bias was amplified as the software trained itself on the photos.
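As a rough sketch of how that amplification can happen (the two-to-one skew and the always-pick-the-most-common-label model below are assumptions for illustration, not the researchers’ actual data or system), consider a model that labels every new image with whichever label it saw most often during training:

```python
from collections import Counter

# Hypothetical label counts for images of one activity (say, cooking):
# two thirds of the training images are labeled "woman", one third "man".
training_labels = ["woman"] * 66 + ["man"] * 34

# A model that always predicts the single most common training label
# turns a 66% skew in the data into a 100% skew in its output.
majority_label = Counter(training_labels).most_common(1)[0][0]
predictions = [majority_label for _ in range(100)]

print(f"share labeled 'woman' in training data: "
      f"{training_labels.count('woman') / len(training_labels):.0%}")  # 66%
print(f"share predicted 'woman' by the model:   "
      f"{predictions.count('woman') / len(predictions):.0%}")          # 100%
```

Real image-recognition models are far more complex, but the dynamic is similar: when uncertain, the model leans toward whichever pattern dominates its training data, so a modest skew in the inputs can become a much stronger skew in the outputs.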
While software used only by researchers may not seem to pose a human rights issue, consider an incident Google had with a similar image-recognition system: it tagged pictures of black people as gorillas. In another case, a program meant to help police departments find crime hotspots was shown to steer over-policing toward predominantly black and brown neighborhoods, because its input consisted of previous crime reports, even when those reports were themselves the product of over-policing. In yet another example, LinkedIn’s search function has shown a bias toward male names.

In each of these examples, the AI technology learned its behavior from some of the information it was given, then reproduced that behavior across all of its results. Therein lies the danger: even when only some of the inputs contain dangerous biases and prejudices, as soon as the machine learns them, those biases and prejudices become part of the output.
This is especially troubling when the technology was put in place precisely to eliminate bias. Automated evaluation is likely to be most damaging to the most vulnerable members of a society, who are also the most likely to be evaluated by an automated system, especially as these technologies are used more and more within the justice system.

But despite the clear potential for significant future problems, it’s important not to get caught up solely in what could go wrong. With the problem identified, at least to some degree, researchers and developers can focus not only on ways to remediate the technology, but also on ways to learn from it and put it to use. These systems, by their very nature, can help identify where bias and inequality exist in different sectors of society. The question then becomes: once the problem is known, what does society do to fix it?