The American Bar Association has identified the FBI’s interest in FRT, as well as its potential integration into police systems, and has highlighted legal concerns over unreasonable search and seizure and the abridgment of free speech. New Hampshire, Oregon, and California have banned the use of the technology by police. The federal government has begun to answer the calls of citizens and tech companies for regulation, recently holding a hearing that revealed bipartisan support for developing legislation. Bills proposed in the past year have targeted the application of FRT in public housing as well as in law enforcement and government surveillance. The concerns over FRT appear to outweigh even the Trump administration’s deregulatory stance on artificial intelligence.
Digital rights group Fight for the Future generated a petition labeling FRT as “biased, invasive and violat[ive] [of] basic rights.” The organization encourages Congress to pass legislation banning the government from using the technology. In a recent protest, a team of three activists wearing hazmat suits, with iPhones running FRT strapped to their heads, roamed the streets of Washington, D.C. and the halls of Congress itself. The group scanned nearly 14,000 faces. Its correct identification of Representative Mark DeSaulnier of California, combined with several incorrect identifications of “journalists, lobbyists, and even a celebrity – the singer Roy Orbison, who died in 1988 – … highlight[ed] one of the main problems with facial recognition: Sometimes, the tech gets it wrong.” The group emphasized the dangers of inaccurate identification and of violations of privacy and other basic human rights, and noted the absence of any regulation requiring companies to delete the data they collect or report how they use it.
There are also salient concerns about FRT’s inaccuracies and racial bias. While the technology identifies white men accurately 99% of the time, the error rate for darker-skinned women rises to 35%. As with all artificial intelligence technologies, the algorithms cannot overcome the biases of those who develop them. This is particularly troubling when considering integrating FRT into an already biased political and social system. “Researchers at the Georgetown Law School estimated that 117 million American adults are in face recognition networks used by law enforcement – and that African Americans were most likely to be singled out, because they were disproportionately represented in mug-shot databases.”
Digital activist, scientist, and acclaimed scholar Joy Buolamwini is working to combat what she has termed “the coded gaze,” an “algorithmic bias . . . [that] leads to exclusionary experiences and discriminatory practices.” During her studies, Buolamwini was not recognized by the AI technology she worked with because of her dark skin tone; she had to place a plain white plastic mask over her face to be identified. Some algorithmic biases like these can be mitigated by correcting the data sets and algorithms underlying the technology. Her face went unrecognized, for example, because the training data sets consisted almost exclusively of white faces. Buolamwini studied the inaccuracy of FRT extensively and ultimately published a paper on the phenomenon, discussing strategies for improving the technology’s accuracy.
There is a tenuous balance between disparate policy goals. Improving the technology’s accuracy by expanding its data sets to represent a more diverse range of human faces will ultimately make FRT a greater weapon in the wrong hands. While virtually every Democratic presidential candidate has announced general plans to regulate facial recognition technology, only Sanders and Steyer have clearly stated an intent to ban the technology from policing. Congress must answer the calls for regulation, or for temporary bans on the technology, swiftly to prevent misuse and violations of constitutional protections.