Renewed Calls for Facial Recognition Technology Regulation – When Is It Dangerous?

In 2017, Apple introduced Face ID-equipped iPhones, integrating facial recognition technology (FRT) that unlocks the device only when the user’s face is mapped accurately, adapting to gradual changes such as evolving facial hair (though Apple cautions that the false-match rate is higher for identical twins and similar-looking siblings). Apple states the technology is roughly twenty times more secure than the previous Touch ID fingerprint sensor, with only a one-in-a-million chance that a random person could unlock your phone with their face, versus one in 50,000 for Touch ID. Since 2017, Face ID has become standard. FRT has evolved and grown in popularity, and with it concerns over its future applications. Fears of “modern mass surveillance” are mounting, based on the ability of facial recognition devices to identify individuals at a distance, correlate them with existing databases, and ultimately further discrimination.

Recently, citizens of Hong Kong wore masks and toppled ‘smart lampposts’ they believed to be facial recognition towers, fearing repercussions for protesting the government. China’s growing investment in facial recognition represents the genesis of a real ‘Big Brother’ surveillance state. While Western nations like the United States like to distinguish themselves from communist regimes, U.S. investment in FRT is also increasing. To some extent we elect to log onto social media accounts and use the internet, voluntarily offering up our data; FRT, by contrast, could track us more extensively and without our knowledge or consent, raising ethical concerns. And while Apple certifies that the facial profiles used to unlock iPhones never leave the phone and that third parties cannot gain access to the data, this current privacy policy does not assuage concerns about future applications of the technology.

The American Bar Association has identified the FBI’s interest in FRT, as well as its potential integration into police systems, and has highlighted legal concerns over unreasonable search and seizure and abridgment of free speech. New Hampshire, Oregon, and California have banned the use of the technology in police body cameras. The federal government has begun to answer the call of citizens and tech companies for regulation, recently holding a hearing that revealed bipartisan support for developing legislation. Legislation proposed in the past year has targeted the application of FRT in public housing as well as in law enforcement and government surveillance. The concerns over FRT appear to outweigh even the Trump administration’s deregulatory stance on artificial intelligence.

Digital rights group “Fight for the Future” launched a petition labeling FRT as “biased, invasive and violat[ive] [of] basic rights” and encourages Congress to pass legislation banning the government from using the technology. In a recent protest, three of the group’s activists donned hazmat suits, strapped iPhones running FRT to their heads, and roamed the streets of Washington, D.C. and the halls of Congress. They scanned nearly 14,000 faces; the correct identification of Representative Mark DeSaulnier of California, combined with several incorrect identifications of “journalists, lobbyists, and even a celebrity – the singer Roy Orbison, who died in 1988 – … highlight[ed] one of the main problems with facial recognition: Sometimes, the tech gets it wrong.” The group emphasized the dangers of inaccurate identification and violations of privacy and other basic human rights, and noted the lack of regulation requiring companies to delete the data they collect or report how they use it.

There are also salient concerns about FRT’s inaccuracy and racial bias. While the technology identifies white men accurately 99% of the time, the error rate for darker-skinned women rises to as much as 35%. As with all artificial intelligence, the algorithms cannot overcome the biases of the people who develop them and the data they are trained on. This is particularly troubling when FRT is integrated into an already biased political and social system. “Researchers at the Georgetown Law School estimated that 117 million American adults are in face recognition networks used by law enforcement – and that African Americans were most likely to be singled out, because they were disproportionately represented in mug-shot databases.”

Digital activist, scientist, and acclaimed scholar Joy Buolamwini is working to combat what she has termed “the coded gaze,” an “algorithmic bias . . . [that] leads to exclusionary experiences and discriminatory practices.” During her studies, Buolamwini was not recognized by the AI technology she worked with because of her dark skin tone; she had to hold a plain white plastic mask over her face to be detected. Some algorithmic biases like these can be mitigated by fixing the data sets and algorithms underlying the device’s artificial intelligence: her face went unrecognized in part because the training data sets consisted almost exclusively of white faces. Buolamwini studied the inaccuracy of FRT extensively and ultimately published a paper on the phenomenon, discussing strategies for improving the accuracy of the technology.
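The kind of audit Buolamwini’s research popularized boils down to a simple idea: measure a system’s error rate separately for each demographic group rather than in aggregate, since a model can score well overall while failing badly on an under-represented group. Below is a minimal Python sketch of that idea; the function name `predict_gender`, the sample records, and the group labels are hypothetical placeholders for illustration, not her actual benchmark or any vendor’s API.

```python
# Sketch: disaggregated accuracy audit for a face classifier.
# All names (predict_gender, SAMPLES) and values are hypothetical,
# invented for illustration -- not real benchmark data.
from collections import defaultdict

# Each record: (image_id, demographic_group, true_label)
SAMPLES = [
    ("img_001", "darker_female", "female"),
    ("img_002", "darker_female", "female"),
    ("img_003", "lighter_male", "male"),
    ("img_004", "lighter_male", "male"),
]

def predict_gender(image_id: str) -> str:
    """Stand-in for a real model call; returns a dummy prediction."""
    return "male"  # a degenerate model, to make the bias visible

def error_rates_by_group(samples):
    """Report the misclassification rate separately for each group.

    Aggregate accuracy can hide bias: here the model is 50% accurate
    overall, but 100% wrong for one group and perfect for the other.
    """
    totals, errors = defaultdict(int), defaultdict(int)
    for image_id, group, true_label in samples:
        totals[group] += 1
        if predict_gender(image_id) != true_label:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

print(error_rates_by_group(SAMPLES))
# {'darker_female': 1.0, 'lighter_male': 0.0}
```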

There is a tenuous balance between disparate policy goals: improving the technology’s accuracy by expanding its data sets to represent a more diverse range of human faces will also make FRT a more powerful weapon in the wrong hands. While virtually every Democratic presidential candidate has announced general plans to regulate facial recognition technology, only Sanders and Steyer have clearly stated an intent to ban the technology from policing. Congress will need to answer the calls for regulation or temporary bans on the technology swiftly to prevent misuse and violations of constitutional protections.

Nicole Angelica