“I Knew You Were Trouble”: The Privacy Challenges of Artificial Intelligence and Facial Recognition Technology

In 2019, a world without artificial intelligence (AI) seems almost unimaginable. Self-driving cars, smart bots on Jeopardy!, and automated drones are becoming integrated into the everyday lives of individuals worldwide. In many ways, AI has blended into society almost seamlessly, while other AI technologies have surprised and even shocked citizens and consumers.

2018, for example, marked a unique year for Taylor Swift on the AI front. In July 2018, CableTV.com developed algorithms to create a “new” Taylor Swift-like song. Though not created by the artist herself, the song resembled the style, tone, and lyrics of a classic Swift song. While not perfect, the AI-developed song represented the potential future of the music industry and other artistic fields. As these technologies grow more precise, as they undoubtedly will, numerous legal questions surrounding the mere creation of such works remain unresolved.

But Taylor-Swift-related AI did not stop there last year. Though AI may burden celebrities and artists attempting to protect their creative expression, it is also being sought out by celebrities for their own protection. In fact, in December 2018, a report surfaced that Taylor Swift herself employs AI facial recognition technology for security purposes.

At one of Swift’s concerts, a kiosk showing the singer’s rehearsal clips drew fans in to look into a hidden facial recognition camera. The camera captured photos of the individuals who stopped by the kiosk and transferred them to a central security operation. These photos were then cross-referenced with photos of known Taylor Swift stalkers.

Taylor Swift, however, is not the only one affected by the uncertainty of AI. Numerous companies, government entities, and individuals currently face, or will soon face, challenges with AI. Privacy issues in particular will have a substantial impact on the integration of AI into society, and these challenges are especially prevalent in facial recognition AI.

While providing some benefits, including increased safety measures, AI facial recognition technology will inevitably be at odds with privacy protections. As demonstrated in the Taylor Swift example, facial recognition technology captures a person’s image, stores that image, and compares the image to any number of other stored images.
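At a technical level, that capture-store-compare loop is usually implemented by converting each face image into a numerical embedding and measuring how close two embeddings are. The Python sketch below is a minimal, hypothetical illustration: `embed_face` stands in for a trained face-embedding model rather than any vendor's actual API, and the watchlist and threshold values are assumptions chosen for demonstration only.

```python
import numpy as np

def embed_face(image_pixels: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a face-embedding model.

    A real system would run a trained neural network here; this stub
    just produces a fixed-length unit vector so the comparison logic
    below can be demonstrated end to end.
    """
    rng = np.random.default_rng(int(image_pixels.sum()) % (2**32))
    vec = rng.normal(size=128)
    return vec / np.linalg.norm(vec)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two unit-length embeddings (1.0 = identical)."""
    return float(np.dot(a, b))

def screen_against_watchlist(captured: np.ndarray,
                             watchlist: list[np.ndarray],
                             threshold: float = 0.8) -> bool:
    """Return True if the captured face matches anyone on the watchlist.

    The threshold is an assumed value: lower thresholds flag more
    people (more false positives), higher ones miss more true matches.
    """
    return any(cosine_similarity(captured, known) >= threshold
               for known in watchlist)

# Demonstration with synthetic "images" (random pixel arrays).
rng = np.random.default_rng(0)
watchlist_images = [rng.integers(0, 255, size=(64, 64)) for _ in range(3)]
watchlist = [embed_face(img) for img in watchlist_images]

visitor_image = rng.integers(0, 255, size=(64, 64))
flagged = screen_against_watchlist(embed_face(visitor_image), watchlist)
print("Visitor flagged:", flagged)
```

Note that even this toy version has to retain the watchlist embeddings and process every captured photo, which is precisely where the privacy questions discussed below begin.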

There is no doubt that this technology could provide needed security to individuals and society. Through real-time recognition, people of interest could be immediately identified in a crowd, with beneficial domestic and international security impacts. Similarly, there may be economic benefits if retailers use the technology to identify the most popular items or areas of a store, enabling innovation in marketing personalization.

These potential benefits, however, are offset by the immense risk to privacy. In July 2018, Amazon’s commercially available facial recognition software was used to compare photographs of members of Congress with those of arrestees. The algorithm was only about 95% accurate, misidentifying 28 members of Congress as having been among those arrested. In response, Amazon noted that the technology should be used in conjunction with human judgment. Amazon stressed the importance of its technology to humanitarian causes such as combating human trafficking and helping to find lost children. Despite these potential benefits, inaccuracy in AI technologies will remain an issue for many applications. AI algorithms are only as good as the data input into them.
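That result also underscores how much rides on the match threshold a system uses. The short simulation below is a sketch with purely synthetic embeddings, not Amazon's software or any real confidence score: it simply shows that as a similarity threshold is loosened, the number of spurious “matches” between entirely unrelated faces climbs quickly. The sample sizes, dimensions, and thresholds are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(42)

def random_embeddings(n: int, dim: int = 32) -> np.ndarray:
    """Rows are unit-length vectors standing in for face embeddings.

    The dimension is kept deliberately small so that chance similarities
    between unrelated vectors are visible in a short simulation.
    """
    v = rng.normal(size=(n, dim))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

# Two groups of unrelated synthetic "faces": probe photos and a database.
probes = random_embeddings(500)
database = random_embeddings(500)

# Cosine similarity of every probe against every database entry.
similarities = probes @ database.T

for threshold in (0.8, 0.7, 0.6, 0.5):
    # A probe is falsely "matched" if any unrelated entry clears the bar.
    false_matches = int((similarities.max(axis=1) >= threshold).sum())
    print(f"threshold {threshold:.1f}: {false_matches} of {len(probes)} "
          "unrelated probes falsely matched")
```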

With facial recognition technologies, simple activities in an individual’s everyday life could be compromised. Pictures posted to Facebook, Twitter, and Instagram could become part of tracked databases. Eleven states have explicit general privacy considerations in their state constitutions. Illinois and Texas require the consent of an individual for facial recognition data to be captured, 19 states require encryption of the data gained, and 20 states have enacted laws to protect the biometric data of minors. The City of San Francisco is considering a bill banning the governmental use of facial recognition software, the first of its kind in the United States.

At the same time, AI facial recognition is protecting privacy by serving as a substitute for smartphone passwords and as verification for visitors to schools. Facial recognition technologies ultimately present two conflicting realities: one in which privacy is protected and one in which privacy is usurped. With current AI technology and the lack of a regulatory protocol, a world with these conflicting outcomes may spell trouble for society’s future.

Ashle M. Page, 4 February 2019