On March 14th, Reuters reported that Ukraine’s Defense Ministry had begun using facial recognition software in its war effort against Russia, with other parts of the government expected to follow in the coming days. Facial recognition firm Clearview AI offered Ukraine’s government free use of its service, claiming that it will allow Ukraine to effectively screen people of interest at checkpoints by drawing on more than two billion images gathered from the Russian social media service VKontakte. The service may also allow Ukraine to more easily identify the dead, although a report by the U.S. Department of Energy found that facial recognition may be less effective for this use.
Clearview AI was not designed for warfare. Rather, it was created for personal and law enforcement uses. The service allows users to upload a photo of an individual’s face, which it matches against the more than ten billion images in its database, linking to where on the internet those photos appear. Primarily marketed as a law enforcement tool, Clearview asserts that its “revolutionary investigative platform enables quicker identifications and apprehensions to help solve and prevent crimes, helping to make our communities safer.” Its marketing has been effective: more than six hundred law enforcement agencies have started using Clearview in the past year, including the FBI and DHS.
The criticisms are not hard to see. Eric Goldman, co-director of the High Tech Law Institute, argues: “Imagine a rogue law enforcement officer who wants to stalk potential romantic partners, or a foreign government using this to dig up secrets about people to blackmail them or throw them in jail.” More generally, the software is often error-prone, and civil rights groups argue that it misidentifies women and minorities at higher rates than white men. Some have argued for federal legislation to establish standards and guidelines for law enforcement agencies, while others contend that government regulation should be combined with a commitment to social responsibility by developers.
However, the upside is also clear: “In February, the Indiana State Police started experimenting with Clearview. They solved a case within 20 minutes of using the app. Two men had gotten into a fight in a park, and it ended when one shot the other in the stomach. A bystander recorded the crime on a phone, so the police had a still of the gunman’s face to run through Clearview’s app. They immediately got a match.” While some criticize the effectiveness or morality of facial recognition, Clearview and similar services face more direct and burdensome legal challenges on several fronts. First, social media companies have demanded that Clearview stop taking their data. Second, lawsuits in the United States accuse Clearview of violating privacy rights by taking images from the internet. And third, for similar reasons, countries such as the United Kingdom and Australia have found Clearview’s practices to violate their privacy laws.
Major social media companies, including Twitter, Google, YouTube, LinkedIn, and Venmo, have sent cease-and-desist letters to Clearview in an attempt to stop it from scraping photos and data from their platforms. Moreover, Facebook shut down its own facial recognition system, citing societal concerns.
In the United States, four lawmakers sent letters to several federal agencies calling for an end to their use of Clearview: “Use of increasingly powerful technologies like Clearview AI’s have the concerning potential to violate Americans’ privacy rights and exacerbate existing injustices. Therefore, as the authors of the Facial Recognition and Biometric Technology Moratorium Act (S. 2052/H.R. 3907) — which would halt a federal agency or official from using these technologies — we urge you to stop use of facial recognition tools, including Clearview AI’s products.” Meanwhile, two immigrant-rights groups in California are suing Clearview for violating privacy laws by providing law enforcement with facial recognition services in cities that have banned their use. In response, Clearview asserts that its service is fully protected by the First Amendment. Other actions have been filed by the ACLU, Twitter, and the New Jersey Attorney General.
In November, the United Kingdom’s Information Commissioner’s Office announced a provisional £17 million fine against Clearview. Its preliminary review found that the service violated several data protection laws, such as failing to have a lawful reason for collecting the information and failing to inform people about what is happening to their data. Australia likewise deemed the service to breach its privacy laws, and Canada’s privacy commissioner declared Clearview’s service illegal and directed the firm to remove Canadian facial images from its database. Even amid this international pushback, Clearview announced that it raised $30 million from investors in 2021 and is on track to win a U.S. patent for its technology.
With such international controversy over facial recognition services, Clearview’s offer to Ukraine appears calculated to align its disputed image with pro-democracy forces. As Ukraine implements the service, will Clearview’s attempt to rehabilitate its contentious reputation prove effective?
Pearson Cost attended Gettysburg College and majored in Political Science. In law school, he is the President of UNC’s American Constitution Society, a Staff Member for JOLT’s Volume 23, and an Articles Editor for JOLT’s Volume 24. He is interested in a career at the intersection of law and public policy. See the author’s previous blog post here.