Blog: Unchecked Facial Recognition Technology Can Harm Vulnerable Communities
Advancements in artificial intelligence (AI) and machine learning have greatly accelerated the development and deployment of facial recognition technology. While this exciting technology offers tremendous potential, without adequate rules and safeguards in place, the probability of facial recognition being misused and abused by law enforcement warrants serious attention from policymakers. Once the technology is widely released, it will be even more difficult to address privacy concerns stemming from data breaches and other misuse of the information collected.
Unforeseen applications for facial recognition will undoubtedly transform the consumer experience and could even strengthen security and privacy efforts; however, as with any new technology, there are still bugs to be worked out. The problem is that this technology is being rolled out and adopted at such a rapid pace that these technical flaws and shortcomings, rather than being corrected, are simply being glossed over or ignored altogether. This approach has particularly devastating consequences for Latinx, African-American, and other communities of color. A New York school district has even had to delay implementation due to civil rights concerns.
There is a simple fix that would let us enjoy the benefits of this and other revolutionary technologies without compromising our privacy: comprehensive federal privacy legislation. It is our hope that as Congress works toward a national solution to protect consumers’ privacy, lawmakers also develop a fair framework for how facial recognition technology can and cannot be used by governments, law enforcement, and private businesses. Customs and Border Protection is already dealing with the fallout of a stolen database with pictures of international travelers. We simply cannot wait for more violations like these to occur before acting to protect our civil rights.
Recently, the U.S. House Oversight Committee held a hearing on this critical issue and heard important testimony about the need to strike an appropriate balance between safety, privacy, and other civil liberties considerations. Witness testimony highlighted the potential for facial recognition technology to exacerbate existing inequities in policing and in the criminal justice system. One of the witnesses representing Georgetown Law authored a recent report that details instances of law enforcement over-reliance on flawed data sets for investigations and instances of misuse.
Facial recognition technology has been proven to be particularly ineffective in correctly identifying women and people of color. That is just one of the reasons why a group of AI experts, as well as the American Civil Liberties Union, have both written open letters to Amazon urging the private company to cease selling its facial recognition software, Rekognition, to government agencies and police departments.
Based on a report cited by the group of AI experts, Amazon’s facial recognition technology had an error rate of roughly 31 percent when identifying the gender of women of color, versus a 0 percent error rate for white men. The troubling reality is that facial recognition is far more likely to misidentify a woman or person of color, or both, which could lead to an increase in wrongful arrests that target and disproportionately affect our communities. As in other areas where technology, privacy, and civil liberties interact, there is no consensus among technology companies over how facial recognition technology should be governed. Contrary to Amazon’s approach, Microsoft recently declined a contract to provide the Los Angeles Police Department facial recognition technology, citing its potential disparate impact on women and minorities.
Yet none of these technical limitations and biases seem to be slowing down the adoption and application of facial recognition technology by law enforcement. During the recent Congressional hearing, one witness noted that the technology is accessible to at least a quarter of U.S. law enforcement agencies. Making matters worse, many police departments seem to be turning facial recognition into less of a science and more of an art, with enormous implications for law enforcement efforts and minority communities.
Because there are no clear laws or regulations governing how law enforcement can use facial recognition technology, local police departments have gotten creative in their approaches. In some cases, when an available facial image may be incomplete or obscured, police have resorted to using composite images to substitute some missing details — for example, by using an image of someone else’s mouth if the mouth in the photo or video footage is obscured, or by replacing closed eyes with a set of open eyes found through a simple Google search.
In the absence of uniform guidelines and transparency about how facial recognition data is used, this public safety tool may result in unwarranted surveillance and discrimination. Several state and local governments have already recognized the potential for abuse. In May, San Francisco became the first U.S. city to ban law enforcement agencies from using facial recognition technology. The Massachusetts legislature is considering a similar statewide prohibition on facial recognition coupled with protections for the collection and use of other forms of biometric information.
Congress has a clear role to play to protect consumers and to develop safeguards for how their data — including what can be gathered using facial recognition software — is used. As most facial recognition technology used by local law enforcement is purchased with the use of federal grants, Congress has the authority and responsibility to enforce fair standards for its use. When it comes to protecting consumer privacy and civil rights, there is simply too much at stake to permit the reckless, unregulated use of facial recognition technology. Congress must step in to provide the national, comprehensive privacy solution that all communities deserve.