The House Committee on Oversight and Reform held a hearing last week examining the use of facial recognition technology and the need for oversight of how it is used on the public.
Facial recognition technology, which analyzes faces to identify individuals, is increasingly used by both the government and private companies. However, no federal regulations govern its commercial or government use. Government use of the technology raises potential constitutional questions, although the Supreme Court has not directly ruled on the constitutionality of using facial recognition on citizens.
Last session, the committee launched an investigation into federal law enforcement's use of facial recognition technology. The investigation resulted in a report by the Government Accountability Office (GAO) recommending that the Federal Bureau of Investigation (FBI) make numerous changes to its facial recognition database to improve data security. The committee found that 18 states have agreements to share their databases with the FBI; as a result, over half of American adults are part of a facial recognition database. The committee also found that facial recognition technology misidentifies women and minorities at a much higher rate than white males.
At the hearing, committee members from both parties voiced concerns about the use of facial recognition by the government. While no course of action was decided upon by the committee, many of the experts at the hearing called for a moratorium on government use of facial recognition until a bill is passed to regulate the technology. The City of San Francisco recently banned the use of facial recognition by police and government agencies.
“Though the use of facial recognition and analysis systems is increasing, there are notable age, gender, race and phenotypic accuracy disparities that heighten the disparate impact risks of using these systems and other face-based tools in sensitive domains such as law enforcement, housing, and employment,” Joy Buolamwini, founder, Algorithmic Justice League, said in her testimony. “Regardless of accuracy, face-based tools can be abused in the hands of authoritarian governments, unfettered advertisers, or personal adversaries; and, as it stands, peer-reviewed research studies and real-world failure cases remind us that the technology is susceptible to consequential bias and misuse.”
Clare Garvie, senior associate at the Center on Privacy & Technology at Georgetown Law, said that, in addition to San Francisco, numerous communities across the country are considering bans on government use of facial recognition. She said the federal government should do the same until regulations are in place.
“In light of the problems outlined above, federal, state, and local governments should enact moratoria on the use of the technology for law enforcement purposes. These moratoria will offer some jurisdictions the opportunity to ban the technology altogether; these jurisdictions will be amply justified in their actions,” Garvie said. “For others, I recommend a combination of targeted bans, strict court oversight and regulation, transparency and public reporting, and provisions to publicly test the accuracy and bias of algorithms used for law enforcement. A few years ago, I thought that regulation of this technology would be enough to address the risks it raised. Today, in light of what I have learned about how powerful, pervasive, and susceptible to abuse face recognition is, I think we need to hit the pause button.”
Those sentiments were echoed by Rep. Jim Jordan (R-OH), ranking member on the committee, who called for a “timeout” on the use of the technology.