Facial-Recognition Bans Suggest Tech Is Far from Ready for Prime Time

What’s holding back facial-recognition technology? Primarily perception, of both the public and functional variety. Its inability to accurately perceive and identify subjects (and potential suspects) has led to the technology being banned in jurisdictions around the world.

From Europe to the United States

For example, in Europe the recently proposed Artificial Intelligence Act seeks to prevent the use of real-time biometric-recognition systems for law enforcement in all but a select few circumstances. In response, the European Data Protection Supervisor (EDPS) called for an even stricter approach: a moratorium on such systems in publicly accessible spaces.

Meanwhile, in the United States, Minneapolis recently became one of a growing number of cities to ban its police department from using facial-recognition software. In Massachusetts, a police-reform bill requires police to obtain a judge’s permission to use the technology. Furthermore, once permission is granted, the state police, the Federal Bureau of Investigation, or the Registry of Motor Vehicles must run the search.

“[The technology] will result in inaccuracy again and again, falsely accusing innocent people of being the person of interest. I think law enforcement generally speaking does not believe that it results in so many inaccuracies. I think that’s really the flaw in all of this,” said Ann Cavoukian, a privacy advocate and the former Information and Privacy Commissioner of the Canadian province of Ontario. “They should simply discard it. I should mention in over 20 jurisdictions in the United States, there have been bans on facial recognition now.”

The aforementioned Massachusetts bill acknowledges the technology’s capacity for misuse. For example, a Detroit man, Robert Williams, was wrongfully arrested in 2019 as a result of an incorrect match. Facial-recognition software pointed to Williams as a shoplifting suspect based on unclear surveillance footage, and a security guard who had not even been at the scene of the crime then picked Williams out of a photo lineup based on that footage. Detroit police have since admitted the technology is inaccurate an incredible 96% of the time. Williams is currently suing the Detroit Police Department.

Problems with Privacy and Identifying People of Colour

The technology’s issues have been well-documented. In the case of the Detroit incident, for instance, the solution provider, DataWorks Plus, said the software is intended as a complementary crime-solving tool rather than a standalone one. Furthermore, the technology has been shown to perform markedly worse at identifying people of colour; Williams is African-American and lives in Detroit, a predominantly Black city.

Craig Watson of the National Institute of Standards and Technology (NIST) in the U.S. helps test facial-recognition products and, as Manager of the agency’s Image Group, puts out reports on the technology. He told 6GWorld that several factors play into the technology’s problems recognising non-white subjects, likely including the data used to train it.

“We know things like image quality impact performance quality of the algorithms and image quality can vary across demographics for things depending on lighting or distance from the camera, depending on the application,” he said. “There’s certainly an indication that training data can have an impact as well. We noticed that algorithms from Asian countries tend to perform better on Asian faces than other faces… We haven’t delved too deeply into the overall impact, but I would anticipate the training data also impacts performance like that.”
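To make that point concrete, here is a minimal, synthetic Python sketch of how a demographic differential in false-match rates might be measured. Everything in it is invented for illustration, including the group names, score distributions, and decision threshold; it reflects neither NIST’s actual methodology nor any real algorithm.

```python
# Illustrative sketch (not NIST's actual methodology): measuring how a face
# matcher's false-match rate varies across demographic groups. All data here
# is synthetic; real evaluations use large labelled image corpora.
import random
from collections import defaultdict

random.seed(42)

THRESHOLD = 0.80  # hypothetical decision threshold: scores above it count as a "match"

def synthetic_impostor_score(group: str) -> float:
    """Simulate a similarity score for a non-matching (impostor) pair.

    We model the quote's point: if training data under-represents a group,
    impostor scores for that group drift higher, producing more false matches.
    """
    bias = {"group_a": 0.00, "group_b": 0.05, "group_c": 0.12}[group]
    return min(1.0, random.gauss(0.60 + bias, 0.10))

# Simulate 10,000 impostor comparisons per demographic group.
false_matches = defaultdict(int)
trials = 10_000
for group in ("group_a", "group_b", "group_c"):
    for _ in range(trials):
        if synthetic_impostor_score(group) > THRESHOLD:
            false_matches[group] += 1

for group, count in false_matches.items():
    print(f"{group}: false-match rate = {count / trials:.3%}")
```

The structural point is that the same decision threshold can yield very different false-match rates across groups whenever the underlying score distributions differ.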

Sam Trosow, an Associate Professor in the faculties of Media & Information Studies and Law at Canada’s University of Western Ontario, also spoke to 6GWorld. He called the technology discriminatory as a result of those inaccuracies. Beyond accuracy, however, he pointed to an inherent lack of privacy with the technology, using Clearview AI as an example.

The American firm gained notoriety for scouring social media for images to build its database. Its app was banned by the Toronto Police Service when the chief discovered members of the force had been using it without his knowledge, and it was eventually declared illegal in Canada altogether. Trosow nevertheless conceded there are potentially legitimate use cases for the technology.

“I think we have to distinguish between situations where you have one photograph that’s being compared to many, where you actually have a suspect, where you have a candidate for something and you’re taking that person’s information and you’re comparing it to a database,” he said. “That is much less prone to abuse and error.

“The problem is when you do this broad sweep of millions of pictures from Facebook or the public flow of cameras in places like a sports arena or a shopping mall. That’s where you’re going to get into trouble.”
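A toy Python sketch can make Trosow’s distinction concrete. Everything here is invented: the three-number “embeddings” stand in for a real model’s face vectors, and the 0.90 threshold is arbitrary. The structural point is that one-to-one verification asks a single question, while one-to-many identification asks one question per gallery entry, so every additional entry is another opportunity for a false match.

```python
# Toy sketch of the one-to-one vs one-to-many distinction (names and numbers
# are invented): verification compares a probe against a single enrolled
# template, while identification searches a whole gallery, so every extra
# gallery entry is another opportunity for a false match.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

THRESHOLD = 0.90  # hypothetical match threshold

def verify(probe, enrolled) -> bool:
    """One-to-one: does the probe match this specific person's template?"""
    return cosine_similarity(probe, enrolled) >= THRESHOLD

def identify(probe, gallery: dict):
    """One-to-many: return every gallery identity scoring above threshold."""
    return [name for name, template in gallery.items()
            if cosine_similarity(probe, template) >= THRESHOLD]

# Invented face "embeddings" standing in for a real model's output.
gallery = {
    "alice": [0.9, 0.1, 0.3],
    "bob":   [0.2, 0.8, 0.5],
    "carol": [0.88, 0.15, 0.32],  # happens to resemble alice's template
}
probe = [0.89, 0.12, 0.31]

print(verify(probe, gallery["alice"]))  # 1:1 check against one candidate
print(identify(probe, gallery))         # 1:N search may flag several people
```

In this toy gallery the probe clears the threshold against both “alice” and “carol”, the kind of multiple-hit ambiguity that makes the one-to-many case more prone to abuse and error.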

Room for Facial-Recognition Improvements

Talking to 6GWorld in a separate interview, Cavoukian also took issue with the general lack of privacy. As the developer of the Privacy by Design concept, which high-profile projects and initiatives like Europe’s General Data Protection Regulation (GDPR) have incorporated, she drew the same distinction between one-to-many and one-to-one comparisons.

“Generally speaking, it takes place without your knowledge or consent. Now, here I’m talking about one-to-many comparisons, meaning someone getting their face captured and compared to a whole bunch of people in a database,” she said, offering passport control as a contrasting, consent-based example. “When you go there, you go to the kiosk and it compares the facial image that I have given to them, that they have on record with my total consent.”

Trosow did say the accuracy of the technology is likely to improve over time. In his view, however, that is beside the point: policymakers need to regulate based on the current situation, not a hypothetical future one. Meanwhile, Watson was optimistic improvements would be made from a technological standpoint.

“The NIST [Face Recognition Vendor Test Demographics Effects Report] that we put out in Dec. 2019 has been well-read and the issues are well-known from that. It’s our understanding that developers are working on these issues,” he said. “We are planning on doing an update to the demographics report some time [in 2021], which hopefully will show improvements in those areas to the algorithms. We don’t know that for sure, but past history […] has typically shown that developers are paying close attention to their shortcomings and make improvements in those areas.”

However, he said, there are other improvements to be made. One, as logic might dictate, is the ability to differentiate between twins. Another is resistance to morphing, where a combined image of two different people incorrectly matches both subjects. Finally, he raised presentation attacks as a problem, one that can ironically be low-tech in nature.

“It’s about how you present the face to the algorithm. There’s been some stuff in the news as simple as presenting a photograph instead of the actual face. There’s an area called presentation-attack detection, which tries to focus on ‘liveness’ of the subject being captured and different things like are they wearing make-up to try and fool the system. There’s a program called Odin, which is getting close to releasing a report that has done significant work in presentation-attack detection.”
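One naive version of the “liveness” cue Watson describes can be sketched in a few lines: a printed photograph held up to a camera barely changes from frame to frame, while a live face always moves a little. The sketch below is entirely synthetic and hypothetical, including the frame data, noise magnitudes, and threshold; real presentation-attack detection, such as the work under the Odin program he mentions, is far more sophisticated.

```python
# Toy illustration of one naive "liveness" cue: a printed photograph held to
# a camera barely changes between frames, while a live face produces constant
# small motion. Frames here are synthetic grayscale arrays, not camera input,
# and all magnitudes are invented for illustration.
import random

random.seed(7)
W, H = 32, 32  # tiny synthetic frame size

def capture_frames(live: bool, n: int = 10) -> list[list[float]]:
    """Simulate n grayscale frames; live subjects jitter, photos do not."""
    base = [random.random() for _ in range(W * H)]
    frames = []
    for _ in range(n):
        jitter = 0.02 if live else 0.0005  # invented noise magnitudes
        frames.append([p + random.gauss(0, jitter) for p in base])
    return frames

def mean_frame_difference(frames: list[list[float]]) -> float:
    """Average absolute pixel change between consecutive frames."""
    total, count = 0.0, 0
    for prev, cur in zip(frames, frames[1:]):
        total += sum(abs(a - b) for a, b in zip(prev, cur))
        count += len(cur)
    return total / count

LIVENESS_THRESHOLD = 0.005  # hypothetical tuning value

for label, live in (("live face", True), ("printed photo", False)):
    score = mean_frame_difference(capture_frames(live))
    verdict = "accept" if score > LIVENESS_THRESHOLD else "reject (possible attack)"
    print(f"{label}: motion score {score:.4f} -> {verdict}")
```

In practice, systems combine many such cues, since a simple video replay would defeat this motion check on its own.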
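The morphing problem he raised can be illustrated in the same spirit. This is a naive embedding-space analogue, not how real image morphs are constructed, and the vectors and threshold are invented; the point is simply that an averaged template can score above threshold against both of its sources.

```python
# Toy illustration of the morphing problem: averaging two people's (invented)
# face embeddings yields a "morph" that scores above threshold against both
# originals, so a single enrolled morphed photo could be matched by two
# different people. Embeddings and threshold are made up for illustration.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

THRESHOLD = 0.90  # same hypothetical threshold as the earlier sketch

alice = [0.9, 0.1, 0.3]
dave = [0.7, 0.4, 0.2]
morph = [(a + d) / 2 for a, d in zip(alice, dave)]  # naive embedding-space "morph"

for name, template in (("alice", alice), ("dave", dave)):
    score = cosine_similarity(morph, template)
    print(f"morph vs {name}: {score:.3f} -> {'match' if score >= THRESHOLD else 'no match'}")
```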

Judging from the news coverage, Watson said, the lack of equitable performance across demographics is the primary problem preventing wider acceptance of the tech. Meanwhile, asked whether she could get behind facial recognition if the accuracy issue were ever solved, Cavoukian argued that privacy concerns would still hold it back, even in such a hypothetical, unlikely scenario.

“It’s very difficult for people to clear their names and for law enforcement they’re getting the wrong guys. So they don’t want that either,” she said. “So, I think that’s why the facial-recognition bans have arisen in all these places in the United States. It doesn’t work. It’s inaccurate and also there are privacy concerns, but I give you that the primary reason is because it’s simply inaccurate and causes far more harm than good.”

Feature image courtesy of Andrey_Popov (via Shutterstock). 
