Facial recognition software's role in law enforcement has always been controversial. Some cities, like San Francisco, have banned their police forces from using the technology. Opponents argue that the software is not only a violation of privacy but also racially biased and unreliable.
Now, as another man is falsely accused and arrested in Detroit, the call to eliminate facial recognition as a part of law enforcement is again loud and clear.
Most Recent Case
Robert Williams, a Detroit autoworker, has filed a complaint against the Detroit Police Department. Williams was arrested at his home in front of his young daughters and taken into custody after facial recognition software identified him based on his Michigan driver's license photo. Almost immediately after he was taken into custody, it was clear that Williams did not match the video surveillance image of the suspect, and he was released after spending the night in jail.
With the help of the American Civil Liberties Union (ACLU), Williams is now asking the Detroit Police Department for a formal apology, a final dismissal of his case, and an end to the department's use of facial recognition software. According to the Associated Press, the complaint claims that the police "unthinkingly relied on flawed and racist facial recognition technology without taking reasonable measures to verify the information being provided."
Facial Recognition Law Enforcement Statistics
Facial recognition software has proven most effective at identifying white males and least effective at identifying black females. Opponents' overwhelming concern is that the algorithms' accuracy varies with the ethnicity of the person being identified.
The National Institute of Standards and Technology (NIST) confirmed these fears with its 2019 research report, which tested 189 different algorithms on their ability to identify people of different demographics. The testing revealed that many of the algorithms were anywhere from 10 to 100 times more likely to misidentify a black or East Asian face than a Caucasian one.
Similarly, an MIT study found that some facial recognition software fared very well with light-skinned male faces, some achieving a perfect record and others an error rate as low as 0.3%. However, the same software had error rates of 21% to 35% when identifying darker-skinned female faces.
For their part, the companies that develop facial recognition software are continually working to improve the accuracy of their algorithms. Developers also contend that facial recognition is meant to be only one piece of a law enforcement investigation, but opponents of the technology say there is no way to use it without bias. The public controversy surrounding the death of George Floyd, racial profiling, and law enforcement has prompted companies like Microsoft, IBM, and Amazon to announce that they would stop selling their facial recognition software to police.
While many contend that the software is inherently biased, proponents claim that the technology is already so widely deployed, and has produced positive outcomes in so many cases, that eliminating it is not feasible. They suggest continued refinement of the algorithms and regulation of how law enforcement uses the technology.
Below is the ACLU's video making its case against the use of facial recognition software in law enforcement.