February 7, 2020
By Jamie Del Grosso
Edenic Group Partner Jamie Del Grosso takes a look at the outcry to stop authorities using AI facial recognition systems in law enforcement. Are the algorithms still too much in their infancy to be put to practical, real-world use?
Back in 2019, a US Government inquiry into the use of AI-powered facial recognition in criminal cases found that the technology exhibited racial bias, misidentifying Asian and African-American faces far more often than Caucasian ones.
While firms working in facial recognition technology have never claimed that it is 100% accurate at identifying individuals, research has found that it suffers from a significantly higher chance of misidentifying darker skin tones. In one prominent case in the States, a man was taken to court after being matched to CCTV footage of a crime through facial recognition tech, only for it to be revealed later that he not only had no involvement in the incident but wasn't even in the area at the time.
So, why has AI facial recognition tech found itself branded as a "racist tech" by some critics? The cause is still under debate: some attribute the bias to the technology's inability to recognise facial features on darker skin tones, while others blame the engineers themselves. In an industry dominated by white males, the algorithms make predictions and comparisons based on the data they are given during the development phase, quite possibly including the faces of the engineers themselves.
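This kind of bias is usually quantified by comparing error rates across demographic groups: if the system wrongly "matches" faces of different people far more often in one group than another, it is misidentifying that group disproportionately. A minimal, purely illustrative sketch of that measurement (the group labels and match results below are made up, not from any real system):

```python
# Hypothetical sketch: comparing false match rates across demographic
# groups for a face matcher. All data below is illustrative.

def false_match_rate(results):
    """results: list of (predicted_match, actually_same_person) pairs.

    Returns the fraction of different-person pairs that the system
    wrongly declared a match.
    """
    false_matches = sum(1 for pred, truth in results if pred and not truth)
    non_mated = sum(1 for _, truth in results if not truth)
    return false_matches / non_mated if non_mated else 0.0

# Toy evaluation data, split by (hypothetical) demographic group.
evaluation = {
    "group_a": [(True, True), (False, False), (False, False), (False, False)],
    "group_b": [(True, True), (True, False), (True, False), (False, False)],
}

for group, results in evaluation.items():
    print(f"{group}: false match rate = {false_match_rate(results):.2f}")
```

In this toy data, group_b's false match rate is far higher than group_a's, which is exactly the pattern the research described above reported for darker skin tones.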
Racial bias has been found in a high percentage of facial recognition systems, with companies like Amazon finding similar issues in their own technology.
Why does this matter? Well, to put it briefly, this is a huge issue and a potential catalyst for wider tensions and anxieties. At a time when law enforcement is under huge scrutiny over racial bias in policing and crime prevention on a global scale, reports of potential racial bias and misidentification only add fuel to the fire.
With the added concern that the tech companies developing these algorithms seem unwilling to comment on or address these worries, and with the use of facial recognition on the rise, the issue is unlikely to go away any time soon.
Some of the larger tech firms have made efforts to address the issue, with IBM releasing a huge data set which it claims can improve the accuracy of facial recognition technology, and Microsoft calling for better regulation of facial recognition and its uses. This, however, doesn't seem to have stopped governments and authorities rolling out the tech in various forms across the globe. Facial recognition comparisons are still being used as evidence in criminal cases regardless of these concerns.
The question, really, is whether this technology is ready for such purposes, and the easy answer is no. Even setting aside the issue of racial bias, no facial recognition algorithm has been found to have a high enough accuracy to justify its use in situations such as law enforcement. While it may be a handy tool for unlocking your iPhone, the technology needs much more research and development before it should be adopted for more significant purposes.
View the original article here: https://www.jamiedelgrosso.co.uk/article/is-a-i-inherently-racist