1,000+ Tech Pros Publish Letter Bashing Crime-Predicting Facial Recognition AI

facial recognition tech (NICOLAS ASFOURI/Getty)

More than 1,000 tech professionals have published a public letter criticizing a forthcoming paper describing a facial recognition system that, its authors claim, can predict whether someone is likely to commit a crime.

The New York Post reports that earlier this week, a group of more than 1,000 tech professionals, including many working in artificial intelligence and machine learning, published an open letter criticizing an upcoming paper detailing a facial recognition system that purportedly can predict whether an individual is likely to be a criminal.

The tech professionals have named themselves the Coalition for Critical Technology and signed a letter agreeing that criminality cannot be predicted without prejudice, and that although the research claims the facial recognition system can make predictions with “80% accuracy and with no racial bias,” such a claim cannot possibly be accurate. The tech workers compared the system to debunked “race science.”

The paper was written by two professors and a graduate student at Harrisburg University in Pennsylvania and was set to be published by Springer Nature in an upcoming collection. However, a May press release from the university appears to have been deleted, and Springer Nature has since tweeted that it will not publish the paper.

Springer Nature told the Post in a statement:

We acknowledge the concern regarding this paper and would like to clarify at no time was this accepted for publication. It was submitted to a forthcoming conference for which Springer will publish the proceedings in the book series Transactions on Computational Science and Computational Intelligence and went through a thorough peer review process. The series editor’s decision to reject the final paper was made on Tuesday 16th June and was officially communicated to the authors on Monday 22nd June.

Harrisburg Ph.D. student Jonathan W. Korn, who is a former New York police officer, said in the since-deleted press release from the university:

Crime is one of the most prominent issues in modern society. The development of machines that are capable of performing cognitive tasks, such as identifying the criminality of [a] person from their facial image, will enable a significant advantage for law enforcement agencies and other intelligence agencies to prevent crime from occurring in their designated areas.

This is not the first time facial recognition software has been accused of racial bias. Breitbart News reported last year that, following a study by the Massachusetts Institute of Technology (MIT) claiming that Amazon’s facial recognition software, known as Rekognition, is biased by both race and gender, the e-commerce giant defended its software, calling the study “misleading.” The study found that while no facial recognition tool was 100 percent accurate, Amazon’s performed the worst at recognizing women with darker skin compared to software from companies such as Microsoft and IBM.

The MIT study found that Amazon’s software had an error rate of approximately 31 percent when identifying the gender of images of women with dark skin, while rival software developed by Kairos had an error rate of 22.5 percent and IBM’s software had a rate of just 17 percent. However, software from Amazon, Microsoft, and Kairos correctly identified images of light-skinned men 100 percent of the time.
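For context on how figures like these are produced: audits of this kind measure a classifier’s error rate separately for each demographic subgroup and then compare the results. The short Python sketch below illustrates that arithmetic only; the subgroup labels and sample records in it are hypothetical placeholders invented for illustration, not data from the MIT study.

from collections import defaultdict

def error_rates_by_group(records):
    # records: iterable of (subgroup, true_label, predicted_label) tuples
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, truth, predicted in records:
        totals[group] += 1
        if predicted != truth:
            errors[group] += 1
    # per-subgroup error rate = misclassified / total for that subgroup
    return {group: errors[group] / totals[group] for group in totals}

# Hypothetical sample records (subgroup, true gender, predicted gender);
# placeholders for illustration only, not the study's data.
sample = [
    ("darker-skinned women", "female", "male"),
    ("darker-skinned women", "female", "female"),
    ("lighter-skinned men", "male", "male"),
    ("lighter-skinned men", "male", "male"),
]

for group, rate in sorted(error_rates_by_group(sample).items()):
    print(f"{group}: {rate:.1%} error rate")

On this made-up sample the sketch reports a 50 percent error rate for one subgroup and 0 percent for the other; the gap between the study’s roughly 31 percent and 100 percent figures describes the same kind of per-group disparity at scale.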

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship. Follow him on Twitter @LucasNolan or contact via secure email at the address lucasnolan@protonmail.com
