New Republic: Some Algorithms Are Racist

Ian Waldie/Getty Images

The New Republic recently published an article claiming that algorithms used in artificial intelligence systems are racist.

The article, titled “Turns Out Algorithms Are Racist,” was written by Navneet Alang, who claims that, due to the information being provided to A.I. systems, some of the results they return are racist. “It turns out that artificial intelligence may be just as bigoted as human beings,” Alang states in the very first line. The citation for this claim links to a study of an artificial intelligence image-recognition system that was “taught” to identify scenes by scanning a number of images.

Professor Vicente Ordóñez noticed a pattern with the system: “It would see a picture of a kitchen and more often than not associate it with women, not men.” This seemed to be due to the image sets that the A.I. system learned from, which predominantly featured women in roles such as cooking or shopping while associating men with coaching or shooting.

Alang then notes examples of A.I. discriminating against black people, stating, “A ProPublica investigation revealed that justice systems were using AI to predict the chance of reoffending, and incorrectly marked black defendants as more likely to be future criminals.” The article goes on to explain that while many may believe that “artificial intelligence” describes a computer system that can think for itself, the term A.I. usually describes simply “machine learning,” a process by which a system learns to recognize or understand certain situations or photographs once it has been given enough reference data against which to compare new information.
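To make that mechanism concrete, the following is a minimal, purely illustrative sketch of machine learning as simple frequency counting; it is not taken from Alang’s article or the cited study, and every scene name, label, and count in it is hypothetical. The point is only that a model trained this way reproduces whatever skew exists in its reference data.

# Illustrative sketch only: a toy "model" that memorizes how often each
# label co-occurs with a scene in made-up training data skewed the same
# way as the kitchen example described above.
from collections import Counter, defaultdict

# Hypothetical training pairs of (scene, person_label).
training_data = (
    [("kitchen", "woman")] * 33 + [("kitchen", "man")] * 5 +
    [("coaching", "man")] * 30 + [("coaching", "woman")] * 4
)

# "Learning" here is just counting label frequencies per scene.
counts = defaultdict(Counter)
for scene, label in training_data:
    counts[scene][label] += 1

def predict(scene):
    """Return the label most often seen with this scene during training."""
    return counts[scene].most_common(1)[0][0]

# The model faithfully reproduces the skew in its reference data.
print(predict("kitchen"))   # -> woman
print(predict("coaching"))  # -> man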

“Obviously, our phones are not sentient little beings in our pockets. But many apps use parts of artificial intelligence to do things like recognize faces or images, react to context such as adding one’s location to messages, or understanding commands we give with our voice,” Alang states. His proposed solution for combating the inherent bias that may appear within A.I. systems is to approach A.I. from a specifically social justice-oriented perspective.

“Since machine learning and A.I. operate through collecting, filtering, and then learning from and analyzing existing data, they will replicate existing structural biases unless they are designed explicitly to account for and counteract that,” writes Alang. “To address this situation, an approach would require a specifically social justice-oriented perspective, one that considers how economics intertwine with gender, race, sexuality, and a host of other factors.” It seems that some groups may already be taking steps to combine social justice and artificial intelligence.

AI Now, a New York-based research group led by Kate Crawford and Meredith Whittaker, has worked to further understand how A.I. works, the different ways it can be implemented, and how to combat the prejudices that A.I. systems may develop. The group recently published a series of suggestions on how to improve A.I. Alang writes that among these suggestions were “dedicated resources to diversifying the range of inputs for AI systems, especially those related to marginalized groups—photos of men doing the dishes, say, or of two women getting married.”
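As a rough illustration of what “diversifying the range of inputs” could mean in practice, the hypothetical sketch below rebalances a skewed toy dataset by oversampling underrepresented scene/label pairings before training. This shows only one possible technique (simple oversampling), not AI Now’s actual methods, and the names and counts are invented, continuing the toy example above.

# Illustrative sketch only: rebalancing a skewed toy dataset by repeating
# underrepresented (scene, label) pairings until each label appears as
# often as the majority label for that scene. All data is hypothetical.
from collections import Counter, defaultdict

skewed_data = [("kitchen", "woman")] * 33 + [("kitchen", "man")] * 5

def balance(pairs):
    """Duplicate minority pairings so every label appears equally often per scene."""
    by_scene = defaultdict(Counter)
    for scene, label in pairs:
        by_scene[scene][label] += 1
    balanced = []
    for scene, label_counts in by_scene.items():
        target = max(label_counts.values())
        for label in label_counts:
            # Emit each pairing "target" times, lifting minority labels
            # up to the majority count.
            balanced.extend([(scene, label)] * target)
    return balanced

balanced_data = balance(skewed_data)
print(Counter(balanced_data))  # kitchen/woman and kitchen/man now appear equally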

Alang also states that hiring a more diverse group of A.I. engineers may help to combat A.I. prejudices: “After all, if more of the developers were black, or women, then the programs might not reflect such a white, male worldview.”

Read the full article in the New Republic.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship. Follow him on Twitter @LucasNolan_ or email him at lnolan@breitbart.com.
