Google Claims Its Artificial Intelligence Will Not Be Used for Weapons

(Photo caption: US officials say American firms are losing sales of armed drones like the MQ-9 Reaper, seen here on an air base in Afghanistan, to Chinese "knock-offs" because of the previous US administration's policy of limiting access to the weapons systems.)

Google has promised that its artificial intelligence will not be used for weaponry, following criticism over its contract with the Pentagon.

The promise was made in an official blog post on Thursday that set out seven A.I. principles: "Be socially beneficial," "Avoid creating or reinforcing unfair bias," "Be built and tested for safety," "Be accountable to people," "Incorporate privacy design principles," "Uphold high standards of scientific excellence," and "Be made available for uses that accord with these principles."

In the section of the blog post titled, “AI applications we will not pursue,” Google listed, “Technologies that cause or are likely to cause overall harm.”

“Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints,” the company claimed, adding that it would also refrain from pursuing, “Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people,” “Technologies that gather or use information for surveillance violating internationally accepted norms,” and, “Technologies whose purpose contravenes widely accepted principles of international law and human rights.”

“We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas,” Google clarified. “These include cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue. These collaborations are important and we’ll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe.”

Google also claimed it would “avoid” creating or reinforcing “unfair biases,” including political and religious biases, in its A.I. algorithms.

“AI algorithms and datasets can reflect, reinforce, or reduce unfair biases. We recognize that distinguishing fair from unfair biases is not always simple, and differs across cultures and societies,” the company stated. “We will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.”

Charlie Nash is a reporter for Breitbart Tech. You can follow him on Twitter @MrNashington, or like his page at Facebook.