Feb. 5 (UPI) — Google removed its pledge not to use artificial intelligence for weapons development and surveillance, but executives said the company will continue to assess potential AI risks through its ongoing research.
The tech innovator on Tuesday removed the section titled "Applications will not be pursued" from its Google AI principles, which had ruled out weapons, technologies likely to cause injury to people, and surveillance.
Google AI head Demis Hassabis and James Manyika, senior vice president for technology and society, said on Tuesday that since the company wrote its AI principles in 2018, the technology has become "pervasive" and widely used, and it needed to evaluate whether its benefits "substantially outweigh potential risk."
They said Google’s AI would remain “consistent with widely accepted principles of international law and human rights.”
They said AI's benefit to society, along with safety, remains foundational to the company's research.
“We are investing more than ever in both AI research and products that benefit people and society, and in AI safety and efforts to identify and address potential risk,” Hassabis and Manyika said in the Google blog.
Some, including British computer scientist Stuart Russell, have warned about autonomous weapons systems and called for global controls on them.