Thousands of tech leaders pledged this week to not develop lethal autonomous weapons and called on governments to create laws and regulations around such weapons.
Leaders in artificial intelligence technology, including OpenAI co-founder Elon Musk, Skype co-founder Jaan Tallinn, artificial intelligence researcher Stuart Russell, and three co-founders of Google’s DeepMind division, joined a large number of other tech industry professionals in pledging not to develop lethal autonomous weapons.
More than 160 organizations and 2,460 individuals from 90 countries promised not to develop or support the development of autonomous weapons, the Washington Post reports. The pledge notes that A.I. technology is expected to play a large part in future military advancement, and its signatories urge world governments to introduce laws regulating such weapons “to create a future with strong international norms.”
The pledge states: “Thousands of AI researchers agree that by removing the risk, attributability, and difficulty of taking human lives, lethal autonomous weapons could become powerful instruments of violence and oppression, especially when linked to surveillance and data systems.”
It continues: “Moreover, lethal autonomous weapons have characteristics quite different from nuclear, chemical and biological weapons, and the unilateral actions of a single group could too easily spark an arms race that the international community lacks the technical tools and global governance systems to manage.”
According to the Future of Life Institute, a Boston-based charity that organized the pledge, lethal autonomous weapons are weapon systems that can identify, target, and kill without human input. Human Rights Watch claims that such weapon systems are already in development in several countries, “particularly the United States, China, Israel, South Korea, Russia and the United Kingdom.”
Toby Walsh, a Scientia professor of artificial intelligence at the University of New South Wales in Sydney, commented on the pledge: “We cannot hand over the decision as to who lives and who dies to machines. They do not have the ethics to do so. I encourage you and your organizations to pledge to ensure that war does not become more terrible in this way.”
In an open letter published in August, 115 artificial intelligence experts, including Elon Musk and Mustafa Suleyman, an artificial intelligence expert at Google parent company Alphabet, warned about the dangers of AI. “Lethal autonomous weapons threaten to become the third revolution in warfare,” the letter stated.
“Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at time scales faster than humans can comprehend… These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.”
The aim of the pledge is apparently to draw public attention to the issue and rally support behind the tech leaders’ position. Yoshua Bengio, an AI expert at the Montreal Institute for Learning Algorithms, discussed how similar efforts have worked in the past:
“This approach actually worked for landmines, thanks to international treaties and public shaming, even though major countries like the U.S. did not sign the treaty banning landmines,” he said. “American companies have stopped building landmines.”