Mattis: Artificial Intelligence Could Change ‘Fundamental Nature of War’


On his way home after the Munich Security Conference last week, Secretary of Defense Jim Mattis speculated that artificial intelligence could change the “fundamental nature of war.” The security conference gave the impression that no one is truly prepared for that change.

“I’m certainly questioning my original premise that the fundamental nature of war will not change. You’ve got to question that now. I just don’t have the answers yet,” Mattis replied when asked about the impact of A.I.

The famously well-read defense secretary took pains to distinguish between the character of warfare and its fundamental nature. He quoted Carl von Clausewitz to the effect that warfare is a “chameleon” that “changes to adapt to its time, to the technology, to the terrain,” but noted that was a reference to the character of warfare, not its deeper nature.

We went from cavalry charges to atomic bombs over the span of four decades in the last century, with social ramifications as profound as the implications for military strategy. Mattis, who consults with a team of Silicon Valley experts on cyber issues, would not lightly propose that the 21st Century could bring equally massive changes.

Mattis clearly envisions changes that will affect politics and society in addition to military strategy, as the 20th Century innovations did. “If we ever get to the point where it’s completely on automatic pilot and we’re all spectators, then it’s no longer serving a political purpose, and conflict is a social problem that needs social solutions,” he explained.

That is a provocative idea, and it is already getting a field test, as the advent of smart weapons makes it possible for the U.S. military to provide effective battlefield support for allied forces with minimal risk to the lives of American soldiers. Drones have enabled a form of precise surgical military intervention that was never possible before.

The next step, as Mattis hinted, could be entirely autonomous non-human military forces, including A.I.-controlled ground units that can take and hold territory, something drone aircraft could never do on their own.

The horrible human cost of total war between major powers largely eliminated such conflicts from the second half of the 20th Century, even as the lethality of mass-produced weapons and the spread of militarized totalitarian ideologies made proxy wars as bloody as the great power conflicts of previous eras. What happens when there is virtually no human cost to deploying automated ground forces that can capture territory and reshape battlefields?

Military planners are wrestling with the idea of A.I.-enhanced battlespace and cyberspace warfare that can change a conflict so rapidly it becomes impossible for human commanders to keep up. The game of war will henceforth be played with much bigger dice.

The Munich Security Conference was a rather somber affair this year, pervaded by the sense that no one is prepared to deal with the advent of militarized artificial intelligence. Defense News compared the atmosphere to that of conferences a decade ago that correctly concluded NATO was not ready to deal with militarized hacking and cyber warfare.

Estonian President Kersti Kaljulaid said that artificial intelligence needs the same kind of international standards as cybersecurity and nuclear non-proliferation, including rules about how A.I. can be used in combat, standards requiring human decision-makers at the key points in every system, and universal guidelines for “kill switches” that can shut down A.I. systems immediately.

Former NATO Secretary General Anders Fogh Rasmussen agreed, favoring legislation that would “prevent production and use of these kinds of autonomous lethal weapons,” with a total ban on systems that leave human commanders “out of the loop” after an A.I. system is ordered to conduct an attack.

Imagine tomorrow’s version of Syrian dictator Bashar Assad putting his arsenal in the hands of an A.I. and giving it vague instructions to suppress an insurrection at all costs. Now imagine the rebels have A.I. weapons, too, and they are instructed to overthrow the dictatorship at all costs.

It would be a grave mistake to assume the best A.I. weapons will be under U.S. or NATO control. A report released in November by a data analytics firm warned that the United States is in serious danger of falling behind Russia and China in A.I. weapons development, and called for a dedicated national strategy to aggressively develop such weapons rather than waiting for them to emerge spontaneously or be developed first by hostile powers.

That will require us to go far beyond narrow applications that enhance the fighting ability of human soldiers, as with the most advanced jet fighters. We must develop what we would forbid, and be ready to deal with the threat that inevitably arises when aggressor nations fail to heed our warnings. Secretary Mattis sees that changing the nature of warfare will inevitably change the nature of humanity, as was the case in the 20th Century, but we should also have learned by now that human nature doesn’t change as much as our machines do.
