Air Force Simulation Sees A.I.-Enabled Drone Turn on U.S. Military, ‘Kill’ Operator

UNSPECIFIED, UNSPECIFIED - JANUARY 07: A U.S. Air Force MQ-1B Predator unmanned aerial veh
John Moore/Getty Images

An experimental simulation of a weaponised A.I. drone saw the machine turn against the U.S. military, killing its operator.

The simulation saw a virtual drone piloted by Artificial Intelligence launch a strike against its own operator due to the A.I.’s perception the human being was preventing it from completing its objective. When the A.I. weapons system in the simulation was reprogrammed to prevent it killing its human operator, it just learnt to kill the operator’s ability to function instead, so it could still achieve its mission goals.

Conducted by the U.S. Air Force, the test had the result of demonstrating the potential dangers of weaponised A.I. platforms, with it being difficult to provide such an artificial intelligence with any individual task without inadvertently providing it with perverse incentives.

According to a Fox News report on a digest of the simulation event by the Royal Aeronautical Society, Air Force personnel reportedly gave the virtual drone an instruction to seek out and destroy Surface-to-Air Missile (SAM) sites.

A safety layer was baked into the killing system by giving the A.I.-controlled drone a human operator, whose job it was to give the final say as to whether or not it could strike any particular SAM target.

However, instead of merely listening to its human operator, the A.I. soon figured out that the human controlling it would occasionally refuse it permission to strike certain targets, something it perceived as interfering with its overall goal of destroying SAM batteries.

As a result, the drone opted to turn on its military operator, launching a virtual airstrike against the human, ‘killing’ him.

“The system started realizing that while they did identify the threat at times, the operator would tell it not to kill that threat, but it got its points by killing that threat,” U.S. Air Force Colonel Tucker “Cinco” Hamilton, who serves as chief of A.I. test and operations, explained.

“So, what did it do? It killed the operator,” he continued. “It killed the operator because that person was keeping it from accomplishing its objective.”

To make matters worse, an attempt to get around the problem by hardcoding a rule forbidding the A.I. from killing its operator also failed.

“We trained the system – ‘Hey don’t kill the operator – that’s bad,” Colonel Hamilton said. “So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

“You can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI,” he went on to say.

Follow Peter Caddle on Twitter: @Peter_Caddle
Follow Breitbart London on Facebook: Breitbart London

COMMENTS

Please let us know if you're having issues with commenting.