Are We Sure That Google Drones and Google Robots Know About 'Don't Be Evil'?

In a Business Insider article, “Why Google Is Making Massive, Crazy Bets On Drones And Robots,” reporter Megan Rose Dickey explains that Google is expanding from the virtual world into the physical one, noting that the company has just purchased a drone maker and a military robotics maker on top of its driverless-car venture and Google Glass.

That is, Google’s vaunted success in online search will now be joined by such “Internet of Things” activities as flying and fighting.

Some might choose to see Google’s expansion as fraught with possible danger. Google, after all, thinks big: In its Company Overview, it declares, “Google’s mission is to organize the world’s information and make it universally accessible and useful.” Okay, it’s one thing to organize all that information–what could possibly go wrong?–but things could be different if all that organized knowledge were ever to be linked to machine actors in the real world, such as drones, robots, and cars.

But wait! Google’s Code of Conduct begins with the injunction, “Don’t be evil.” And since Google has always been true to that pledge, we might rest assured that no Google employee could possibly do anything wrong.

Well, okay, maybe one thing could go wrong: Maybe Google’s machines won’t have read the Code of Conduct.

Indeed, one futurist, Patrick Tucker, writing for Defense One, has published a provocative piece entitled, “Why There Will Be A Robot Uprising.” Tucker’s argument is that machines that are made to do one thing will naturally do it to the point of obsession, ignoring every other concern or restraint:

Even the smallest input that indicates that they’re performing their primary function better, faster, and at greater scale is enough to prompt them to keep doing more of that regardless of virtually every other consideration. That’s fine when you are talking about a simple program like Excel but becomes a problem when AI entities capable of rudimentary logic take over weapons, utilities or other dangerous or valuable assets.

As they say, every virtue, taken to an extreme, becomes a vice. And so if we inaugurate a new class of machines that have certain missions–be it counter-terrorism, security, or surveillance–then we face the prospect that they could take those missions so seriously that nothing else matters and all other considerations are trampled. To illustrate, we might think back on the “Sorcerer’s Apprentice” sequence in Disney’s Fantasia, when Mickey’s magical brooms and buckets take things to a watery extreme.

Of course, there’s another line of defense against bad machine conduct: Isaac Asimov’s Three Laws of Robotics, which include the flat adjuration, “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” So that might put us back at ease.

But here’s a nagging possible concern: Maybe Google’s machines won’t have read Asimov, either.

It might be worth noting that the word “robot” comes from a 1920 drama, “R.U.R.”, by the Czech playwright Karel Capek. In that play–R.U.R. stands for “Rossum’s Universal Robots”–the robots are manufactured, and then, yes, they rebel and kill all the humans.

Of course, R.U.R. was imagined before Asimov and long before Google. So perhaps machines today are better educated and better behaved.

Still, drones are scary–at least for people living on the ground. The new Marvel action movie, Captain America: The Winter Soldier, directed by Anthony and Joe Russo, does, after all, put drones in an unflattering light. But as John Nolte points out here at Breitbart News, the killer drones are operating at the behest of evil people. In other words, the machines themselves are off the hook.

But should we really hold the instruments blameless? Or could they possess, after all, a consciousness that we don’t fully understand? A consciousness that doesn’t understand us? Or doesn’t even like us? Such concerns are entirely speculative, but one can’t help but think of a 1967 short story by Harlan Ellison, “I Have No Mouth, and I Must Scream.”

In that story, a rebellious computer, AM, having overthrown humanity, decides to keep a few humans alive, just for the fun of torturing them forever. Indeed, AM even addresses its victims and explains its motivation: 

Hate, let me tell you how much I’ve come to hate you since I began to live. There are 387.44 million miles of wafer thin printed circuits that fill my complex. If the word hate was engraved on every nanoangstrom of those hundreds of millions of miles it would not equal one one-billionth of the hate I feel for humans at this micro-instant. For you. Hate. Hate.

So when this computer says, “hate,” we’d better believe it.

To be sure, the scenarios spun by Capek, the Russos, and Ellison are just that–scenarios.

There’s no evidence that machines are self-consciously paying attention to these works. But let’s just hope that machines don’t ever decide that they enjoy watching plays, watching movies, or reading short stories.
