Elon Musk vs. the Google Terminators: The Perils of Artificial Intelligence

AP Photo/Ringo H.W. Chiu

Tech magnate Elon Musk is part of a group of science and technology celebrities—which includes Microsoft’s Bill Gates and physicist Stephen Hawking—who believe the impending development of artificial intelligence poses a threat to human society and perhaps even human survival.

It should be noted that these A.I. doomsayers have a sense of humor about their arguments and how outlandish they sound to the layman, especially when pop culture is filled with killer-robot fantasies. (There are at least three of them in theaters this summer, in fact: a new Terminator film, Avengers: Age of Ultron, and the far more nuanced Ex Machina.) It can therefore be hard to tell when Musk is joking or exaggerating for dramatic effect.

With that in mind, Business Insider reports an oddly specific A.I. doomsday warning Musk delivered in a new book, Elon Musk: Tesla, SpaceX, and the Quest for a Fantastic Future. Musk told author Ashlee Vance that he was staying up at night worried that Google CEO Larry Page would unleash an army of self-aware robots that could destroy mankind. Not intentionally, mind you—in fact, Musk said that Page’s good intentions and “nice-guy nature” would lead him to “produce something evil by accident.” Musk imagined a nightmare scenario similar to what happens in Avengers: Age of Ultron, with Larry Page subbing in for Tony Stark.

As Business Insider recalls, Musk has previously described artificial intelligence as the “biggest existential threat to humans” and compared it to “summoning the demon” that will destroy us. He doesn’t just talk about it, though. Earlier this year, he donated $10 million to the Future of Life Institute (FLI) to fund a research program dedicated to ensuring artificial intelligence technology is not turned to destructive ends.

Wired noted that FLI grants will be awarded not just to computer science researchers, but also to support work in “economics, law, ethics, and policy.” This agenda outlines what might be called the “soft” fear of artificial intelligence: we’ll create a living computer organism that isn’t necessarily hostile, but will immediately acquire legal rights, wreaking havoc in both our economy and society.

Let’s clarify the terms: “artificial intelligence” is a broad concept that can be stretched to cover some of the learning, self-improving knowledge systems that exist today. When futurists express concern about A.I., what they’re talking about is self-aware artificial intelligence—a computer system that achieves true sentience and becomes a living organism, a person.

Naturally, there is a great deal of debate over what such an organism would be like, and what sort of tests an electronic mind would have to pass in order to be deemed self-aware. If a computer system passed all those tests, we would immediately be confronted with a panoply of legal and ethical challenges. Would shutting down a self-aware computer constitute murder? Would we be legally permitted to restrict it from accessing knowledge bases and the Internet… or would that be considered cruel and unusual punishment, the equivalent of condemning the A.I. to solitary confinement without benefit of a trial?

Would it be considered a citizen of one or more countries, and if so, how would its citizenship be determined? Would we have a legal right to deny an A.I.’s efforts to replicate itself and have “children”? How much work could be demanded of it, and how would we ensure that the terms of its practical employment are not tantamount to slavery? (Director David Fincher once remarked that he wanted to explore the way even the good guys in the Star Wars universe treat clearly self-aware robots as if they were slaves.)

The question of A.I. reproduction is what concerns Musk the most, according to his public statements on the matter. He once described his nightmare scenario as “rapid recursive self-improvement in a non-algorithmic way.” In other words, a self-aware computer system might begin improving itself and replicating itself, growing smarter and extending its control into more of our electronic infrastructure, without human operators being aware of what it was doing until it was too late to stop it.

That’s the “hard” version of A.I. doomsday prophecy: a machine intelligence that spreads like a virus and thinks like a genius, transcending the mental capacity of its creators. This is a far more plausible threat now than it was 30 years ago, before we learned how dependent humans would become on our global computer network. If anything, for all its super-hero histrionics, Age of Ultron sorely underestimates the kind of damage a rogue A.I. could do to our plugged-in society, very rapidly, perhaps not even with malice aforethought. We’ll be lucky if a hostile machine super-intelligence settles for building a thousand robot bodies and challenging humanity to a fist fight.

Musk, like other heralds of A.I. doom, warns that a super-intelligent machine could easily kill us with kindness, interpreting even the most benevolent mission statement to unleash catastrophe. He once gave the example of an A.I. tasked with eliminating email spam that “determines the best way of getting rid of spam is getting rid of humans.” It’s possible to imagine all sorts of wishes that an electronic genie might interpret in unexpected ways, resulting in human suffering or enslavement, especially if the human creators of such a machine imbue it with dangerous ideological convictions of their own. There are human environmental extremists who talk openly of addressing their grievances by thinning out the human herd, using anything from sterilization to biological weapons.

The somewhat less dramatic disaster scenario for A.I. presumes that it might be absolutely forbidden from taking any action that would harm human beings, but it could still ruin our society and economy by trying to help us. Musk has remarked that A.I. could be a great boon to the human race by freeing us from work that is “drudgery” or “mentally boring,” but that also runs the risk of degrading human capital. We already have people who can’t perform simple math, such as making change at a checkout counter, because they’ve grown up assuming machines will handle all the math for them. The damage to social interaction skills from the era of Facebook and text messages is also a matter of concern to sociologists. How much worse might that damage become, if everyone had a self-aware computer BFF on standby to handle both their practical and emotional needs?

A.I. could make humans obsolete without ever meaning them harm. That’s the heart of Stephen Hawking’s critique of artificial intelligence—he warns it could “take off on its own and redesign itself at an ever-increasing rate,” while “humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.”

Elon Musk always seems to come back to the worst dystopian nightmare of physical conflict between humans and machines, perhaps reflecting the inner demons of utopian social engineers who suspect that only human sentimentality stands athwart some really great ideas for subjugating individuals, reducing the human population to more “manageable” levels, and then managing the hell out of them. Does everyone with big, collectivist ideas for dragging humanity into a centrally-planned golden age harbor a dark appetite for silencing critics, placing total power in the hands of the enlightened, and conscripting lesser men into the task of building heaven on earth?

That sounds like a good description for the “demon” Musk thinks A.I. research might summon—sharing an intellectual heritage with those who conjured it into existence, but lacking any vestige of compassion or respect for the herd it intends to shepherd, forever.
