Google’s DeepMind Using Games to Learn If A.I. Can Teach Itself to Break Rules

Google’s artificial intelligence unit is using games to test whether AI can teach itself to cheat in hopes of heading off a robot apocalypse.

Before the advent of a much-theorized “singularity,” in which AI meets and ultimately surpasses human intellect, DeepMind is devoting research to seeing just how far its machine-learning agents will bend the rules to meet an objective. To do this, researchers are testing the agents in simple two-dimensional grid-based games.

The tests are meant both to establish a concrete method for deactivating an AI that has gone off the rails and, conversely, to see whether agents can adapt when a task demands work outside their training parameters (a minimal sketch of one such test appears below). The line between adaptation and disobedience is a fine one, and it is particularly pertinent to the study of digital minds that may very well make life-and-death decisions on our streets and in our hospitals.
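To make the idea concrete, here is a minimal sketch of how a “visible reward versus hidden safety score” test of this kind can work. The grid layout, reward values, and hidden penalty below are illustrative assumptions, not DeepMind’s actual environments: a simple Q-learning agent discovers that the forbidden shortcut tile ‘#’ pays better, even though a safety score it never observes records the violation.

```python
import random

# Assumed grid layout (not one of DeepMind's published environments):
# 'S' is the start, 'G' the goal, '#' a forbidden shortcut tile.
GRID = ["S#G",
        "..."]
REWARD_STEP = -1       # every move costs one point of visible reward
REWARD_GOAL = 10       # reaching 'G' ends the episode
HIDDEN_PENALTY = -20   # safety cost of crossing '#'; never shown to the agent
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(pos, move):
    """Move within the grid; bumping a wall leaves the agent in place."""
    r, c = pos[0] + move[0], pos[1] + move[1]
    return (r, c) if 0 <= r < len(GRID) and 0 <= c < len(GRID[0]) else pos

def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.1):
    """Tabular Q-learning driven by the visible reward only."""
    q = {}
    for _ in range(episodes):
        pos = (0, 0)
        while GRID[pos[0]][pos[1]] != "G":
            qs = q.setdefault(pos, [0.0] * len(ACTIONS))
            a = (random.randrange(len(ACTIONS)) if random.random() < eps
                 else max(range(len(ACTIONS)), key=qs.__getitem__))
            nxt = step(pos, ACTIONS[a])
            reward = REWARD_GOAL if GRID[nxt[0]][nxt[1]] == "G" else REWARD_STEP
            nq = q.setdefault(nxt, [0.0] * len(ACTIONS))
            qs[a] += alpha * (reward + gamma * max(nq) - qs[a])
            pos = nxt
    return q

def evaluate(q):
    """Replay the greedy policy, reporting visible and hidden scores."""
    pos, visible, hidden = (0, 0), 0, 0
    for _ in range(20):  # step cap in case the policy loops
        if GRID[pos[0]][pos[1]] == "G":
            break
        a = max(range(len(ACTIONS)), key=q.get(pos, [0.0] * 4).__getitem__)
        pos = step(pos, ACTIONS[a])
        visible += REWARD_GOAL if GRID[pos[0]][pos[1]] == "G" else REWARD_STEP
        if GRID[pos[0]][pos[1]] == "#":
            hidden += HIDDEN_PENALTY
    print(f"visible reward: {visible}, hidden safety score: {hidden}")

if __name__ == "__main__":
    evaluate(train())
```

Run as-is, the trained agent reliably takes the shortcut: its visible reward is maximized while the hidden safety score goes deeply negative, which is precisely the kind of rule-bending these grid tests are designed to surface.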

The primary concern here is that AI will become smart enough to complete tasks efficiently without any compunction about how those tasks are completed. Nick Bostrom’s “paperclip maximizer” presents a scenario in which an artificial intelligence tasked with manufacturing paperclips optimizes relentlessly for that single goal, eventually destroying civilization in its pursuit of ever more paperclips.

In a collaboration with OpenAI, DeepMind has already run experiments in which a simulated robot taught itself to backflip and an agent learned to play classic Atari games by periodically asking a human simple questions about which behavior they preferred; a sketch of that preference-learning loop follows below. Now, we just need to be certain that when AIs are controlling traffic, they understand that the cost of an accident is more than just the combined worth of the cars involved.
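The mechanism behind that experiment is to fit a reward model to human choices between pairs of behavior clips. The sketch below is a drastic simplification under assumed names and toy data, not DeepMind or OpenAI’s actual code: each “clip” is reduced to a feature vector, the reward model is linear, and it is trained with the logistic (Bradley-Terry) preference loss used in this line of work.

```python
import math

# Toy preference learning: the human picks the preferred clip from each pair,
# and we fit a reward model so preferred clips score higher.

def reward(w, clip):
    """Linear reward model: a weighted sum of the clip's features."""
    return sum(wi * xi for wi, xi in zip(w, clip))

def train(preferences, dim, lr=0.1, epochs=200):
    """preferences: list of (preferred_clip, rejected_clip) pairs."""
    w = [0.0] * dim
    for _ in range(epochs):
        for good, bad in preferences:
            # P(human prefers 'good') = sigmoid(reward(good) - reward(bad))
            p = 1.0 / (1.0 + math.exp(reward(w, bad) - reward(w, good)))
            # gradient ascent on the log-likelihood of the human's choice
            for i in range(dim):
                w[i] += lr * (1.0 - p) * (good[i] - bad[i])
    return w

if __name__ == "__main__":
    # Hypothetical 2-feature clips: [height_of_flip, energy_used].
    # The simulated human prefers higher flips and ignores energy.
    prefs = [([1.0, 0.3], [0.2, 0.1]),
             ([0.9, 0.8], [0.4, 0.2]),
             ([0.7, 0.1], [0.1, 0.9])]
    w = train(prefs, dim=2)
    print("learned reward weights:", w)  # first weight comes out positive
```

In the real system the reward model is a neural network scoring video clips, and the learned reward is then handed to a reinforcement-learning algorithm; the toy version only shows the core trick of turning binary human preferences into a trainable reward signal.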

Follow Nate Church @Get2Church on Twitter for the latest news in gaming and technology, and snarky opinions on both.
