Long-Term Memory Added to DeepMind A.I. That Allows It to Learn from Previous Experiences


Google subsidiary DeepMind Technologies Limited, in partnership with researchers from Imperial College London, has developed a functional long-term memory for the DeepMind AI.

The London-based company has taken a major step beyond the already impressive milestones it has achieved in machine learning thus far. Previously, DeepMind could learn to play Atari games better than any human, but it couldn’t apply that knowledge elsewhere. An entirely separate artificial intelligence was required for each game. “AlphaGo” might be capable of beating a professional human at one of the most ancient and complex strategy games ever created, but the same AI couldn’t play Space Invaders to save its artificial life.

But DeepMind Technologies Limited and Imperial College London have broken through, creating an algorithm that effectively gives the DeepMind AI a long-term memory to store the knowledge it gains. DeepMind Research Scientist James Kirkpatrick told WIRED that the team “had a system that could learn to play any game, but it could only learn to play one game. Here we are demonstrating a system that can learn to play several games one after the other.”

It may not sound like much, but there is a massive difference between a computer that can learn something and one that can retain that knowledge and apply it elsewhere. Instead of the “catastrophic forgetting” that erased everything the AI knew whenever it moved from one task to another, retention means the AI is capable of using what it has already learned to learn something else.
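To see why forgetting is the default, consider a toy illustration of catastrophic forgetting. Everything in it is hypothetical, a one-parameter example written for this article rather than anything from DeepMind’s code: a model trained on one task, then naively trained on a second, loses the first entirely.

```python
import numpy as np

def sgd_fit(w, xs, ys, lr=0.1, steps=200):
    # Plain gradient descent on squared error for the model y = w * x.
    for _ in range(steps):
        grad = np.mean(2 * (w * xs - ys) * xs)
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
xs = rng.uniform(-1, 1, 100)
task_a = 2.0 * xs    # task A: the rule is y = 2x
task_b = -3.0 * xs   # task B: the rule is y = -3x

w = sgd_fit(0.0, xs, task_a)
print("after task A:", round(w, 3))  # close to 2.0: task A learned

w = sgd_fit(w, xs, task_b)
print("after task B:", round(w, 3))  # close to -3.0: task A forgotten
```

With nothing anchoring the parameter to its old value, training on the second task simply overwrites the first. That is exactly the behavior the new algorithm is designed to prevent.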

Now that they have overcome what Kirkpatrick called a “significant shortcoming” in previous AI technology, they can demonstrate “continual learning” based on synaptic consolidation: in layman’s terms, one of the essential building blocks our own minds use to do the same thing.

The “elastic weight consolidation” (EWC) algorithm works much like our own mental processes. When the AI learns to do something, the algorithm sifts through that knowledge and protects the most broadly useful parts while new learning happens. By way of example: you may not remember every single math problem you completed in school, but you know how to use numbers to calculate things, even when they’re not laid out in a textbook.
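For readers who want to see the mechanism, here is a minimal sketch of the quadratic penalty at the heart of EWC, following the formulation the team published. The names here (ewc_penalty, fisher, lam) are illustrative stand-ins, not DeepMind’s actual code.

```python
import numpy as np

def ewc_penalty(theta, theta_star, fisher, lam=1.0):
    """Quadratic penalty that anchors parameters important to an old task.

    theta      -- current network parameters (flat array)
    theta_star -- parameters as they were after learning the old task
    fisher     -- per-parameter importance estimate (diagonal Fisher
                  information measured on the old task)
    lam        -- how strongly old knowledge is protected
    """
    return 0.5 * lam * np.sum(fisher * (theta - theta_star) ** 2)

# Training on a new task then minimizes:
#   new_task_loss(theta) + ewc_penalty(theta, theta_star, fisher)
# Parameters the old task barely used (small fisher values) stay free
# to change, while important ones are held "elastically" near their
# old values, which is where the algorithm gets its name.
```

In effect, each weight gets its own spring whose stiffness reflects how much the old task relied on it, so new learning bends the network only where it can afford to bend.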

The one thing this development lacks so far is efficiency. A neural network devoted to a single task, say beating Atari’s Pitfall, is still much better at its work than one that uses its newfound capacity for memory to learn the same game.

Kirkpatrick says that while they have successfully demonstrated “sequential learning,” they still haven’t proven that it improves learning efficiency. He says that DeepMind’s next step will be to “try and leverage sequential learning to try and improve on real-world learning.”

Follow Nate Church @Get2Church on Twitter for the latest news in gaming and technology, and snarky opinions on both.
