Google’s New A.I. Chip Saved Them from Building Data Centers


With the introduction of its new Tensor Processing Unit, Google has avoided having to double the number of data centers it operates.

Wired reports that when Google developed its new form of voice recognition for Android phones six years ago, the company discovered that its existing network was not large enough to handle the volume of data requests the new feature would generate. Engineers calculated that if every Android handset in the world used the voice recognition software for just three minutes a day, Google would have to double the number of data centers it operated.
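The engineers' calculation can be sketched as a back-of-envelope estimate. Every number below is a hypothetical placeholder chosen for illustration, not a figure from Google or Wired; only the three-minutes-a-day usage assumption comes from the article.

```python
# Illustrative back-of-envelope only: all constants except MINUTES_PER_DAY
# are assumed values, not figures reported by Google.
ANDROID_HANDSETS = 1_000_000_000       # assumed global handset count
MINUTES_PER_DAY = 3                    # usage figure cited in the article
SECONDS_PER_REQUEST = 5                # assumed length of one voice query
CORE_SECONDS_PER_REQUEST = 10          # assumed CPU cost to serve one query

# Total queries generated per day across all handsets.
requests_per_day = ANDROID_HANDSETS * (MINUTES_PER_DAY * 60) / SECONDS_PER_REQUEST

# Total CPU work per day, converted to cores that must run around the clock.
core_seconds_per_day = requests_per_day * CORE_SECONDS_PER_REQUEST
cores_needed = core_seconds_per_day / 86_400  # seconds in a day

print(f"{requests_per_day:,.0f} requests/day")
print(f"{cores_needed:,.0f} cores running 24/7")
```

Even with modest per-query costs, the handset count multiplies into millions of continuously busy cores, which is the kind of arithmetic that pointed Google toward new hardware rather than new buildings.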

Google had by this point begun developing its deep neural networks, mathematical systems that learn tasks by analyzing massive amounts of data. The technology is now used to streamline voice recognition, image recognition, machine translation, and internet search. Although the method demanded considerable additional computing resources, Google saw error rates in programs using deep neural networks drop by at least 25 percent.

Instead of doubling the number of data centers to meet the new demand, Google developed its own computer chip built specifically for running deep neural networks: the Tensor Processing Unit. Norm Jouppi, one of the 70 engineers who worked on the chip, said, “It makes sense to have a solution there that is much more energy efficient.” Jouppi notes that the TPU outperforms other processors by 30 to 80 times in TOPS/Watt, a measure of tera-operations per second delivered per watt of power consumed.
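The TOPS/Watt figure is just throughput divided by power draw. The numbers below are illustrative placeholders, not published TPU or CPU specifications; they are chosen only so the ratio lands inside the 30-to-80-times range Jouppi describes.

```python
# TOPS/Watt = tera-operations per second divided by sustained power draw.
# The throughput and wattage figures here are hypothetical examples,
# not real TPU or CPU specs.
def tops_per_watt(tera_ops_per_second: float, watts: float) -> float:
    """Energy efficiency: useful work delivered per watt consumed."""
    return tera_ops_per_second / watts

tpu_efficiency = tops_per_watt(90.0, 40.0)  # assumed accelerator figures
cpu_efficiency = tops_per_watt(1.0, 20.0)   # assumed general-purpose CPU

print(f"advantage: {tpu_efficiency / cpu_efficiency:.0f}x")
```

With these made-up inputs the specialized chip comes out 45 times more efficient, showing how a chip that does one job well can beat a general-purpose processor on this metric even without a raw speed advantage.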

The processor was first introduced last May, and the engineers behind it have recently released a report detailing the project and explaining how the chip works. The chip is dedicated solely to executing neural networks, running them on demand, such as when someone uses voice search on a phone. Jouppi explained that this has saved the company a great deal in data processing costs.

Google has now used the TPU for two years, applying it to image recognition, machine translation, and a multitude of other services, including AlphaGo, the machine that taught itself how to win at the game of Go. Without deep neural networks and the TPU, these services would have cost Google heavily in data center construction and maintenance.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship. Follow him on Twitter @LucasNolan_ or email him at lnolan@breitbart.com
