
New TPU Accelerator Chip from Google Speeds Machine Learning

At its I/O Developers Conference this week, Google announced a new product that had been an open secret for several months. The Tensor Processing Unit (TPU) is named after TensorFlow, the software library Google uses for its machine learning programs. In announcing the chip, Google referred to the TPU as an accelerator, able to speed up a given task, like data analysis or voice translation.

When he introduced the TPU at the I/O conference, Google CEO Sundar Pichai said it provides an order of magnitude better performance per watt than existing chips for machine learning tasks. While it’s unlikely to displace the CPUs and GPUs already in use in the machine learning world, the TPU could speed the machine learning process without using much more energy.

Google has been carefully guarding the details of the TPU project, but it’s been generally understood that the project was in progress. Based on the company’s job postings, it had become evident over the past year that Google was working on a chip of some kind. Now, Google confirms the chip has been under development for about two years. Pichai also revealed at I/O that the TPU chips were used to power the AlphaGo computer that beat Lee Sedol, the world’s “Go” champion, in their widely-publicized March match-up.

For Intensive Applications

The introduction of the TPU (pictured above) comes at a time when the pace of Moore’s Law — which long dictated that the number of transistors in a dense integrated circuit doubles about every two years — is slowing down. Products such as the TPU could help fill that gap, according to Bruce Daley, principal analyst for Boulder, Colo.-based market intelligence firm Tractica.

“The fact that this product uses TensorFlow tells us that it has applications in machine learning and deep learning,” said Daley, whom we reached Thursday. “These are very intensive applications that use a lot of floating-point computation.”
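The kind of floating-point-intensive computation Daley describes can be illustrated with a minimal sketch (not Google’s code, and not TPU-specific): a single fully connected neural-network layer, the sort of matrix arithmetic that dominates deep learning workloads. The layer sizes below are hypothetical, chosen only for illustration.

```python
import numpy as np

def dense_layer(x, weights, bias):
    """Forward pass of one fully connected layer: y = relu(x @ W + b)."""
    return np.maximum(x @ weights + bias, 0.0)  # ReLU activation

# Hypothetical sizes: a batch of 64 inputs, 1024 features in, 512 out.
rng = np.random.default_rng(0)
x = rng.standard_normal((64, 1024))
w = rng.standard_normal((1024, 512))
b = np.zeros(512)

y = dense_layer(x, w, b)

# The matrix multiply alone costs roughly 2 * 64 * 1024 * 512
# floating-point operations -- about 67 million for this one small layer.
flops = 2 * 64 * 1024 * 512
print(y.shape, flops)
```

A production network stacks hundreds of such layers and runs them over millions of examples, which is why per-watt throughput on exactly this kind of arithmetic is the figure of merit for an accelerator like the TPU.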

Machine learning is used in a variety of applications including voice recognition, translation software, and data analytics. The computational power such applications demand requires a chip as robust as the TPU in order to advance the technology. Google says its TPU provides gains equivalent to moving Moore’s Law forward three generations, or about seven years.

Already in Use

The TPU is now in use across Google’s cloud. Machine learning is being used by more than 100 Google development teams for their work on Street View, Inbox Smart Reply, the RankBrain search result sorting system, and other applications. This advance comes at a time when more and more applications are built in the cloud, leaving fewer concerns about hardware configuration and maintenance.

At the I/O conference, Urs Hölzle, Google’s senior vice president for technical infrastructure, said the TPU can augment machine learning processes but that there are enough functions that require CPUs and GPUs that the new product isn’t likely to replace them.

In fact, according to Tractica’s Daley, it probably won’t be that long until the TPU itself is replaced.

“Hardware like this is usually upgraded on a pace of every two or three months as the algorithms used to develop them are refined,” he said.

Image credit: Google Cloud Platform Blog.
