Tensor Tiling
What is Tensor Tiling?
In an electric car, every joule counts, and one of the largest contributors to power consumption is moving data to and from external memory. That's why designers of the AI chips that will enable the advanced driver assistance systems (ADAS) and autonomous driving platforms of the future care about how much memory bandwidth their systems consume.
In these systems, the processing for the various ADAS and autonomous functions we will all come to rely on, such as speech recognition and multi-camera/sensor object detection and tracking, will require high-performance neural network inferencing, which places huge pressure on memory bandwidth inside a system-on-chip (SoC). As that bandwidth is precious, every bit saved reduces power consumption and helps extend the range of the car.
This is why Imagination Tensor Tiling technology is critical. Tensors are large, multi-dimensional arrays of data elements that form the key data structures used in neural networks. Traditionally, they must be moved to and from main memory repeatedly, which consumes significant amounts of bandwidth and power.
Imagination Tensor Tiling technology intelligently “tiles” tensors into groups so they can be processed far more efficiently. In combination with the on-chip memory in the IMG Series4 neural network accelerator (NNA), this delivers a significant reduction in power consumption, along with silicon area savings that reduce cost.
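To make the general principle concrete, here is a minimal NumPy sketch, not Imagination's actual scheme: when a tensor is split into tiles, the intermediate result of one layer can stay in fast local memory while the next layer consumes it, instead of round-tripping through external DRAM. The layer function, tile size, and traffic counter below are illustrative assumptions.

```python
# Illustrative sketch only: a NumPy model of why tiling cuts external-memory
# traffic. The layer, tile size, and byte counter are hypothetical; this is
# not Imagination's implementation.
import numpy as np

def layer(x):
    """Stand-in for a network layer (a pointwise ReLU for simplicity)."""
    return np.maximum(x, 0.0)

H, W, C = 512, 512, 16                    # feature map: ~16 MB of float32 data
x = np.random.rand(H, W, C).astype(np.float32)

# --- Untiled: the intermediate tensor round-trips through "external memory" ---
dram_bytes = 0
t = layer(x)                              # layer 1 over the whole tensor
dram_bytes += t.nbytes                    # write the intermediate out...
dram_bytes += t.nbytes                    # ...then read it back in
y_untiled = layer(t)                      # layer 2

# --- Tiled: each tile's intermediate stays in (simulated) on-chip memory ---
TILE = 64                                 # tile height sized to fit on-chip SRAM
y_tiled = np.empty_like(x)
for r in range(0, H, TILE):
    tile = x[r:r + TILE]                  # fetch one tile of the input
    tmp = layer(tile)                     # layer 1: intermediate held "on chip"
    y_tiled[r:r + TILE] = layer(tmp)      # layer 2: no DRAM round trip

assert np.allclose(y_untiled, y_tiled)    # same result, far less traffic
print(f"intermediate DRAM traffic avoided: {dram_bytes / 1e6:.0f} MB")
```

A pointwise stand-in makes the tiles fully independent; with real convolutions, neighbouring tiles overlap slightly at their borders (halo regions), which a hardware scheduler has to account for when choosing tile sizes.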