AI & Compute

Efficient and programmable processors for universal intelligence

We deliver scalable and power-efficient AI performance across a wide range of devices, from edge computing to data centres.

Adding Flexibility to Intelligent Systems

Whether it’s in your hand, behind your TV, or in factory robots, Imagination’s GPU IP delivers performance and efficiency. As AI becomes essential everywhere, system designers leverage Imagination’s ultra-parallel GPU architecture to accelerate AI models on power-constrained devices.

Imagination GPUs are highly flexible, running AI and graphics workloads simultaneously. Their programmability is enhanced by a developer-friendly compute software stack, helping devices stay ahead in the rapidly evolving AI landscape.

Discover our GPUs for AI

Our latest AI updates

Super-fast compute scheduling

Imagination DXTP delivers advanced graphics and compute acceleration to power-constrained devices and boosts the speed of work group item setup by 16x compared to its predecessor, IMG DXT. This helps avoid setup bottlenecks on compute tasks.
Learn more about DXTP GPU

OpenCL Compute Libraries

Imagination’s latest GPUs ship with a set of OpenCL compute libraries (imgBLAS, imgNN, imgFFT) that help software developers achieve up to 80% GPU utilisation for common compute workloads. They are complemented by reference toolkits for porting code to Imagination hardware via oneAPI or TVM TensorGraph.
Learn more about Compute Software
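
For context, the sketch below shows the standard OpenCL host boilerplate involved in dispatching even a simple hand-written SAXPY kernel; vendor compute libraries exist precisely to spare developers from writing and tuning this kind of code themselves. This is generic OpenCL rather than the imgBLAS / imgNN / imgFFT interfaces, the names in it are illustrative, and error handling is trimmed for brevity.

```c
/* Generic OpenCL host-side dispatch of a hand-written SAXPY kernel.
 * Assumes an OpenCL 1.2+ driver and a GPU device are present. */
#include <CL/cl.h>
#include <stdio.h>

static const char *kSrc =
    "__kernel void saxpy(const float a,                \n"
    "                    __global const float *x,      \n"
    "                    __global float *y) {          \n"
    "    size_t i = get_global_id(0);                  \n"
    "    y[i] = a * x[i] + y[i];                       \n"
    "}                                                 \n";

int main(void) {
    enum { N = 1 << 20 };
    static float x[N], y[N];
    for (int i = 0; i < N; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    cl_platform_id plat; cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    /* Copy inputs into device-visible buffers. */
    cl_mem bx = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               sizeof x, x, NULL);
    cl_mem by = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                               sizeof y, y, NULL);

    /* Build the kernel from source at runtime. */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSrc, NULL, NULL);
    clBuildProgram(prog, 1, &dev, "", NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "saxpy", NULL);

    float a = 0.5f;
    clSetKernelArg(k, 0, sizeof a, &a);
    clSetKernelArg(k, 1, sizeof bx, &bx);
    clSetKernelArg(k, 2, sizeof by, &by);

    /* One work-item per element; the runtime picks the work-group size. */
    size_t global = N;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, by, CL_TRUE, 0, sizeof y, y, 0, NULL, NULL);

    printf("y[0] = %f (expected 2.5)\n", y[0]);
    return 0;
}
```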

Expanded number format support

Imagination GPUs support industry-standard number formats – FP32, FP16, INT8 and DOT8 – in hardware and comply with the relevant Vulkan and OpenCL extensions. Data compression solutions can be used to support the efficient movement of data on the chip.
Explore our software solutions
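
To make the formats concrete, here is a minimal OpenCL C kernel sketch (generic, not Imagination-specific) that combines FP16 storage with an INT8 dot product. It assumes the device exposes the standard cl_khr_fp16 extension; whether the char4 multiply-accumulate is lowered to a DOT8-style instruction is down to the compiler and hardware.

```c
/* OpenCL C sketch: INT8 dot product with an FP16 requantisation scale.
 * Requires the cl_khr_fp16 extension for half-precision arithmetic. */
#pragma OPENCL EXTENSION cl_khr_fp16 : enable

__kernel void int8_dot_fp16_scale(__global const char4 *a,     /* INT8 weights            */
                                  __global const char4 *b,     /* INT8 activations        */
                                  __global const half  *scale, /* per-element FP16 scale  */
                                  __global half        *out)
{
    size_t i = get_global_id(0);

    /* Widen to 32-bit before multiplying so the products cannot overflow. */
    int4 prod = convert_int4(a[i]) * convert_int4(b[i]);
    int acc = prod.x + prod.y + prod.z + prod.w;

    /* Requantise: FP16 keeps storage and bandwidth low on the way out. */
    out[i] = (half)acc * scale[i];
}
```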

Lots of local memory

AI on power-constrained devices is primarily a data management problem. Imagination has been solving the problem of large-scale data movement and storage for thirty years. Our latest GPU solutions come with 512KB of register space to enable fast, efficient compute processing at the edge.
Discover DXTP GPU
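
The sketch below illustrates the principle with a generic OpenCL work-group reduction: each work-group stages its slice of the input in on-chip __local memory and finishes the sum there, so external memory is touched only once per element and once per group result. This is standard OpenCL rather than Imagination-specific code, and register and local memory capacities vary by GPU configuration.

```c
/* OpenCL C sketch: per-work-group sum computed entirely in on-chip memory.
 * The host sizes the scratch buffer with
 *   clSetKernelArg(kernel, 2, work_group_size * sizeof(float), NULL);
 * and the work-group size is assumed to be a power of two. */
__kernel void group_sum(__global const float *in,
                        __global float       *partial,  /* one result per work-group */
                        __local  float       *scratch)  /* on-chip staging area      */
{
    size_t gid = get_global_id(0);
    size_t lid = get_local_id(0);

    scratch[lid] = in[gid];               /* single read from external memory */
    barrier(CLK_LOCAL_MEM_FENCE);

    /* Tree reduction in local memory: no further external traffic. */
    for (size_t stride = get_local_size(0) / 2; stride > 0; stride /= 2) {
        if (lid < stride)
            scratch[lid] += scratch[lid + stride];
        barrier(CLK_LOCAL_MEM_FENCE);
    }

    if (lid == 0)
        partial[get_group_id(0)] = scratch[0];  /* single write per work-group */
}
```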

Why Imagination for AI?

Flexible graphics & compute processing

As the most advanced parallel processing architecture for power-constrained devices, Imagination GPUs can be deployed to accelerate either graphics or AI tasks – or both at the same time!

Graphics Processing with Imagination

Efficient by design

The PowerVR architecture is the foundation of all Imagination GPUs and is the gold standard of power-efficient graphics and compute processing for edge devices, from wearables to laptops.

Its tile-based approach to computing keeps as much data local to the GPU as possible and is just as applicable to compute workloads as to graphics.

Our PowerVR architecture

Masters at multitasking

Imagination GPUs process different task types asynchronously – rendering a smartphone’s UI while accelerating an LLM, for example – which makes them a very flexible silicon investment.

All Imagination GPUs come with hardware-based virtualisation for fully secure, low-overhead GPU multitasking.

Read more on our Virtualisation Technology
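
At the API level, this kind of multitasking can be exercised with nothing more exotic than two OpenCL command queues sharing one context, as in the rough host-side sketch below. The kernels ui_kernel and llm_kernel are hypothetical and assumed to have been built and had their arguments set elsewhere; whether the two streams of work genuinely overlap on the GPU is determined by the driver and hardware scheduler.

```c
/* Host-side sketch: submit a latency-sensitive job and a long-running AI job
 * on separate OpenCL command queues so neither serialises behind the other. */
#include <CL/cl.h>

void submit_concurrent(cl_context ctx, cl_device_id dev,
                       cl_kernel ui_kernel, cl_kernel llm_kernel,
                       size_t ui_items, size_t llm_items)
{
    cl_command_queue q_ui = clCreateCommandQueue(ctx, dev, 0, NULL);
    cl_command_queue q_ai = clCreateCommandQueue(ctx, dev, 0, NULL);

    /* Independent submissions: the driver is free to interleave or overlap them. */
    clEnqueueNDRangeKernel(q_ui, ui_kernel, 1, NULL, &ui_items, NULL, 0, NULL, NULL);
    clEnqueueNDRangeKernel(q_ai, llm_kernel, 1, NULL, &llm_items, NULL, 0, NULL, NULL);

    clFlush(q_ui);
    clFlush(q_ai);

    /* Wait for each stream of work independently. */
    clFinish(q_ui);
    clFinish(q_ai);

    clReleaseCommandQueue(q_ui);
    clReleaseCommandQueue(q_ai);
}
```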

Key Markets for AI


Automotive

The journey from driver assistance to automated driving is centred on the evolution of AI capabilities, on both the hardware and the software side. From driver monitoring to path management, today’s cars are growing in intelligence, supported by flexible, scalable and programmable accelerators from Imagination.

Learn more

Desktop

Generative AI is transforming business productivity and creative processes. A new era of AI laptops is emerging with new levels of compute power – but energy efficiency remains an essential feature. Imagination’s high-performance GPUs with DirectX support are perfect for delivering AI on the move.

Learn more

Mobile

Whether it’s removing an unwanted person from a selfie, adjusting the resolution of a favourite picture or delivering a high-quality voice-based user interface, AI on mobile is here to stay. It requires exceptionally capable processors that can handle complex AI algorithms within a limited power budget. Imagination’s energy-efficient processors are synonymous with smartphones, and our technology can help OEMs find differentiation through advanced AI features.

Learn more

Consumer

On popular consumer devices, from DTVs to smart home hubs, edge-based AI workloads such as gesture recognition and natural language processing are being introduced to help devices stand out from the competition. But this hardware market is cost-sensitive, and new features must be delivered within a constrained silicon area. Imagination’s CPU and GPU IP for consumer devices packs a huge amount of performance and AI capability into a small package.

Learn more

DXTP GPU IP boosts performance while extending battery life on power-constrained devices

Read the full press release

“Zelos is integrating the advanced, power-efficient compute of Imagination DXTP into our upcoming chip. Imagination’s GPUs combine the performance of ultra-parallel processing with the flexibility that comes from a highly programmable architecture, making them the ideal platform for accelerating our AI models.” 

Cheng Chen

Technical Director

Frequently asked questions

What are AI accelerator chips?

AI accelerator chips come under many names, such as Neural Network Accelerators (NNAs), Neural Processing Units (NPUs) and Machine Learning Engines. They are specialised processors designed to handle the complex computations required for artificial intelligence (AI) applications. Some examples of products that use AI accelerator chips include:

  • Smartphones: Many high-end smartphones, such as the iPhone 12 and Samsung Galaxy S21, use AI accelerator chips to power features such as facial recognition, voice recognition, and augmented reality.
  • Smart home devices: Smart home devices such as Amazon Echo and Google Nest Mini 2nd gen use AI accelerator chips to process voice commands and provide intelligent responses.
  • Self-driving cars: Autonomous vehicles use AI accelerator chips to process sensor data and make real-time decisions based on the surrounding environment. Read more about AI in self-driving cars.

What are the benefits of AI processors over traditional processors?

AI processors offer several benefits over traditional processors, including:

  • Faster performance: AI processors are designed to handle the complex computations required for AI workloads, such as deep learning and machine learning, much faster than traditional processors. This allows for more efficient processing of large datasets and faster training of AI models.
  • Energy efficiency: AI processors are optimised for processing large amounts of data in parallel, which can be done more energy-efficiently than on traditional processors. This means that AI workloads can be processed more quickly and with less energy consumption, helping companies achieve net zero.
  • Improved accuracy: AI processors are designed to handle the specific computations required for AI workloads, which can lead to improved accuracy in AI models. This is especially important in applications such as image recognition or natural language processing, where accuracy is critical.
  • Scalability: AI processors can be scaled more easily than traditional processors, which allows for faster processing of larger datasets and more complex AI models. This makes it possible to train and deploy AI models more quickly and efficiently.
  • Specialised design: AI processors are designed specifically for AI workloads, which means they can perform computations that would be difficult or impossible for traditional processors. This opens up new possibilities for AI applications, such as real-time object detection or speech recognition.

Overall, the benefits of AI processors make them essential for many AI applications, from self-driving cars to voice assistants to medical diagnosis tools.