AMD is taking on artificial intelligence, deep learning, and autonomous driving, aiming to get its new chips into the smarter tech of tomorrow. The graphics chip maker has launched AMD Radeon Instinct, a combination of hardware and open-source software which it hopes will make its GPU accelerators more broadly adopted in machine intelligence. While it may be lagging behind NVIDIA in tackling the segment, it’s hoping it has two big advantages that will give it the momentum to overtake.
GPU computing isn’t new, with everyone from university researchers to automakers figuring out that the parallel processing graphics chips are capable of makes for a potentially much more potent environment than traditional CPUs for certain workloads. Radeon Instinct, though, aims to make its adoption – and the adoption of AMD’s own silicon, of course – more straightforward. Rather than just being for in-lab number-crunching, it has potential across a wide variety of segments.
That includes everything from autonomous cars and drones, through robotics and smart home technology, to more industry-focused technologies like medical devices, energy, financial services, and security. “Recent advances in machine intelligence algorithms mapped to high-performance GPUs are enabling orders of magnitude acceleration of the processing and understanding of that data, producing insights in near real time,” AMD says of today’s news. “Radeon Instinct is a blueprint for an open software ecosystem for machine intelligence, helping to speed inference insights and algorithm training.”
On the hardware side, there are three Radeon Instinct devices. The Radeon Instinct MI6 accelerator is built around AMD’s Polaris GPU architecture, and is a passively cooled card capable of 5.7 TFLOPS of peak FP16 performance. It’ll draw 150W of board power and come with 16GB of GPU memory.
Then, there’s the Radeon Instinct MI8 accelerator, a small-form-factor card built around the “Fiji” Nano GPU. Despite its compact size, it’ll be capable of 8.2 TFLOPS of peak FP16 performance while drawing under 175W of board power, and it comes with 4GB of High-Bandwidth Memory (HBM). Finally, the Radeon Instinct MI25 accelerator will be built on AMD’s next-gen Vega GPU architecture and draw under 300W. It’s designed for deep-learning training, the company says.
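For context on where those peak numbers come from: headline FP16 TFLOPS figures are typically derived as stream processors × two FMA operations per clock × clock speed. A minimal sketch, assuming shader counts and clock speeds that are not stated in AMD's announcement (the MI6 figures below resemble a Polaris 10 part, the MI8 figures the Fiji "Nano"):

```python
def peak_tflops(stream_processors: int, clock_ghz: float, ops_per_clock: int = 2) -> float:
    """Peak throughput = shaders x ops-per-clock (2 for an FMA) x clock, in TFLOPS."""
    return stream_processors * ops_per_clock * clock_ghz / 1000.0

# Assumed figures for illustration only -- AMD quotes just the final TFLOPS numbers.
mi6 = peak_tflops(2304, 1.233)  # lands near the quoted 5.7 TFLOPS for the MI6
mi8 = peak_tflops(4096, 1.000)  # lands near the quoted 8.2 TFLOPS for the MI8
print(f"MI6 ~{mi6:.1f} TFLOPS, MI8 ~{mi8:.1f} TFLOPS")
```

Real sustained throughput on machine-intelligence workloads will of course fall short of these peak figures, which assume every shader retires an FMA on every clock.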
Hardware is only half the platform, however; on the flip side there’s new open-source software. AMD’s ROCm software is already available, updated to work with Radeon Instinct, and intended to support the latest machine intelligence problem sets, with domain-specific compilers for linear algebra and tensors and an open compiler and language runtime. Meanwhile, AMD MIOpen, a free library for GPU accelerators, will follow in Q1 2017; the company says it’s designed for high-performance machine intelligence implementations.
AMD’s push comes as long-time rival NVIDIA is starting to see the fruits of its investment into GPU markets outside of traditional gaming and mobile devices. The chipmaker has been pushing its processors for in-car infotainment systems as well as in autonomous cars. Indeed, NVIDIA was recently granted permission to operate its own self-driving car research on California roads.
While NVIDIA has been focusing on that market for some time now, AMD is confident it has a couple of advantages that should help minimize its late start. That’s primarily down to the chip-maker’s “GPU and x86 silicon expertise,” it says, highlighting the fact that, though NVIDIA may have CPUs in its lineup, they’re ARM-based and focused on mobile devices. AMD argues that its x86 chips are better suited to drop into enterprise servers.
Either way, it’s a fledgling industry that’s likely to see significant increases in sales as more and more uses for deep learning and machine intelligence emerge. Parallel processing of multiple sensors will be vital for everything from self-driving cars understanding their environment in real time, through drones dispatched to autonomously monitor battlefields and crops, to smart cities tracking thousands of traffic signals, onramps, and more.