AMD’s 2016-2017 Datacenter Roadmap: x86, ARM, and GPGPU
by Ryan Smith on May 6, 2015 3:12 PM EST
As part of AMD’s business unit reorganization in 2014, many of AMD’s high-growth businesses were organized into a new group at the company, the Enterprise, Embedded, and Semi-Custom Business (EESC). Now a few quarters into that reorganization, Forrest Norrod, the Senior VP and General Manager of the EESC, was on hand at FAD to present AMD’s specific plans for that business for the next 2 years.
Forrest’s comments on the embedded and semi-custom businesses generally reflected AMD’s earlier comments on their three growth opportunities, but I wanted to call specific attention to AMD’s datacenter plans, which Forrest went into in more detail.
AMD’s datacenter plans for the next couple of years will see AMD taking a three-pronged approach to the market. On the CPU side, AMD will of course be leveraging their forthcoming x86 Zen and ARM K12 CPU designs in various fashions. Zen, as previously discussed, should offer a significant increase in IPC and improve AMD’s competitive positioning in the x86 space. And since AMD is launching their high-performance desktop Zen CPU as the first Zen product, the implication is that the server version of that product should not be too far behind.
Meanwhile over the next couple of years AMD will be leveraging the Opteron A1100 “Seattle”, followed by the K12 in 2017. AMD is positioning their ARM datacenter products as being primed for efficiency, whereas their x86 products are primed for high native I/O capacity and high overall performance. Opteron A1100 will ship later this year, though I’m still unsure just how much adoption it is going to see before the K12 arrives in 2017.
Last but certainly not least however, I wanted to call attention to AMD’s GPU/APU plans in the datacenter space. AMD already has a presence with GPU accelerator cards in the form of the FirePro S-series, with these cards being the basis of their high performance computing (HPC) and virtual desktop infrastructure (VDI) initiatives. AMD is expecting these markets to continue to grow as customers increasingly turn towards GPUs for better compute throughput, and VDI for remote client hosting and the server-side use cases it was designed for.
But the most interesting thing about this roadmap is AMD’s “high-performance server APU”. To date, the closest AMD has come to a server APU is their FirePro APUs, which are versions of AMD’s standard desktop APUs with the ability to use their FirePro driver set, and are intended for workstations. Consequently the creation of a high-performance server APU represents a new product within AMD’s portfolio, as they have never done something quite like this before.
AMD isn’t saying much about this APU, but it will be a multi-teraflops chip for both HPC and workstations, which implies something significantly more powerful than today’s Kaveri APUs, and much closer to the performance of some of AMD’s discrete GPUs. The end-game here of course is to leverage HSA and the close proximity of CPU and GPU in a way no other vendor currently can, to deliver high performance in workloads that benefit from close CPU/GPU interaction. The fact that AMD is targeting this at HPC makes me curious just what they’re planning for FP64 performance, but they could just as well be going after markets such as oil & gas and machine learning where FP32 and FP16 are sufficient. Meanwhile there is also a very good reason to suspect that this may be the first place AMD implements HBM for an APU, as it would be in line with their previous comments about doing HBM with more than just discrete GPUs, and this is a high-margin market that would be suitable for a higher-cost feature like HBM.
Ultimately AMD’s plans for the datacenter are ambitious, but if executed well they seem achievable. After being pushed out of the x86 server market, AMD is making a concerted effort to re-enter it via the Zen CPU, and AMD’s APUs offer an interesting, alternative take on getting there. That said, we should also expect to see AMD significantly leaning on open industry standards to get back into the datacenter space, and consequently making part of their argument the fact that they are not Intel, and consider themselves to be more open towards working with partners and customers.