Computex Chronicles Part 2: AMD Leaps Into Copilot+ PCs And Outlines Infrastructure GPU Roadmap
Summary:
- AMD CEO Dr. Lisa Su’s official opening keynote for Computex introduced a new CPU architecture called Zen 5, as well as a new NPU architecture called XDNA2, which supports the Block Floating Point (Block FP16) data type.
- AMD also had new offerings in the AI PC space – a new series of mobile-focused parts for the new Copilot+ PCs that Microsoft and other partners announced just a few weeks ago.
- On the datacenter side of things, AMD previewed both their latest 5th Generation Epyc CPUs (codenamed Turin) and their Instinct MI300 series GPU accelerators.
The second in this year’s series of Computex CEO speeches – and the official opening keynote – was given by AMD (NASDAQ:AMD) CEO Dr. Lisa Su. AMD is in the unique position of being the primary competitor to Nvidia (NVDA) on GPUs (both for infrastructure-focused AI acceleration and in PCs) and to Intel (INTC) for CPUs (both for servers and PCs), so their efforts are getting more attention than they ever have.
Given the company’s wide product portfolio, it’s probably not surprising to hear that Dr. Su’s keynote covered a broad range of topics – and even a few that didn’t have to do with GenAI! Key to the news was a new CPU architecture called Zen 5, as well as a new NPU architecture called XDNA2. What’s particularly interesting about XDNA2 is that it supports something called the Block Floating Point (Block FP16) data type, which offers the speed of 8-bit integer math with the accuracy of 16-bit floating point. According to AMD, it’s a new industry standard – meaning it’s something existing models can leverage – and AMD’s implementation is the first to be done in hardware.
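The core idea behind block floating point is simple enough to sketch in a few lines of Python: a group of values shares a single exponent, while each individual value keeps only a small integer mantissa – which is what lets hardware run integer-speed arithmetic with near-floating-point accuracy. The block size and mantissa width below are illustrative choices for the sketch, not AMD’s actual Block FP16 layout.

```python
import math

def bfp_quantize(values, block_size=8, mantissa_bits=8):
    """Quantize floats into a toy block-floating-point format.

    Each block of `block_size` values shares one exponent; each value
    is stored as a small signed-integer mantissa. (Hypothetical
    parameters for illustration only.)
    """
    blocks = []
    for i in range(0, len(values), block_size):
        block = values[i:i + block_size]
        # Pick a shared exponent so the largest magnitude in the block
        # still fits into the signed mantissa range.
        max_abs = max(abs(v) for v in block) or 1.0
        mantissa_max = 2 ** (mantissa_bits - 1) - 1
        exp = math.ceil(math.log2(max_abs / mantissa_max))
        scale = 2.0 ** exp
        mantissas = [round(v / scale) for v in block]  # integer math from here on
        blocks.append((exp, mantissas))
    return blocks

def bfp_dequantize(blocks):
    """Reconstruct approximate floats from (exponent, mantissas) blocks."""
    out = []
    for exp, mantissas in blocks:
        scale = 2.0 ** exp
        out.extend(m * scale for m in mantissas)
    return out
```

The payoff is that, within a block, multiply-accumulate operations run on the integer mantissas alone, with the shared exponent applied once at the end – roughly why AMD can claim integer-class throughput at floating-point-class accuracy.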
The Computex show has a long history of being a critical launching point for traditional PC components, and AMD started things off with their next-generation desktop PC parts – the Ryzen 9000 series – which don’t have a built-in NPU. What they do have, however, is the kind of performance that gamers, content creators and other DIY PC system builders are constantly in search of for traditional PC applications – and let’s not forget, that’s still important, even in the era of AI PCs.
Of course, AMD also had new offerings in the AI PC space – a new series of mobile-focused parts for the new Copilot+ PCs that Microsoft and other partners announced just a few weeks ago. AMD chose to brand them as the Ryzen AI 300 series to reflect the fact that they are the third generation of laptop chips built by AMD with an integrated NPU. A little-known fact is that AMD’s Ryzen 7040 – announced in January of 2023 – was the first x86 PC chip with built-in AI acceleration, and it was followed by the Ryzen 8040 at the end of last year.
The fact that a new chip was coming wasn’t a big surprise – AMD had even said so at the 8040 launch – but what was unexpected is how much new technology AMD has integrated into the AI 300 (which was codenamed Strix Point). It features the new Zen 5 CPU core, an upgraded GPU architecture they’re calling RDNA 3.5 and a new NPU built on the XDNA 2 architecture that offers an impressive 50 TOPS (trillions of operations per second) of performance.
What’s also surprising is how quickly laptops with Ryzen AI 300s are coming to market. Systems are expected in July of this year, just a few weeks after the first Qualcomm (QCOM) Snapdragon X-powered Copilot+ PCs are going to ship. One big challenge, however, is that the x86 CPU and AMD-specific NPU versions of the Copilot+ software won’t be ready when these AMD-powered PCs first ship. Apparently, Microsoft didn’t expect x86 vendors like AMD and Intel to be done so soon and prioritized their work for the Arm-based Qualcomm devices. As a result, these will be Copilot+ “ready” systems, meaning they’ll need a software upgrade – likely in the early fall – to make them full-blown next-generation AI PCs.
Still, this vastly sped-up time frame – which Intel is also widely expected to announce for their new chips at their keynote this week – has been incredibly interesting and impressive to watch. Early on in the development of AI PCs, the common thought was that Qualcomm would have about a 12-18-month lead over both AMD and Intel to develop a part that met Microsoft’s 40+ NPU TOPS performance specs. The strong competitive threat from Qualcomm, however, inspired the two PC semiconductor stalwarts to move their schedules forward, and it looks like they’ve succeeded. It’s one of the many reasons why the AI PC market has already proven to be an exciting (and inspiring) development to watch.
On the datacenter side of things, AMD previewed both their latest 5th Generation Epyc CPUs (codenamed Turin) and their Instinct MI300 series GPU accelerators. As with the PC chips, AMD’s latest server CPU products are built around the new Zen 5 CPU core architecture, with competitive performance improvements for certain AI workloads that are as much as 5x faster than Intel equivalents. For GPU accelerators, AMD announced the Instinct MI325, which offers twice the HBM3E memory of any card on the market. More importantly, as Nvidia did last night, AMD also unveiled an annual improvement cadence for its GPU accelerator line and offered details through 2026. Next year’s MI350, which will be based on a new CDNA4 GPU compute architecture, will leverage both the increased memory capacity and this new architecture to deliver an impressive 35x improvement versus current cards. For perspective, AMD believes it will give them a performance lead over Nvidia’s latest-generation products.
AMD is one of the few companies that’s been able to gain any traction against Nvidia for large-scale acceleration, so any enhancements to this line of products are bound to be well-received by anyone looking for an Nvidia alternative – both large cloud computing providers and enterprise data centers.
Taken as a whole, the AMD story continues to advance and impress. It’s kind of amazing to see how far the company has come in the last 10 years, and it’s clear they continue to be a driving force in the computing and semiconductor world.
Disclaimer: Some of the author’s clients are vendors in the tech industry.
Disclosure: None.
Original Source: Author
Editor’s Note: The summary bullets for this article were chosen by Seeking Alpha editors.