Amazon’s AWS Extends Computing Options And Services
Summary:
- At Amazon’s annual re:Invent conference this year, AWS debuted several new computing options as well as a staggering array of data organization, analysis, and connection tools and services.
- Amazon’s ongoing emphasis on custom silicon and its increasingly diverse range of computing options represent a comprehensively impressive set of tools for companies looking to move more of their workloads to the cloud.
- AWS now offers over 600 different Elastic Compute Cloud (EC2) computing instances, each consisting of a different combination of CPU and other acceleration silicon, memory, network connections, and more.
Ever since Amazon (NASDAQ:AMZN) launched its cloud computing arm, Amazon Web Services (AWS), in 2006, the company has been on a mission not only to convert the world to its vision of how computing resources can be purchased and deployed, but also to make those resources as ubiquitous as possible. That strategy was on clear display at this year’s iteration of its annual re:Invent conference. AWS debuted several new computing options – some based on its own new custom silicon designs – as well as a staggering array of data organization, analysis, and connection tools and services.
The sheer number and complexity of many of the new features and services that were unveiled make it difficult to keep track of all the choices now available to customers. Rather than being the outcome of unchecked development, however, the abundance of capabilities is by design. As new AWS CEO Adam Selipsky pointed out during his keynote and other appearances throughout the conference, the organization is customer-obsessed. As a result, most of its product decisions and strategies are based on customer requests. It turns out that when you have lots of different types of customers with different types of workloads and requirements, you end up with a complex array of choices.
Realistically, of course, that kind of approach will reach a logical limit at some point, but in the meantime, it means that the extensive range of AWS products and services likely represents a mirror image of the totality (and complexity) of today’s enterprise computing landscape. In fact, there’s a wealth of insight into enterprise computing trends waiting to be gleaned from an analysis of which services are being used, to what degree, and how that usage has shifted over time – but that’s a topic for another time.
In the world of computing options, the company acknowledged that it now has over 600 different Elastic Compute Cloud (EC2) computing instances, each of which consists of different combinations of CPU and other acceleration silicon, memory, network connections, and more. While that’s certainly a hard number to fully appreciate, it once again indicates how diverse today’s computing demands have become. From cloud-native, AI or ML-based, containerized applications that need the latest dedicated AI accelerators or GPUs to legacy “lifted and shifted” enterprise applications that only use older x86 CPUs, cloud computing services like AWS now need to be able to handle all of the above.
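Some of that diversity is visible in the instance names themselves, which follow a loose convention: family letters, a generation digit, attribute suffixes, and a size (for example, "c7gn.16xlarge" is a compute-optimized, 7th-generation instance where "g" denotes Graviton and "n" denotes network optimization). As an illustration only – this is a hypothetical helper sketching that convention, not an AWS API – a decoder might look like:

```python
import re

def parse_instance_type(name: str) -> dict:
    """Decode the rough EC2 instance-type naming convention:
    family letters + generation digit + attribute suffixes + "." + size.
    Illustrative sketch only, not an official AWS parser."""
    family_part, _, size = name.partition(".")
    m = re.match(r"([a-z]+?)(\d+)([a-z]*)$", family_part.lower())
    if not m:
        raise ValueError(f"unrecognized instance type: {name}")
    family, generation, attrs = m.groups()
    return {
        "family": family,           # e.g. 'c' = compute optimized, 'hpc' = HPC
        "generation": int(generation),
        "attributes": list(attrs),  # e.g. 'g' = Graviton, 'n' = network optimized
        "size": size or None,
    }

print(parse_instance_type("c7gn.16xlarge"))
print(parse_instance_type("hpc7g.16xlarge"))
```

Multiply a handful of families by generations, attribute combinations, and a dozen or so sizes each, and a catalog of 600+ instance types follows naturally.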
New entries announced this year include several based on Intel’s (INTC) 3rd-Generation Xeon Scalable processors with varying numbers of CPU cores, amounts of memory, and more. What received the most attention, however, were instances based on three of Amazon’s own new silicon designs. The Hpc7g instance is built around an updated version of the Arm-based Graviton3 processor, dubbed the Graviton3E, which the company claims offers 2x the floating-point performance of the previous Hpc6a instance and a 20% overall performance improvement versus the current Graviton3-based C7g.
As with many instances, Hpc7g is targeted at a specific set of workloads – in this case, High Performance Computing (HPC), such as weather forecasting, genomics processing, fluid dynamics, and more. Even more specifically, thanks to optimizations designed to ensure the best price/performance for these HPC applications, it’s ideally suited to the kinds of tightly coupled workloads that often end up running across thousands of cores. What’s interesting about this is that it demonstrates both how far Arm-based CPUs have advanced in terms of the types of workloads they can take on and the degree of refinement that AWS is bringing to its various EC2 instances.
Separately, in several other sessions, AWS highlighted the momentum behind Graviton usage for many other types of workloads as well, particularly cloud-native containerized applications from AWS customers like DirecTV and Stripe. One intriguing insight that came out of these sessions is that, because of the nature of the tools used to develop these types of applications, the challenges of porting code from x86 to Arm-native instructions (once believed to be a huge stumbling block for Arm-based server adoption) have largely gone away. Instead, all that’s required is switching a few options before the code is compiled and deployed on the instance. That makes further growth in Arm-based cloud computing significantly more likely, particularly for newer applications. Of course, some of these organizations want to build completely instruction set-agnostic applications in the future, which would seemingly make instruction set choice irrelevant. Even in that situation, however, compute instances that offer better price/performance or performance-per-watt ratios – as Arm-based CPUs often do – would remain the more attractive option.
For ML workloads, Amazon unveiled its second-generation Inferentia processor as part of its new Inf2 instance. Inferentia2 is designed to support ML inferencing on models with billions of parameters, such as many of the new large language models for applications like real-time speech recognition that are currently in development. The new architecture is specifically designed to scale across thousands of cores, which is what these enormous new models, such as GPT-3, require. In addition, Inferentia2 includes support for a mathematical technique known as stochastic rounding, which AWS describes as “a way of rounding probabilistically that enables high performance and higher accuracy as compared to legacy rounding modes.” To take best advantage of the distributed computing, the Inf2 instance also supports a next-generation version of the company’s NeuronLink ring network architecture, which supposedly offers 4x the performance and 1/10th the latency of existing Inf1 instances. The bottom-line translation is that it can offer 45% higher performance per watt for inferencing than any other option, including GPU-powered ones. Given that inferencing power consumption needs are often nine times higher than what’s needed for model training, according to AWS, that’s a big deal.
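The core idea behind stochastic rounding is simple: round up with probability equal to the fractional part, so that rounding errors cancel out on average rather than accumulating a bias – something that matters when billions of low-precision operations are chained together. A minimal Python sketch of the concept (illustrative only, not AWS’s hardware implementation):

```python
import math
import random

def stochastic_round(x: float) -> int:
    """Round x to an integer probabilistically: round up with
    probability equal to the fractional part, down otherwise.
    The expected value of the result equals x, so rounding error
    averages out to zero across many operations."""
    lower = math.floor(x)
    frac = x - lower
    return lower + (1 if random.random() < frac else 0)

# Round-to-nearest always maps 0.3 to 0 (a persistent bias), while
# stochastic rounding returns 1 about 30% of the time, so the mean
# over many samples converges to the true value.
samples = [stochastic_round(0.3) for _ in range(100_000)]
print(sum(samples) / len(samples))  # close to 0.3 on average
```

In hardware, the same trick is applied when accumulating into low-precision formats, which is why AWS frames it as delivering both performance and accuracy gains over legacy rounding modes.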
The third new custom-silicon-driven instance is called C7gn, and it features a next-generation AWS Nitro networking card equipped with fifth-generation Nitro chips. Designed specifically for workloads that demand extremely high throughput, such as firewalls, virtual networking, and real-time data encryption/decryption, C7gn is purported to have 2x the network bandwidth and 50% higher packet processing per second than previous instances. Importantly, the new Nitro cards are able to achieve those levels with a 40% improvement in performance per watt versus their predecessors.
All told, Amazon’s ongoing emphasis on custom silicon and its increasingly diverse range of computing options represent a comprehensively impressive set of tools for companies looking to move more of their workloads to the cloud. As with many other aspects of its AWS offerings, the company continues to refine and enhance what has clearly become a very sophisticated, mature set of computing tools. Collectively, they offer a notable and promising view into the future of computing and the new types of applications it can enable.
Disclaimer: Some of the author’s clients are vendors in the tech industry.
Disclosure: None.
Source: Author
Editor’s Note: The summary bullets for this article were chosen by Seeking Alpha editors.