Amazon re:Invent creates positive implications for several semi, hardware stocks
Amazon Web Services (NASDAQ:AMZN) hosted re:Invent, its annual cloud computing conference, last week, and the event carried positive implications for several semiconductor, networking and hardware stocks, according to Citi Research.
The event also proved to be a catalyst for Amazon itself, as the company revealed a slate of new AI products and partnerships, prompting shares to climb 7.67% over the week.
Citi said AWS will likely remain the second-largest buyer of Nvidia (NASDAQ:NVDA) processors, even as Amazon highlighted its own artificial intelligence chips, including Trainium2 and the upcoming Trainium3.
“While the focus of the event was clearly on Amazon’s proprietary chips and servers, we estimate nevertheless that Amazon will continue to deploy an important number of NVDA chips in C2025 and remain the cloud provider with the 2nd largest NVDA AI GPU installed base, behind Microsoft (MSFT),” said Citi analysts, led by Atif Malik and Asiya Merchant, in an investor note.
The build-out of AWS Trainium2 UltraServer pods should benefit multiple intra-server component companies. Citi identifies Marvell (NASDAQ:MRVL), Broadcom (NASDAQ:AVGO) and Astera Labs (NASDAQ:ALAB) as likely key beneficiaries.
“Unlike the recent narrative of a pivot to smaller models to the detriment of larger models, AWS’ Senior Vice President of Utility Computing Peter DeSantis estimates that models are indeed getting larger,” Malik noted. “He estimates that frontier models will reach the trillions of parameters in the short-term … Peter highlighted a graph based on the Scaling Laws showing that to halve current AI model compute losses, 1Mx more compute is needed.”
Meanwhile, Wedbush highlighted the progress of AWS custom silicon, including Trainium, Graviton and Inferentia.
“Over 50% of new CPU capacity over the last two years in AWS datacenters has been on AWS Graviton and 90% of the top 1,000 EC2 customers use Graviton,” said Wedbush analyst Scott Devitt, in an investor note.
“While the vast majority of AI workloads are running on Nvidia GPUs, we are impressed by the progress Amazon has made with its custom AI chips (Trainium and Inferentia), and customer examples, including from Apple (AAPL), spoke to real-world cost savings achieved by leveraging Amazon’s custom silicon,” he added.
Amazon also demonstrated it is optimizing costs for enterprise customers while also providing them with the tools to drive efficiency in AI workloads.
“In addition to better price performance from AWS’ custom silicon, the company introduced the Amazon Nova family of foundation models which operate at a lower cost on Bedrock relative to leading peers,” Devitt said. “Amazon Nova Micro, Amazon Nova Lite, and Amazon Nova Pro are at least 75% less expensive than the best performing models within their respective intelligence classes and run faster in Bedrock than competing models.”
The event showed demand was strong across all three layers of the generative AI stack, with Amazon Bedrock having secured tens of thousands of customers, a 500% increase in the past year.