Broadcom set to retain leadership position as AI server compute ASIC partner through 2027: Counterpoint

Broadcom is projected to retain its leadership as an artificial intelligence server compute ASIC design partner with 60% of market share in 2027, according to an analysis by Counterpoint Research.

AI server compute ASIC shipments are expected to triple by 2027 as Google (GOOG)(GOOGL), Amazon Web Services (AMZN), OpenAI, Microsoft (MSFT), ByteDance and Apple (AAPL) accelerate deployments for training and inference workloads, Counterpoint said.

This rapid growth stems from demand for Google’s TPU infrastructure to support Gemini, sustained scaling of Amazon’s Trainium clusters, and ramp-ups of Meta’s MTIA and Microsoft’s Maia chips as those companies expand their in-house offerings.

“In-house AI Server Compute ASIC design growth is validating the in-house custom XPU era, where AI accelerators are tailor-made for special and specific workloads (training or inference), structurally diversifying beyond solely relying on general-purpose GPUs,” said Counterpoint analyst Neil Shah. “As power and space become a bottleneck, moving some AI workloads to vertically integrated silicon gives hyperscalers more control and leverage, but it also comes with significant software plumbing to optimize the AI workloads and enjoy the power and performance benefits.”

What’s more, Counterpoint expects AI server compute ASIC shipments to exceed 15M units in 2028 and surpass data center GPU shipments.

“The top 10 AI hyperscalers combined will deploy more than 40 million AI Server Compute ASIC chips cumulatively during 2024-2028,” Shah noted. “What is also supporting this unprecedented demand is AI hyperscalers building significant rack-scale AI infrastructure based on their in-house stacks, such as Google TPU Pods and AWS Trainium UltraClusters, enabling them to operate as one supercomputer.”

Taiwan Semiconductor Manufacturing (TSM) holds nearly 99% of the wafer fabrication share for the top 10 companies in AI server compute ASIC shipments.

Although Google and Amazon dominated the AI server compute ASIC shipment share in 2024, that is diversifying rapidly with Meta, Microsoft and others entering the domain. Google’s market share is expected to drop from 64% to 52% by 2027, while Amazon’s is projected to decline from 36% to 29%.

“Although Google’s market share is expected to fall to 52% in 2027 due to the expanding TAM and competing hyperscalers adopting internal silicon in partnership with design houses such as Broadcom, Marvell (MRVL) and Alchip, its TPU fleet will remain the undisputed volume backbone and north star of the industry,” Shah said. “This baseline is underpinned by the massive and sustained compute intensity required for training and serving next-generation Gemini models, which necessitates a continuous, aggressive ramp-up of internal silicon infrastructure.”

This demonstrates hyperscalers gradually decoupling from an overreliance on Nvidia (NVDA) and pursuing internal, customized silicon to meet some of their computing needs. Broadcom and Taiwan’s Alchip are expected to retain the bulk of partner share with these hyperscalers for ASIC design services, at 60% and 18%, respectively, by 2027. However, Marvell’s share is projected to decline from 12% to 8% over this time frame.

“Having said that, Marvell’s end-to-end custom chip portfolio looks more solid than ever, with its custom silicon innovations, such as its customized HBM/SRAM memory and PIVR solutions, and the Celestial AI acquisition broadening Marvell’s addressable market in scale-up connectivity,” said Counterpoint analyst Gareth Owen. “Celestial AI could not only add multi-billion-dollar increments to Marvell’s revenues every year but also potentially drive a leadership position in optical scale-up connectivity in the coming years.”
