Bank of America reiterated its Buy ratings on Nvidia (NVDA), AMD (AMD), and Broadcom (AVGO), amid reports that Meta (META) is considering using Google’s (GOOG) (GOOGL) tensor processing units to supplement its existing Nvidia GPU supply.
“Late yesterday, media reports indicated the possibility of Google renting out the TPUs to Meta next year, potentially followed by on-premise deployments (with Meta and maybe others) in 2027,” Bank of America analyst Vivek Arya wrote in a note to clients. “Neither company has made any official comments regarding any such transaction, but if true, it can intensify the competitive landscape for Meta’s current GPU suppliers NVDA and AMD.”
Broadcom co-designs Google’s tensor processing units as its custom-chip partner.
Despite that, Arya sees the AI accelerator race as a rising tide lifting all boats. He expects the total addressable market for AI data centers to grow roughly fivefold by the end of the decade, reaching about $1.2T, up from $242B this year. Nvidia is still expected to dominate the market, though its share may slip to around 75% from the estimated 85% it holds today, Arya explained.
“There are advantages to a merchant GPU chip – off the shelf availability, multi-cloud portability, NVDA’s full-stack of software and developers, as well as a larger [total addressable market] with sovereign and enterprise on-premise customers who don’t have the expertise to build custom chips,” Arya added. “In addition, the tight supply chain and NVDA’s scale advantages make it tougher to ‘steal’ too much share since not enough components are available in the near/medium term and hence share changes if any are likely to be gradual in nature. Custom chips can be lower-cost for a specific range of internal workloads that might suit customers with large internal workloads such as Google and perhaps Meta. However, they are less useful in a public cloud such as at Microsoft Azure or Amazon Web Services or the 100+ neoclouds where intense levels of flexibility are required which is why even Google uses GPUs in its public cloud.”