
Microsoft-backed (NASDAQ:MSFT) OpenAI's potential use of Google's (NASDAQ:GOOG)(NASDAQ:GOOGL) tensor processing units, or TPUs, to run inference on its artificial intelligence workloads would be a significant endorsement of Google's hardware, according to Morgan Stanley.
A deal for Google's TPUs would mark a supplier diversification for OpenAI, which has historically relied on Nvidia's (NVDA) chips both to train its AI models and for inference, the process of running a trained model to generate outputs after training is complete.
“Following reports earlier this month that OpenAI and GOOGL were finalizing a deal for OpenAI to use Google Cloud’s compute capacity, new reports suggest OpenAI will also be renting GOOGL’s TPUs to power its inference workloads as part of the agreement,” said Morgan Stanley analysts, led by Brian Nowak, in a Monday investor note. “This comes as OpenAI looks to meet its surging inference demand and manage inference costs as well as possible. Note that OpenAI would not have access to GOOGL’s most powerful TPUs, which GOOGL is reserving to train its own Gemini models.”
Morgan Stanley noted that the deal could accelerate Google Cloud's growth and increase market confidence in Google's AI chips.
“We view OpenAI as the most notable TPU customer to date, others include Apple (AAPL), Safe Superintelligence and Cohere, and this agreement would be a significant endorsement of GOOGL’s AI infrastructure capabilities which have been in development for a decade,” Nowak said. “This would also be the first time OpenAI has used non-NVIDIA chips in a meaningful way, particularly interesting given OpenAI would opt to use TPUs despite the fact that they will not have access to the most advanced versions, speaking again to GOOGL’s leading position within the broader ASIC ecosystem.”
Still, Morgan Stanley indicated that the limited availability of Nvidia's GPUs, driven by high demand, likely played a role in OpenAI's decision to use Google's TPUs. The investment firm also said OpenAI's decision is not a good look for Amazon Web Services (AMZN) and its Trainium custom silicon chips.
“With the OpenAI/GOOGL partnership (if confirmed), OpenAI would now be running AI workloads across most major cloud providers including Google Cloud, Azure, Oracle (ORCL) and CoreWeave (CRWV)…with AMZN the notable player missing from the list,” Nowak noted.
“In this regard, the most notable aspect is the fact that OpenAI is reportedly choosing to use a prior generation of TPUs over Trainium,” he added.
Google shares ticked up 1% in early trading on Monday.