Google’s Most Promising Source Of Growth Right Now
Summary:
- Google Cloud is securing key first-mover advantages to lead the AI race.
- Google is successfully driving network effects around its cloud platform.
- Google Cloud’s profitability could be difficult to sustain.
Cloud Service Providers (CSPs) will be some of the biggest beneficiaries of the AI revolution, as organizations across industries strive to build generative AI-powered products and services, creating enormous demand for cloud computing power to train and run inference on their models. Google Cloud is advancing aggressively in providing optimized infrastructure that allows customers to easily build and deploy their own models through Vertex AI. However, Google will need to invest heavily in cloud infrastructure to keep pace with rivals like Azure, which could make sustained profitability difficult.
Google Cloud is Google’s fastest-growing segment, with revenue growing at a 5-year CAGR of 45% (between 2017 and 2022), and the segment will continue enjoying high growth rates amid the AI revolution.
Vertex AI
Google’s Vertex AI is a cloud-based machine learning platform, aimed at simplifying the development, deployment, and management of machine learning models and AI applications. Within Vertex AI, developers are able to access Google’s AI foundation models.
Foundation models are the fundamental building blocks that serve as a starting point for creating various AI applications and services in the cloud, which developers can then adapt to fit their own business or industry needs. They essentially form the base of generative AI-powered products and services. For instance, Google’s PaLM 2 is a foundation model that powers Bard, as well as other Google services.
Foundation models require enormous amounts of computing power to train, making the process time-consuming and expensive. The largest cloud providers are well-capitalized to embark on such capital-intensive endeavors, but for smaller organizations, building their own foundation models would not be feasible unless they receive financial backing from a larger player.
Therefore, foundation models foster the competitive moats around the major CSPs’ cloud services aimed at AI innovation, with smaller organizations dependent on these models to form the base of their own innovations. For cloud providers, the goal is to offer numerous, versatile foundation models, accompanied by comprehensive software stacks and development tools, that can be applied to a wide range of use cases across industries, subsequently generating more diversified revenue.
Google Cloud is advancing aggressively in offering a comprehensive suite of foundation models within its Vertex AI platform to serve various end use cases. Google’s main foundation model currently is PaLM 2, a large language model with advanced capabilities such as multilingualism and reasoning.
Large language models offer a wide range of use cases, of which the most prominently discussed is the development of intelligent chatbots and virtual assistants that can understand and respond to user queries more seamlessly.
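To make this more concrete, here is a minimal sketch of how a developer might build such a chatbot call against a PaLM 2 model through the Vertex AI SDK for Python. The project ID, grounding context, and prompts are illustrative placeholders, and exact model names and package paths may vary by SDK version.

```python
# Minimal sketch: a PaLM 2-backed chatbot call through the Vertex AI SDK for Python.
# Assumes the google-cloud-aiplatform package is installed and the caller is
# authenticated against a GCP project; project, location, and prompts are placeholders.
import vertexai
from vertexai.language_models import ChatModel

vertexai.init(project="my-gcp-project", location="us-central1")

# Load a PaLM 2-based chat foundation model hosted on Vertex AI.
chat_model = ChatModel.from_pretrained("chat-bison@001")

# Start a chat session with some business-specific grounding context.
chat = chat_model.start_chat(
    context="You are a support assistant for an online retailer.",
)

# Send a user query and print the model's reply.
response = chat.send_message("Where is my order? I placed it three days ago.")
print(response.text)
```

The point for investors is that every such call, and every round of fine-tuning behind it, is metered consumption of Google Cloud infrastructure.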
As large language models become more advanced, their range of applications broadens. As generative AI is adopted across industries, new products and services will emerge, and the new data sources they generate will enable enterprises to retrain their models. This continuous cycle of refinement steadily increases training workloads, meaning the demand for computational power will persistently increase over time as enterprises retrain and optimize models on larger datasets. The ongoing need for training and inferencing means sustained demand for Google Cloud’s computing power, conducive to recurring revenue over the long term.
In fact, Google also introduced specialized versions of its PaLM 2 large language model in May 2023, including Sec-PaLM, which is aimed at cybersecurity analysis, and Med-PaLM 2, focused on answering medical questions. Google is exploring ways for Med-PaLM 2 to be used by healthcare experts to “find insights in complicated and unstructured medical texts… help draft short- and long-form responses and summarize documentation and insights from internal data sets and bodies of scientific knowledge”. Google’s Med-PaLM 2 advances the company’s push into the healthcare industry.
So, what does this mean for investors? As new diseases arise and medical professionals continuously learn how patients’ bodies react to newly discovered drugs, the healthcare industry will persistently generate new forms of data. That data requires regular re-training and fine-tuning of AI models, conducive to continuous demand for Google Cloud’s AI-centric services, and therefore long-term revenue growth. Additionally, industry-specific models like Med-PaLM 2 enhance Google Cloud’s network effects: they attract customers’ own developers, as well as third-party developers, to build healthcare-related applications and tools around Google Cloud, which in turn attracts even more healthcare companies, and so the virtuous cycle continues, enhancing Google’s revenue-generation prospects.
Competition for cloud business from growing industries like healthcare will be intense, as Microsoft Azure and Amazon’s AWS boast their own industry-specific generative AI capabilities. Nonetheless, Google Cloud is aggressively advancing the capabilities of its Vertex AI platform to help customers progress more rapidly in fine-tuning models, particularly with the introduction of Reinforcement Learning from Human Feedback (RLHF):
Vertex AI is the first end-to-end machine learning platform among the hyperscalers to offer RLHF as a managed service offering, helping organizations to cost-efficiently maintain model performance over time and deploy safer, more accurate, and more useful models to production. This unique tuning feature lets organizations incorporate human feedback to train a reward model that can be used to finetune foundation models. This is particularly useful in industries where accuracy is crucial, such as healthcare
RLHF enables more targeted fine-tuning using human feedback expressed in natural language rather than code, letting companies iterate their products more thoroughly to ensure accurate and reliable responses as generative AI-powered services evolve and businesses learn how end-users are actually using them. Being a first mover here should help Google Cloud attract organizations to the Vertex AI platform, particularly from industries where information accuracy is critical, allowing it to take market share in the cloud industry.
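As a rough illustration of what “incorporating human feedback” involves, the sketch below assembles a tiny human-preference dataset of the kind an RLHF tuning pipeline consumes (a prompt, two candidate responses, and a reviewer’s choice) and writes it out as JSONL. The field names and file path are hypothetical, for illustration only, and are not Vertex AI’s documented schema.

```python
# Illustrative sketch only: the shape of human-preference data used to train a
# reward model in RLHF. Field names and the output path are hypothetical and
# do not represent Vertex AI's actual dataset schema.
import json

preference_records = [
    {
        "prompt": "Summarize the patient discharge note in plain language.",
        "candidate_a": "Patient stable, follow up in two weeks.",
        "candidate_b": "You are recovering well; please schedule a follow-up "
                       "visit in two weeks and keep taking your prescribed medication.",
        "human_choice": "candidate_b",  # reviewer preferred the clearer, more complete answer
    },
]

# Write the records as JSONL, one preference example per line.
with open("preference_data.jsonl", "w") as f:
    for record in preference_records:
        f.write(json.dumps(record) + "\n")
```

A reward model trained on comparisons like these then scores new outputs, and the foundation model is fine-tuned to favor the responses the reward model rates highly.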
Furthermore, the ability to iterate AI-powered products more effectively through RLHF lets companies roll out improved products at a faster pace, which should be conducive to better pricing power and faster revenue generation for them. And as Google Cloud’s customers generate higher revenue more rapidly thanks to generative AI advancements, Google Cloud in turn gains pricing power over these tools and can extract revenue from customers at a faster pace as those businesses continuously iterate their services to stay competitive themselves.
Earlier this year, Google Cloud also made another key first move in relation to its cloud-based virtual machines (VMs), with the introduction of G2 VMs in March:
G2 is the industry’s first cloud VM powered by the newly announced NVIDIA L4 Tensor Core GPU, and is purpose-built for large inference AI workloads like generative AI. G2 delivers cutting-edge performance-per-dollar for AI inference workloads that run on GPUs in the cloud. By switching from NVIDIA A10G GPUs to G2 instances with L4 GPUs, organizations can lower their production infrastructure costs up to 40%. We also found that customers switching from NVIDIA T4 GPUs to L4 GPUs can achieve 2x-4x better performance.
Being the first CSP to offer specialized cloud VMs powered by Nvidia’s L4 Tensor Core GPUs not only allows Google Cloud to lure more businesses to its platform for building, training, and inferencing their models, but also enables it to grow the developer community around the platform. Moreover, the move should help Google attract partners to its Google Cloud Partner Advantage program, including system integrators that help customers combine Google Cloud solutions with their existing IT infrastructure, applications, and data systems, as well as technology partners such as independent software vendors (ISVs) and software-as-a-service (SaaS) providers who develop applications, platforms, and services that enhance the functionality, performance, and interoperability of Google’s offerings. In fact, on the last earnings call, CEO Sundar Pichai proudly proclaimed:
We have also built a strong partner ecosystem. Over the last 4 years, the number of Google Cloud Partner certified practitioners around the world has increased more than 15x. The largest global system integrators have built 13 dedicated practices with Google Cloud compared to zero when we started. And today, more than 100,000 companies are part of our Google Cloud Partner Advantage program.
Being a first-mover in offering Nvidia’s most advanced GPUs for inferencing should further strengthen this network effect around Google’s cloud-based VMs, and the Google Cloud Platform more broadly, conducive to more revenue growth. The move also indicates that Google will not shy away from making the necessary capital-intensive investments to foster a leadership position in the cloud industry’s AI race.
Sustaining Google Cloud Profitability
In Q1 2023, Google Cloud became profitable for the first time through effective cost management, delivering an operating margin of around 3%, a notable improvement over the same period last year, when the operating margin was -12%. Keep in mind, though, that Google Cloud still only makes up 11% of total revenue, subduing its impact on company-wide profitability.
The test for Google now is how well it can sustain profitability given the onset of the AI revolution, which will require heavy capital investments to stay ahead in the race. On the last earnings call, when an analyst asked about the outlook for Google Cloud’s profitability going forward, CFO Ruth Porat said:
“I tried to make that clear in my opening comments as well. I think it’s a really important question. We are very pleased with the Q1 results. And as both Sundar and I noted, we are intensely focused on all elements of the cost base and the long-term path to attractive profitability. At the same time, I think at the core of your question, and what we were trying to convey is we will continue to invest to support long-term growth, in particular, given the opportunities we see delivering AI capabilities to our customers. So, as I have said in the past, you shouldn’t extrapolate from quarter-to-quarter, but we are very pleased to be at this level and are continuing to focus on profitability and long-term value creation here.”
Given the intensifying AI arms race, Google is right to prioritize long-term investments and fight for market share growth at the expense of near-term profitability. However, the executives fell short of offering a strong revenue growth outlook for Google Cloud, which investors expect to be a big beneficiary of organizations across the world transforming their businesses for the era of generative AI. Guidance suggesting that AI-driven revenue growth should help sustain profitability despite Google’s heavy investments in AI infrastructure would have been ideal.
Keep in mind that while Google Cloud embarks on large-scale AI-related investments, other cloud players are investing just as heavily to grow their own market share, intensifying the pressure on revenue growth and sustained profitability. In fact, on Microsoft’s last earnings call, CFO Amy Hood emphasized:
“I’ll tell you that the energy and focus we put right now is on relative performance and share gains.
… it is, in fact, how we think about long-term success, in being well positioned in big markets, taking share in those markets, committing to make sure we’re going to lead this wave, staying focused on gross margin improvements where we can.”
Investors will be watching closely for cloud-related forecasts when both companies report earnings next month. Nonetheless, a consistent focus on cost efficiency should enable Google Cloud to sustain profitability over the long term, and the segment is much less likely to be a drag on Google’s bottom line going forward. In fact, given that Google Cloud is still operating at around break-even, investors are essentially getting the segment for free within Google’s valuation of 22.91x forward earnings.
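As a toy illustration of the “getting Google Cloud for free” argument, the sketch below applies the 22.91x forward multiple to a purely hypothetical earnings split; it is back-of-the-envelope arithmetic, not a valuation model.

```python
# Toy sum-of-the-parts arithmetic. All earnings figures are hypothetical
# placeholders; only the 22.91x forward P/E comes from the article.
forward_pe = 22.91

# Hypothetical forward earnings split: Cloud operates around break-even,
# so essentially all forward earnings come from the rest of the business.
cloud_forward_earnings = 0.0        # break-even segment
rest_of_business_earnings = 100.0   # hypothetical, in arbitrary units

implied_value = forward_pe * (cloud_forward_earnings + rest_of_business_earnings)
print(f"Implied value: {implied_value:.1f}")  # 2291.0 -- priced off non-Cloud earnings

# Any future Cloud profit adds earnings the current multiple is not paying for,
# which is the sense in which investors get the segment "for free" today.
```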
Bottom Line
Google Cloud’s Vertex AI platform offers promising growth potential as organizations increasingly demand cloud computing power for generative AI-powered products and services. Google Cloud is aggressively advancing its foundation models and capabilities, such as RLHF and specialized Cloud VMs powered by Nvidia’s L4 Tensor Core GPUs, to compete with rivals like Azure and AWS.
Nonetheless, sustaining profitability may be challenging for Google Cloud due to heavy investments in AI infrastructure and intense competition, while cost efficiency and long-term value creation remain priorities.
Keep in mind, though, that at present Google Cloud still only makes up 11% of total revenue, subduing its impact on company-wide profitability. Google’s main source of revenue remains advertising, contributing 78% of total revenue, and in a previous article Nexus Research discussed the challenges facing Google’s advertising division, leading to a ‘hold’ rating on the stock. Therefore, despite the promising growth potential of Google Cloud, Nexus Research maintains a ‘hold’ rating on the stock.
Analyst’s Disclosure: I/we have no stock, option or similar derivative position in any of the companies mentioned, and no plans to initiate any such positions within the next 72 hours. I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.
Seeking Alpha’s Disclosure: Past performance is no guarantee of future results. No recommendation or advice is being given as to whether any investment is suitable for a particular investor. Any views or opinions expressed above may not reflect those of Seeking Alpha as a whole. Seeking Alpha is not a licensed securities dealer, broker or US investment adviser or investment bank. Our analysts are third party authors that include both professional investors and individual investors who may not be licensed or certified by any institute or regulatory body.