The AI Gold Rush: How Nvidia Stands To Gain Amidst Surging GPU Demand
Summary:
- Nvidia’s strong performance is driven by the rapid advancements in the AI sector, leading to a potential shortage of server GPUs and benefiting the company’s entire GPU portfolio.
- The booming AI industry has driven major companies and startups alike to increase GPU demand, with even Elon Musk announcing plans to launch an AI platform called “TruthGPT.”
- Despite fears of competition, Nvidia’s key competitive advantage lies in CUDA: millions of developers are trained on it, making its hold on developers difficult for competitors to overcome.
Two weeks ago, we published an optimistic article on Nvidia (NASDAQ:NVDA), arguing that the stock remains attractive despite its 70% year-to-date surge, given AI’s enormous long-term potential. A common counterargument we received was NVDA stock’s high valuation; however, we believe the consensus estimates that underpin the stock’s multiples are more likely to be revised upward than downward. Since our previous article, the stock has continued its climb and is now up nearly 100% year-to-date. Although it might seem risky to chase this rally, we encourage investors to consider the rapid advances in the AI sector.
In this article, we highlight an important and often-overlooked development: a shortage of server GPUs appears to be forming, which we expect could push gaming GPU prices higher as those cards are repurposed for AI development. Consequently, we believe almost the entire Nvidia GPU portfolio stands to gain from the burgeoning AI industry. On the supply side, unlike in previous growth periods when Nvidia struggled to meet demand because of foundry capacity constraints, the current weakening macro environment may mean less competition for capacity.
Surging GPU Demand
Earlier this month, The Information reported that tight GPU supply is hindering the development of AI at OpenAI, Microsoft (MSFT), Meta (META), and others, as companies large and small rush to join the AI boom. For more information on large companies like Microsoft and Baidu (BIDU) declaring their intention to spend more on AI, please refer to our article Microsoft: Poised To Win With ChatGPT.
New players are entering the AI market. In recent months, the surge of investment into startups focusing on generative artificial intelligence has evolved into an unrestrained frenzy of deal-making. The enthusiasm has grown so quickly that AI startup valuations are eclipsing those of 2021’s “everything bubble.” Investors are actively searching through the ranks of companies such as Google, Meta, and OpenAI for AI specialists who might be inclined to establish their own businesses. We have now entered a market phase where the philosophy is to let a thousand flowers bloom, and all these “flowers” will need to buy GPUs to train their AI models.
Elon Musk is also joining the fray. Although Musk has previously called for a halt to AI training across the industry, he has reportedly initiated a substantial artificial intelligence project within Twitter. According to Business Insider, the company has acquired around 10,000 GPUs and enlisted AI experts from DeepMind to work on a large language model (LLM) project. Shortly after this report, Musk announced his intention to launch an AI platform called “TruthGPT” to compete with Microsoft and Google’s offerings.
Gaming GPUs Should Benefit Too
We believe that as large companies and well-funded startups buy up server GPUs and cloud GPU capacity, smaller startups and budget-constrained researchers and entrepreneurs will likely turn to gaming GPUs for AI development.
Building your own deep-learning computer is significantly cheaper than using cloud services like AWS. For instance, an expandable deep-learning machine with a top-end gaming GPU can be built for around $3,000. That is far more cost-effective than renting GPUs on AWS, where a comparable instance runs roughly $3 per hour, or about $2,100 per month of continuous use.
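To make the comparison concrete, here is a quick back-of-the-envelope calculation using the figures above (an illustration only; actual cloud pricing and build costs vary by instance type and configuration):

```python
# Rough break-even estimate: buying a ~$3,000 deep-learning PC vs. renting a cloud GPU.
# Dollar figures are the illustrative numbers cited above, not current quotes.

build_cost = 3000.0        # one-time cost of a self-built machine with a top-end gaming GPU ($)
cloud_rate = 3.0           # approximate hourly rate for a single cloud GPU instance ($/hour)
hours_per_month = 24 * 30  # continuous, around-the-clock usage

monthly_cloud_cost = cloud_rate * hours_per_month
breakeven_hours = build_cost / cloud_rate

print(f"Cloud cost per month (24/7): ${monthly_cloud_cost:,.0f}")                    # ~$2,160
print(f"Break-even GPU hours: {breakeven_hours:,.0f}")                               # ~1,000 hours
print(f"Break-even in months of 24/7 use: {breakeven_hours / hours_per_month:.1f}")  # ~1.4 months
```

In other words, at the cited rates a self-built machine pays for itself after roughly 1,000 GPU-hours, or about six weeks of continuous training.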
One reason for this substantial cost difference is that cloud service providers like AWS, Google Cloud, and Microsoft Azure have to use more expensive GPU models because of Nvidia’s licensing restrictions, which prohibit the use of GeForce and Titan cards in data centers. That forces providers to charge more for renting their GPU-equipped machines.
When you build your own computer, you can use a gaming GPU such as the GeForce GTX 1080 Ti, which in practice trains at roughly 90% of the speed of the far more expensive Nvidia V100 used in cloud services. Part of the reason the gap is so small is that cloud instances often suffer from slow input/output (IO) between storage and the GPU, whereas a self-built machine can use M.2 SSDs with very fast IO.
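For readers who want to verify that a consumer card handles AI training end to end, below is a minimal sketch of the workflow, assuming PyTorch with CUDA support is installed and a CUDA-capable GeForce card is present. The model and data are synthetic placeholders, so treat it as an illustration rather than a benchmark.

```python
import torch
import torch.nn as nn

# Confirm the local (gaming) GPU is visible to the deep-learning framework.
assert torch.cuda.is_available(), "No CUDA device found"
device = torch.device("cuda")
print("Training on:", torch.cuda.get_device_name(device))  # e.g. a GeForce GTX 1080 Ti

# A tiny model and synthetic batch, just to exercise the GPU end to end.
model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10)).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(64, 512, device=device)          # fake input batch
y = torch.randint(0, 10, (64,), device=device)   # fake class labels

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

print("Final loss:", loss.item())
```

In a real project, keeping the training data on a local M.2 SSD (or in memory, as above) is what removes the IO bottleneck that can hobble cloud instances.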
Foundry Capacity Favorable
As the macroeconomic environment deteriorates, numerous TSMC customers are reducing their demand for foundry capacity. Apple, which accounts for approximately 26% of TSMC’s revenue, has reportedly scaled back its orders with the semiconductor giant. According to the report, Apple has decreased its orders with TSMC by up to 120,000 wafers. The canceled orders encompass chips that would have been manufactured using TSMC’s N7, N5, N4, and even some N3 nodes.
With the macro environment deteriorating, Intel’s (INTC) foray into the foundry business could add to the industry’s excess capacity. The company’s ambitious plan to enter the custom chipmaking industry and expand foundry capacity is aimed at challenging the dominance of Taiwan Semiconductor Manufacturing Co. (TSM) and Samsung Electronics (OTCPK:SSNLF). Intel’s goal is to become the second-largest foundry in the world by the end of the decade.
Intel’s plan, initiated by CEO Pat Gelsinger, involves a $20 billion investment in two new facilities in Arizona and another $20 billion for a site in Ohio. While Intel’s entry won’t affect capacity availability in the near term, investors can readily anticipate when this new capacity comes online, which should help keep Nvidia well supplied at attractive prices.
Valuation
Note: all data in this section comes from FactSet.
Nvidia faced a sales decline of 6.8% in the fiscal year 2020 (ending January), which was attributed to data center digestion and an excess of gaming GPUs. However, the company experienced a strong rebound in sales, with a 52% increase in fiscal year 2021 and a 61% increase in fiscal year 2022. This recovery was driven by a combination of weak comparisons from fiscal year 2020 and accelerated demand for gaming and data center products during the Covid lockdowns. In fiscal year 2023, sales remained flat year-over-year due to the bust of the crypto market.
Considering this context, we agree directionally with the consensus estimate of an 11.4% sales increase to $30 billion in fiscal year 2024. However, we believe these numbers may be too conservative, as they do not fully reflect this year’s unexpected surge in AI demand. Furthermore, earnings per share (EPS) is expected to grow 35% to $4.52 in fiscal year 2024, which again may be too conservative given booming demand from the AI industry.
Currently, Nvidia trades at a forward 12-month PE of 57x, near its five-year high; over the past five years the stock has traded between 20x and 63x. It is worth noting, however, that we are entering a new era of AI demand. Relative to the S&P 500’s EPS multiple, Nvidia trades at a 213% premium, near the high end of its five-year range. We believe this valuation reflects the company’s strong growth prospects relative to the rest of the market.
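As a quick sanity check on these multiples, using only the figures cited above (FactSet’s exact inputs may differ slightly):

```python
# Back out the S&P 500 forward multiple implied by the cited 213% premium.
nvda_forward_pe = 57.0
premium = 2.13  # 213% premium relative to the S&P 500's forward EPS multiple

implied_sp500_pe = nvda_forward_pe / (1 + premium)
print(f"Implied S&P 500 forward PE: {implied_sp500_pe:.1f}x")  # roughly 18x
```

An implied market multiple of roughly 18x is in line with where the broader market has been trading, which suggests the premium figure is internally consistent.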
Risks
While we maintain a bullish outlook on Nvidia, we must acknowledge the risks of owning its shares. One concern is the sustainability of its historically high valuation, discussed above. However, if forward EPS turns out to be, say, one-third larger than expected, the effective multiple on today’s price would fall because of the higher denominator (earnings in the PE ratio); and if AI demand sustainably drives faster growth, the stock could also re-rate to a higher multiple. There is room for disagreement, of course, but we believe Nvidia’s high valuation is justified.
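A quick worked example, using the consensus EPS figure from the Valuation section (illustrative only; it assumes the share price stays fixed while estimates move):

```python
# Effect of an earnings beat on the effective forward PE, holding the share price constant.
consensus_eps = 4.52                         # consensus fiscal 2024 EPS from the Valuation section
forward_pe = 57.0                            # current forward multiple on consensus EPS
implied_price = forward_pe * consensus_eps   # ~$258 per share

beat_eps = consensus_eps * (1 + 1 / 3)       # EPS one-third higher than consensus (~$6.03)
pe_after_beat = implied_price / beat_eps     # multiple implied by the same price on higher EPS

print(f"Implied share price: ${implied_price:.0f}")
print(f"Effective forward PE if EPS is 1/3 higher: {pe_after_beat:.0f}x")  # ~43x
```

At the same share price, a one-third earnings beat would bring the effective forward multiple down from 57x to roughly 43x, comfortably inside the five-year range, before any re-rating for faster growth.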
The second major concern is competition. While Nvidia has gained significantly from the A100 chips used to run ChatGPT’s generative AI, bears argue that Alphabet’s (GOOG) (GOOGL) fourth-generation Tensor Processing Unit (TPU) threatens Nvidia’s dominance in the AI chip market. Critics note that Alphabet’s TPU v4 has proven faster and more power-efficient than Nvidia’s A100, but we think the comparison is flawed given that the A100 is an older product; if Alphabet were confident in its chip, it would benchmark the TPU against the newer H100.
Additionally, Nvidia faces competition from Intel, whose 4th Gen Xeon Sapphire Rapids CPUs and Habana Gaudi2 accelerators have been shown to run inference up to 20% faster than Nvidia’s A100-80GB. And as the AI chip market continues to grow, other players such as Advanced Micro Devices (AMD) are entering the field with Radeon GPUs and EPYC CPUs used for machine learning.
We believe these concerns are overblown because they do not account for Nvidia’s real competitive advantage: CUDA. Millions of developers are trained on CUDA, and our due diligence suggests that nearly all AI engineers and scientists work with it as well. Any serious discussion of competitive threats must explain how these competitors could overcome Nvidia’s hold on developers.
Conclusion
The rapidly advancing AI industry continues to fuel demand for GPUs, with Nvidia standing to benefit significantly from this trend. The potential server GPU shortage could push gaming GPU prices higher, which further supports Nvidia’s growth prospects. Despite competition from Alphabet, Intel, and AMD in the AI chip market, Nvidia’s competitive advantage in the CUDA platform is a major factor that should not be overlooked: millions of developers are trained on CUDA, solidifying the company’s dominance in the field. While concerns about valuation and competition are valid, we believe Nvidia’s premium valuation is justified and that the company is poised to capitalize on the ever-growing AI industry.
Editor’s Note: This article discusses one or more securities that do not trade on a major U.S. exchange. Please be aware of the risks associated with these stocks.
Analyst’s Disclosure: I/we have a beneficial long position in the shares of NVDA either through stock ownership, options, or other derivatives. I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.
Seeking Alpha’s Disclosure: Past performance is no guarantee of future results. No recommendation or advice is being given as to whether any investment is suitable for a particular investor. Any views or opinions expressed above may not reflect those of Seeking Alpha as a whole. Seeking Alpha is not a licensed securities dealer, broker or US investment adviser or investment bank. Our analysts are third party authors that include both professional investors and individual investors who may not be licensed or certified by any institute or regulatory body.