Nvidia: Be Fearful When Others Are Greedy
Summary:
- Investor sentiment toward Nvidia remains bullish, but the stock has become more expensive, increasing the potential downside.
- Nvidia’s position in the AI market is reminiscent of Intel’s dominance in the data center, leaving room for competition to enter with similar products at lower prices.
- The risk of losing market share and pricing power could drastically reduce Nvidia’s revenue and earnings, despite the potential for further growth in the AI market.
- Given Nvidia’s excessive pricing and margins, and the ramp in competition from Intel and AMD, this downside risk has become much greater than any further upside after the 3x increase in total revenue.
Investment Thesis
Even after an extraordinary rally to join the club of trillion-dollar market cap companies, it seems investor sentiment regarding Nvidia (NASDAQ:NVDA) remains very bullish, with some even claiming that the stock has only become cheaper.
However, it has not. In absolute terms, the stock has only become more expensive, which means the potential downside has grown as well, arguably more than the remaining upside, even though the valuation does not look excessive at first sight.
One cannot predict how much demand Nvidia will see for its chips in the future. While the explosion of demand due to generative AI has accelerated what was already a multiyear uptrend, investors have to weigh the growth of AI on one hand against the competitive dynamics facing a player with very high market share and pricing, both of which it risks losing, on the other.
Background
To be sure, the thesis here isn’t that Nvidia is necessarily overvalued or in a bubble. Since the stock price ultimately reflects sales and/or profits (at some multiple), and both have exploded, the stock has indeed pretty much grown in line with these results. In previous coverage I had already conceded to being wrong about Nvidia: Nvidia: All Bets Are Off.
History and Competitive Landscape
A while ago, Intel (INTC) CEO Pat Gelsinger said that, while he acknowledged Nvidia’s success in this market, it was nevertheless the result of luck. To recap, AI is a highly parallelizable workload. As such, when it started gaining prevalence (from around 2012), it was found to run faster on GPUs (originally created for gaming, where many pixels need to be rendered simultaneously) than on the more general-purpose CPU. That is why it was mostly Nvidia, rather than Intel, that benefited from the rise of AI.
It wasn’t until 2017, however, that Nvidia introduced the first deep learning-specific acceleration into its GPUs with its Tensor Cores. By that time, Intel too had already introduced the VNNI instruction into its Xeon Phi line-up (which was its many-core CPU at the time that competed against Nvidia in HPC) and had also acquired Nervana in 2016, the predecessor to its 2019 Habana acquisition that it currently positions as its competitive solution to Nvidia.
In that regard, my own view is that it has been a combination of luck and skill. Nvidia has clearly executed strongly on its roadmap, while Intel stumbled (both with its 10nm process, which delayed and reduced the competitiveness of practically all of its products, and by axing all Nervana development in favor of restarting with Habana) right as demand for AI hardware first started meaningfully increasing around 2017.
Besides Intel and AMD (AMD), which, like Intel, has been late to launch a competitive product, there has also been a boom in AI chip start-ups. None of those has been very successful either. Lastly, many bigger companies have started developing their own chips with more mixed results, such as the Google (GOOG) TPU and Amazon (AMZN) Trainium and Inferentia.
Implications
Nvidia’s position in the AI market seems reminiscent of Intel’s in the data center up to about half a decade ago. Intel had that market completely to itself, as AMD was still in the midst of developing Epyc and gaining initial adoption. While Intel drew some criticism for rolling out its highest-specced Xeons at $10k or even more at the time, those prices arguably pale in comparison to the pricing of Nvidia’s latest H100 (and upcoming H200) series, which during this time of shortages has ballooned to tens of thousands of dollars (and probably was never less than $20k or so to begin with). For comparison, TSMC (TSM) charges less than $20k for an N5-class wafer, which contains on the order of 70 such chips.
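To make the cost gap concrete, here is a rough back-of-envelope calculation based on the figures above; the wafer price, die count, and street price are approximations for illustration, not disclosed numbers:

```python
# Rough silicon cost per die, using the approximate figures cited above.
wafer_price = 20_000       # USD, assumed upper bound for a TSMC N5-class wafer
dies_per_wafer = 70        # approximate number of H100-class dies per wafer
die_cost = wafer_price / dies_per_wafer
print(f"Raw silicon cost per die: ~${die_cost:,.0f}")                         # ~$286

street_price = 25_000      # USD, assumed H100 street price ("tens of thousands")
print(f"Street price vs. raw silicon cost: ~{street_price / die_cost:.0f}x")  # ~88x
# (Neglects HBM, CoWoS packaging, and other BOM costs, as noted later in the article.)
```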
In my view, this means that Nvidia is leaving the market wide open for competition to enter with similarly performing products at a potentially substantially reduced price, which is starting to happen from both AMD and Intel. Although many bulls have argued that Nvidia has a moat in its CUDA software, as Pat Gelsinger has said, this is a shallow moat at best, since software development mostly occurs in higher-level languages such as Python.
Since for a rational business there is no fundamental reason to buy an Nvidia chip over a similarly performing (but much cheaper) Intel or AMD chip, traditional competitive market dynamics should kick in at some point. This will most likely result in a loss of both market share and pricing power for Nvidia.
From that view, investors should ask themselves whether they would rather invest in a ~$200B or a ~$1.35T company. Note that both companies, including the smaller one, have more than enough resources to develop an AI chip. And since the power and performance characteristics of these chips are largely determined (and limited) by process technology, the resulting chips should indeed be quite similar, except in price. That is exactly what is seen in the market. In fact, since Intel’s Gaudi series lacks traditional GPU functionality, the Gaudi chips are actually noticeably superior to their Nvidia competitor on the same process technology (see MLPerf).
To use some concrete math, if Nvidia were to lose half of its market share and had to cut prices in half, its revenue would drop by 75%. Note that Nvidia’s margins are so high that a 2x price cut is still a very conservative scenario. If one treats these chips as commodities, as in the memory (DRAM/NAND) space (which arguably isn’t all that far-fetched), then Nvidia’s prices would literally have to drop by on the order of 10x. (Note that this estimate considers only the cost of the silicon itself; other components such as the HBM memory and CoWoS packaging are neglected.)
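A minimal sketch of that scenario math, with the share and price factors as illustrative inputs rather than forecasts:

```python
# Illustrative revenue impact of the share-loss / price-cut scenarios discussed above.
def revenue_factor(share_retained: float, price_retained: float) -> float:
    """Fraction of current revenue remaining after a share loss and a price cut."""
    return share_retained * price_retained

# Half the market share at half the price -> 25% of current revenue (a 75% drop).
print(revenue_factor(0.5, 0.5))   # 0.25

# The commodity-pricing case mentioned above (prices down ~10x), share held constant.
print(revenue_factor(1.0, 0.1))   # 0.10, i.e. a ~90% revenue decline
```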
Ultimately, this is the kind of risk, far from unrealistic, that investors must at least consider and then weigh against any possible further upside that has not yet been baked into the stock price. Since being valued as a growth stock means a considerable further increase in demand is already baked into the price, demand would have to surpass even these levels for the share price to increase at all.
Additional Considerations
As stated, it is not possible to predict future demand for AI hardware. Even if the risk thesis described so far plays out exactly, the market might still grow fast enough, or become large enough, to outweigh any theoretical loss in revenue from lower prices and market share, as AMD, for example, predicted at its event late last year. Also, as was the case for Intel, inertia alone would go a long way toward preventing overly sudden market share shifts.
So, to recap some possible drivers of demand for investors to consider for themselves: one is the difference between training and inference. Training a model must be done only once, so the demand for training hardware likely has a fairly finite limit. Nevertheless, the training workload could quite literally use an unlimited amount of computing power, since training the largest models on enormous amounts of data can take many months, and even then it would still be possible to further increase the amount of data and/or parameters.
Hence, this likely has some implications for price elasticity. For example, instead of buying a fixed number of chips, companies might have a fixed budget in dollars and simply buy as many chips as they can within that budget. So, if Nvidia were to cut the price in half, they might just buy twice as many GPUs.
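As a hypothetical illustration of such budget-constrained demand (the $1B budget and the price points below are made up for illustration):

```python
# Hypothetical fixed-budget buyer: the unit price shifts volume, not total spending.
budget = 1_000_000_000                         # assumed $1B accelerator budget
for unit_price in (40_000, 20_000, 4_000):     # illustrative per-chip prices
    units = budget // unit_price
    print(f"${unit_price:>6,}/chip -> {units:>7,} chips, vendor revenue ${units * unit_price:,}")
# In this regime, vendor revenue stays pinned at the budget; only the margin per chip changes.
```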
Note, though, that since even the largest clouds and big tech companies have only finite budgets, there should indeed be some upper limit on demand. Investors would have to answer why, shortages aside, this limit has not already been reached in the wake of the ChatGPT hype, with Nvidia’s run rate approaching $80B (which, in terms of the total cost of a data center, represents only the spending on GPUs and networking), and why demand would instead continue to grow considerably for years to come.
On the other hand, and perhaps this is the answer to the previous question, while inference (using the trained model) is generally considered to be the main AI application in the long term, it has somewhat different compute requirements, as the compute cost of a single inference is generally much lower. Instead of training a model for months, inference requires results in seconds or milliseconds.
This in turn means that inference could be better suited to chips other than GPUs, such as CPUs with on-chip accelerators. Intel offers such hardware with its latest Xeon and Core Ultra server and client/edge CPUs. Indeed, in some (or many) applications, inference could simply happen on-device instead of on an expensive Nvidia GPU in the cloud.
Financial Discussion
As a reminder, the Nvidia rally started with the guidance for $11B of revenue in the (calendar) Q2 2023 quarter, up from around $7B. Since then, revenue has surged to over $18B in the most recently reported quarter, with guidance for $20B in Q4, implying a near-tripling of revenue in just a few quarters. Given the extremely high margins, EPS has similarly surged from ~$1 to ~$4 (which has allowed the stock to roughly quadruple without changing the valuation multiple).
The fact that basically all of this additional revenue and profit is generated in the data center demonstrates both the expansion of the data center revenue TAM and Nvidia’s increase in revenue market share, as just a few years ago the vast majority of the data center logic silicon market was captured by Intel.
The forward estimate is for over $90B in revenue in 2024. While this is very large in dollar terms, unlike anything even Intel reported when it still had a CPU monopoly (Intel’s highest was on the order of $30B in data center revenue, which included its telco networking business), in terms of units it would correspond to roughly 2 million H100/H200s at a price of $40k per unit. That is an ASP (average selling price) on the order of 40x higher than either AMD’s or Intel’s. It also means Intel will still ship far more silicon into the data center (millions of Xeons each quarter), just at a much lower price.
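The unit math above, spelled out (the ~$1k server-CPU ASP is an assumption implied by the ~40x ratio, not a reported figure):

```python
# Rough unit math behind the $90B forward revenue estimate discussed above.
revenue_2024 = 90e9        # USD, forward revenue estimate
asp_accelerator = 40_000   # USD per H100/H200, per the text
units = revenue_2024 / asp_accelerator
print(f"Implied accelerators shipped: ~{units / 1e6:.2f} million")    # ~2.25 million

asp_server_cpu = 1_000     # USD, assumed rough Xeon/Epyc ASP implied by the ~40x ratio
print(f"ASP ratio: ~{asp_accelerator / asp_server_cpu:.0f}x")         # ~40x
```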
While bulls may see this disparity in ASP as Nvidia’s crowning achievement, for more skeptical investors it should be a large red flag. As a monopolist, Nvidia enjoys absolute pricing power, which has resulted in prices that bear little relation to the actual cost of goods sold of the hardware. As mentioned, Nvidia could likely drop its prices by 10x and still end up with margins in line with or above the industry. Clearly, selling 2 million H100/H200s at $4k per unit would paint a very different financial picture, although, as also discussed, price elasticity could come into play in that case, as Nvidia would then probably sell a lot more of those $4k chips.
Valuation
The valuation is a $1.35T market cap. Based on annualized Q4 guidance of $20B in revenue and the analyst expectation of $4.5 in EPS, this represents a valuation of roughly 17x P/S and 30x P/E. That is not excessively expensive given the supposed further growth through 2024 as supply catches up from the shortages. While 17x P/S would normally be considered expensive, the fact that earnings trade at only a 30x multiple again reflects the extremely high margins.
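For reference, a quick sanity check of those multiples from the figures given (annualizing the quarterly numbers is an assumption, and the share count below is an assumed value used only to convert market cap into a per-share price):

```python
# Reproducing the quoted multiples from the figures above (quarterly numbers annualized).
market_cap = 1.35e12      # USD
q_revenue = 20e9          # USD, Q4 revenue guidance
q_eps = 4.5               # USD, expected quarterly EPS
shares_out = 2.47e9       # assumed diluted share count (not from the article)

ps = market_cap / (q_revenue * 4)
pe = (market_cap / shares_out) / (q_eps * 4)
print(f"P/S ~ {ps:.0f}x, P/E ~ {pe:.0f}x")   # roughly 17x and 30x
```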
In the picture of the overall thesis, the main point hasn’t been that the stock is overvalued; it is rather the risk that Nvidia is currently capitalizing on being basically the only at-scale supplier of chips powering the LLM/AI explosion. As more suppliers (read: Intel and AMD) ramp their competitive products, healthier market dynamics should kick in, forcing prices downward and reducing Nvidia’s market share. Both (for now hypothetical) trends would drastically reduce revenue and earnings going forward. It is only then that the stock would become expensive.
So even if demand for AI chips and workloads increases further in the years ahead, in principle confirming the thesis that AI is and remains a growth opportunity, this scenario could nevertheless mean that Nvidia’s revenue is near an all-time (or at least long-term) high. This is similar to what has been discussed on the All-In podcast.
In essence, the question is how to reflect Nvidia’s monopolistic position in the valuation, with the scenarios ranging from maintaining market share and pricing power (which, given the billions of dollars Nvidia’s current customers could save by switching suppliers, seems very unlikely) to a dramatic decline in both, or something in between.
Risks
Besides the risks (for both the bull and bear cases) already discussed, given that Nvidia has been supply constrained, at least some of the future growth is already pretty much locked in. Nevertheless, this does not rule out the possibility of a correction in demand later.
Secondly, one of the main arguments is that Nvidia does not have a moat, since chips for AI are basically commodities. While this might be the case to a first approximation, factors such as inertia could still prevent quick market share swings.
Why Would Chips Be Commodities?
Extending the previous risk section, a major part of this thesis hinges on the statement that AI chips should, to at least some extent, be considered commodities for which healthy competition between multiple vendors is possible, which obviously has major implications for the margins that can be obtained in this market.
This might be a somewhat more controversial statement, since chip design often takes many years and anywhere from millions up to hundreds of millions of dollars. In addition, aspects such as programmability also impose a development burden. Indeed, these observations alone indicate that an AI chip can’t be a pure commodity.
The main argument is that AI is a highly parallel workload. It is basically just pure (repetitive) math, for which hardware capability is measured in operations per second (OPS). This means that the base unit (the equivalent of a single core in a multi-core CPU) is a very simple core, which is then repeated a gazillion times in order to create a large, powerful chip. So while the details may be a bit more complicated, there really isn’t much room for differentiation since the goal is simply to obtain as many TOPS (Tera-OPS) as possible. As discussed, both AMD and Intel by now have launched very similarly performing chips.
Also note that this idea of a small core (“cell”) is quite similar to what is found in the main commodity market in semiconductors: memory and storage. In memory, a small memory cell is indeed also repeated a gazillion times. While, again, the details may be a bit more complicated (for example, in NAND there are things like controllers), there really isn’t much room for differentiation, at least not beyond the process technology level.
Overall, the reported price tags range from $20k to $40k per chip, and even if this includes things like HBM and a PCB, in the CPU space not too long ago $4-8k (about 5-10x less) would already have been considered very expensive for even the most high-end chips. Again, there is no fundamental technology in an AI chip that warrants such an excessive premium over already premium-priced CPUs. In fact, as just argued, CPUs are in a sense more advanced than GPUs/NPUs, since a single CPU core is orders of magnitude larger than a single GPU/NPU core.
Investor Takeaway
Be fearful when others are greedy. Anecdotal reports keep surfacing of people who consider Nvidia cheaper than it was before its rally and hence keep buying the stock at the current price. However, with a $1.35T market cap, Nvidia is far from cheap in absolute terms.
The main warning is that taking forward estimates as gospel entails serious risks, and as discussed regarding Nvidia and the LLM hype, these risks are very real. Note that risk can simply mean an unknown, and indeed, even before any other considerations, there are many unknowns regarding the evolution of AI hardware demand. While, granted, in the bull case the potential demand for AI computation and hardware is virtually limitless, there are nevertheless various economic constraints and realities. With revenue already in the tens of billions of dollars, Nvidia is charting territory beyond anything seen before in the data center market, never mind any further upside.
Nevertheless, that is exactly what is required to capture investor-grade returns going forward. As discussed, with legitimate competition from both Intel and AMD seriously ramping up, no moat (the CUDA myth having been debunked) that would fundamentally prevent competitors from taking market share, and no real differentiation that would in principle keep prices from racing to the bottom, Nvidia’s margins are at the end of the day simply too high for this not to be a real and large concern. There is simply nothing in Nvidia’s chips that warrants the excessive prices and margins it asks of its customers. The only logical expectation is for natural competitive market dynamics to kick in, to the large detriment of Nvidia’s shareholders, with the first signs of this indeed becoming visible from AMD and Intel.
Investors who feel lucky (as Pat Gelsinger has put it) might buy Nvidia on the case of ever-growing demand and miraculously maintained market share and pricing power. But a rational, risk-aware investor should probably avoid the stock, given the many unknowns, uncertainties, and risks attached to a very elevated valuation.
Analyst’s Disclosure: I/we have a beneficial long position in the shares of INTC either through stock ownership, options, or other derivatives. I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.
Seeking Alpha’s Disclosure: Past performance is no guarantee of future results. No recommendation or advice is being given as to whether any investment is suitable for a particular investor. Any views or opinions expressed above may not reflect those of Seeking Alpha as a whole. Seeking Alpha is not a licensed securities dealer, broker or US investment adviser or investment bank. Our analysts are third party authors that include both professional investors and individual investors who may not be licensed or certified by any institute or regulatory body.