Nvidia’s cuLitho Proves The Use Case Genius Of Jensen Huang
Summary:
- Nvidia Corporation’s GTC conference last week had plenty of highlights and interesting topics, but one caught my attention.
- Nvidia revealed cuLitho, a GPU-acceleration library for lithography that brings material advancements to computing lithographic components.
- cuLitho proves software is king, and Nvidia is the king of GPU software, but it also demonstrates the genius of the company’s CEO, Jensen Huang.
- Don’t buy Nvidia stock at just any price, but don’t call AI hype either – you don’t know what you don’t know.
The Nvidia Corporation (NASDAQ:NVDA) GTC conference last week showcased quite a bit in the AI department, including partnerships, hardware, software, and advancements in each of those areas. But what caught my attention was a specific AI-oriented software called cuLitho aimed at the biggest chip manufacturers in the world and their WFE (wafer fab equipment) suppliers. This software has a very intriguing use case because it creates a positive feedback loop to improve the full stack of Nvidia’s AI system. It proves Nvidia is in the driver’s seat leading the AI movement, while others like Advanced Micro Devices, Inc. (AMD) and Intel Corporation (INTC) latch onto what Nvidia does, if at all.
Now, don’t get me wrong. I like AMD and its business; I’m long all three companies I just mentioned, in fact (though my Intel position has been trimmed significantly over the years). AMD has great CPU products for desktops and servers, and it’s doing a great job eating away at Intel’s market share, with more to come. However, when it comes to AI and the race to expand use cases and improve AI, there is no one but Nvidia. And the lead is growing after what I heard at GTC.
Why I Specifically Homed In On This AI Use Case
While there are a lot of topics one could cover coming out of the conference, there’s just one I found to be not only underrated but game-changing for the industry. This area of AI made perfect sense and changes how Nvidia moves forward competitively. cuLitho, optimization software for the likes of Taiwan Semiconductor Manufacturing Company Limited (TSM), ASML Holding N.V. (ASML), and others, creates a complete positive feedback loop working to Nvidia’s benefit.
Before I dive deeper into cuLitho, the high-level point to understand is that it accelerates the computations behind lithography design and manufacturing. Producing chips, especially as physical constraints approach hard limits, requires new techniques and better optimization. So Nvidia set out to improve the chip manufacturing process itself, which directly benefits it in developing its next-generation AI hardware.
That’s a brilliant idea: help those who make your product make it better so you can make a better product. Rinse and repeat.
This is also the epitome of AI – producing better products through its own existence.
Owning The Systems Of The AI Market
Chip producers like Taiwan Semiconductor have a lot of work to do to get a raw wafer from one end of the production line to the other – with limited defects, mind you. Lithography techniques, photomasks, and many other components must be produced and refined to generate the resources and materials needed to etch a design onto silicon. Some pieces of the puzzle, like a photomask, take several weeks to compute with current CPU-based software systems.
Nvidia’s response: why not accelerate these processes with GPU accelerators instead of CPU power in the cloud?
But I’m working at this a little backward, so let me come at it from the practical standpoint.
GPU accelerators are great if users can program their software to tap into the power of the GPU, and this is where cuLitho comes in. It’s an acceleration library that the lithography software calls to process these computational requests. The programmer or end user calls the library’s methods to complete a task, while the library handles the acceleration side – managing the driver and the requests to the GPU.
Much like adding a library to an application to handle certain calculations or run certain features, cuLitho lends itself to the semiconductor engineer in the same way. If a photomask needs to be generated, for example, the cuLitho library provides methods for the specific steps in that process, which the photomask software calls as needed to direct the work to the GPU.
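To make that call pattern concrete, here’s a minimal sketch of what handing the heavy math to a GPU library looks like. cuLitho’s actual API isn’t public, so this uses CuPy – an existing, general-purpose GPU array library – purely as a stand-in, and the mask, optical kernel, and function name are all made up for illustration. The point is the division of labor described above: the engineer’s software calls library methods, and the library handles moving data to the GPU and running the computation there.

```python
# Illustrative only: cuLitho's API isn't public. CuPy stands in here as a
# generic GPU acceleration library to show the call pattern described in
# the article - host code calls library methods, the library runs the math
# on the GPU. The "mask" and "optical kernel" below are made-up toy data.
import numpy as np
import cupy as cp  # GPU array library; requires an NVIDIA GPU + CUDA


def simulate_mask_image(mask: np.ndarray, optical_kernel: np.ndarray) -> np.ndarray:
    """Toy stand-in for one computational-lithography step:
    convolve a photomask pattern with an optical kernel via FFTs."""
    # Library methods move the data onto the GPU...
    mask_gpu = cp.asarray(mask)
    kernel_gpu = cp.asarray(optical_kernel)

    # ...run the expensive math there (2-D FFT-based circular convolution)...
    result_gpu = cp.fft.ifft2(cp.fft.fft2(mask_gpu) * cp.fft.fft2(kernel_gpu))

    # ...and hand the result back to the caller's CPU code.
    return cp.asnumpy(cp.abs(result_gpu))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    mask = (rng.random((2048, 2048)) > 0.5).astype(np.float32)  # toy mask pattern
    kernel = rng.random((2048, 2048)).astype(np.float32)        # toy optical kernel
    aerial_image = simulate_mask_image(mask, kernel)
    print(aerial_image.shape)
```

The real cuLitho library presumably exposes lithography-specific operations (mask synthesis, optical proximity correction, ILT steps) rather than raw array math, but the structure – host software calling into a GPU-backed library – is the same pattern.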
And the results have been mind-blowing, reducing photomask generation time from two weeks to overnight. According to Nvidia, it also reduces the hardware and energy required to produce the same result, going from 40,000 CPU systems to 500 Hopper GPU systems – and, again per the company, that’s a ninth of the power and an eighth of the data center space. Of course, this is still to be proven in the real world, but even if these claims are off by 50%, it’s still an aggressive advancement worth the return on investment for its customers.
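To put those figures in perspective using nothing but Nvidia’s own stated numbers (claims, not independent measurements), the back-of-the-envelope math works out as follows:

```python
# Back-of-the-envelope check using only Nvidia's stated figures (claims, not measurements).
cpu_systems = 40_000   # CPU systems Nvidia says the workload required before
gpu_systems = 500      # Hopper (DGX H100-class) systems Nvidia says replace them
power_ratio = 1 / 9    # claimed total power vs. the CPU fleet
space_ratio = 1 / 8    # claimed data center space vs. the CPU fleet

consolidation = cpu_systems / gpu_systems                 # 80x fewer boxes
per_system_power_headroom = consolidation * power_ratio   # ~8.9x

print(f"{consolidation:.0f}x fewer systems")
print(f"Each GPU system can draw ~{per_system_power_headroom:.1f}x a CPU system's power "
      "and the 1/9 total-power claim still holds")
```

In other words, even if each DGX-class system draws several times the power of a single CPU server, the claimed consolidation from 40,000 boxes to 500 leaves plenty of room for total power and floor space to shrink.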
But Nvidia isn’t the only one expecting material changes. The CEO of Taiwan Semiconductor, Dr. C.C. Wei, said it was transformational:
The cuLitho team has made admirable progress on speeding up computational lithography by moving expensive operations to GPU. This development opens up new possibilities for TSM to deploy lithography solutions like Inverse Lithography Technology (ILT) and Deep Learning more broadly in chip manufacturing, making important contributions to the continuation of semiconductor scaling.
Similarly, Peter Wennink, CEO of ASML, shared that the path forward rests with GPU-accelerated lithography:
We are planning to integrate support for GPUs into all of our computational lithography software products. Our collaboration with NVIDIA on GPUs and cuLitho should result in tremendous benefit to computational lithography and therefore to semiconductor scaling. This will be especially true in the era of High-NA EUV lithography.
The concept of accelerating the very thing responsible for designing and manufacturing the chip used in the acceleration is business genius on the part of Nvidia’s CEO Jensen Huang. Creating the acceleration library aimed at lithography and running it on the same GPU chips its vendors produce is seriously forward-thinking.
Because Nvidia is a pioneer of GPU software in every way, from desktop graphics cards to AI acceleration in the cloud, it can push into more refined areas like lithography. AMD doesn’t have this prowess. And before anyone starts commenting about open source this or that, tell me how software meant for multiple hardware platforms will be the best driver of performance. It won’t be.
Nvidia is writing drivers and software meant specifically for its hardware. There’s no better way to write the code than with engineers under the same roof where the chips are designed. It wasn’t AMD that collaborated to create the Microsoft Corporation (MSFT) DX12 ray tracing libraries; it was Nvidia.
Now, you might say all other customers of TSM will benefit from this chip design and lithography advancement. And you are correct. Apple Inc. (AAPL) should see a benefit, as should others like AMD, all of whom have chips produced by TSM. But Nvidia gains the closest advantage, as it will work alongside the chip producers and WFE manufacturers to enhance the software and hear firsthand what its customers now need – an interesting point all its own.
So, while Nvidia helps its competitors almost directly, it gains the advantage of turning its vendors into customers. It also allows Nvidia to understand how to design chips that best utilize the improvements in lithography.
This also proves the importance of software. Hardware is needed, but software is king. Without software, Nvidia and AMD have expensive paperweights (for you Zoomers, we used to have documents on our desks and had to keep the fan from blowing our papers away). This GPU-accelerated software library opens the door to more ways to use the same product. Nvidia has been second to none in software, which is why it’ll continue to find more use cases – and revenue.
Not Factored Into The Investment Case
Since this advancement from CPU- to GPU-powered lithography is only happening in early 2023, the vendors implementing the new technology likely won’t have it in production for a year. This isn’t a flip-the-switch kind of transition; there needs to be a software upgrade among all the vendors TSM uses, along with any software systems TSM uses directly. It requires new versions of this software that integrate the cuLitho library, not to mention installing Nvidia’s DGX H100 systems or utilizing Nvidia’s new AIaaS.
It’s reported TSM “will start to qualify cuLitho in mid-2023, so expect the platform to be available to the company’s customers beginning in 2024.” This timeline adds another dimension to the bull case that isn’t factored into estimates, especially since the company hasn’t provided color beyond its current fiscal year on how things will shape up.
Not only will the company provide another set of customers with DGX H100 systems, but it’ll also provide yet another software package. This continues the virtuous cycle Nvidia has been striving toward over the last few years, in which it brings a full stack of expertise to the customer, from hardware to software. It’s another step in expanding the usefulness of Nvidia’s AI system, gaining precious data for training.
While competitors work toward open source AI operating systems and libraries at a glacial pace by technology standards, Nvidia continues to create new use cases, providing entire packages from the start. All the work is done by Nvidia and supplied to the customer; the customer just pays them. And the case to “just pay them” is easy when it saves companies energy and time and produces better results immediately and over the long term – the ROI is there.
The bottom line is Nvidia is leading the charge into how its AI accelerators are used, writing more libraries and software, and this time advancing the very same chips the software is running on. That’s near the top of my list of genius use cases for a technology product in the last decade. Another one on the list? Using GPUs in the data center.
Does this mean you buy Nvidia at any price? No, but it means the long term is much steadier than the crypto boom or even the GeForce desktop and laptop GPU cycle. Nvidia’s ability to advance its chips every two years creates its own data center refresh cycle, a predictable cadence its customers can prepare for. Intel can’t say the same – how are those delays working out?
This is a secular trend, and Nvidia Corporation is far and away leading the initiative to find the next place to implement its stack. AI hype? If you believe it’s hype, you don’t know what you don’t know. You don’t know the possibilities for the next use case. You know who does? Jensen Huang.
Editor’s Note: This article discusses one or more securities that do not trade on a major U.S. exchange. Please be aware of the risks associated with these stocks.
Disclosure: I/we have a beneficial long position in the shares of AMD, INTC, NVDA, TSM either through stock ownership, options, or other derivatives. I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.