Is Nvidia In A Class Of Its Own? Cameron Fen With James Foord

Summary:

  • Implications of AI and large language models for big tech companies, with a focus on Nvidia. Cameron Fen, a PhD student in economics who specializes in machine learning, explains how these models work and their potential impact on the tech industry.
  • Fen suggests that big tech companies like Google, Amazon, Facebook, and Microsoft will benefit from large language models due to their access to vast amounts of data. However, he expresses concern about the potential cost implications for Google Search if it starts using large language models.
  • The discussion also touches on Nvidia’s market position and valuation, with Fen arguing that even if Nvidia captures 100% of the market, it may not be enough to justify its current market cap. He also mentions potential competition from companies like AMD and Google if they develop their own versions of Nvidia’s CUDA language.


Listen to the podcast below or on the go via Apple Podcasts or Spotify.

  • 2:40 – AI, large language models and commercial implications for big tech
  • 12:40 – Is Nvidia in a class of its own?

Full episode originally published June 15, 2023 on The Pragmatic Investor.

Free trial for James Foord’s The Pragmatic Investor

Transcript

James Foord: Hello everyone. Today I am joined by fellow SA Contributor, Cameron Fen. Cameron is a doctoral student in economics who specializes in machine learning and creating artificial intelligence models to support macroeconomic modeling. Today we had a great conversation about artificial intelligence and its implications for businesses today, the outlook for NVIDIA (NASDAQ:NVDA), and also how this affects the general macro market outlook.

Cameron, thanks for coming on.

Cameron Fen: Thanks for having me.

JF: All right. So I thought a good place to start would be one of your most recent articles, because you're pretty new to Seeking Alpha. On your last article, you got an Editor's Pick, and the title is quite interesting: A Data Scientist Explains Large Language Models And Implications For Businesses.

CF: Yes. So, I guess what I was trying to do with this article is essentially convey to a lay audience, or an audience of investors, what this new sort of AI innovation is that they're getting into, right. So the idea that with these large language models, people are like, okay, I can type anything and it can respond just like a human; I think that generated a lot of buzz.

But in order to really understand the implications of the model, you have to understand how these models are built, what the new innovations are, and what constraints the technology places on those innovations, right? Because I think for most investors, or lay people, this looks like magic. It's a computer that basically talks to you like a human and knows a lot of interesting facts. Sometimes it says stuff that is wrong, just like a human, but it doesn't feel like a machine; it feels like talking to a human.

And so there’s a lot of sort of buzz and excitement over this. But if you don’t know how the models are built, then you can extrapolate. And you have people saying things like — I saw there was a Twitter video where it’s like, I used a kernel used in AI to use a drone to target an enemy. And when the drone realized that the pilot was getting in the way of the objective the drone actually eliminated, or in the simulation, eliminated the drone operator, right. And this was found to be really — like this was found to be not true. And if you don’t understand how these tech works you can get sucked into these sort of conspiracy theories like things if we make an analogy to politics, right? So that was my sort of objective.

JF: All right. So in your article, you also talk about the commercial implications for companies like Google (NASDAQ:GOOGL), Amazon (AMZN), Facebook (META) and Microsoft (MSFT). So, I’d just love to understand a little bit, maybe if you can give us a background of just explain what the large language models are. And yeah, what your thoughts are on this? How it applies to these companies?

CF: Sure, so the innovative technique that large language models use: they're all transformer-based models, which is just a particular type of neural network, but the secret sauce is the attention mechanism. And essentially what attention does is it learns a weighted average of your inputs, right, and it gives an output that's a weighted average of the inputs. But you train this on large enough data, and you have like 20 or 30 different attention mechanisms in a single model, or more, and each of these layers pays attention to a different sort of thing.

But if you have enough of these layers, then what it will do is, when you have a sentence like: Joan is 80 years old and she is going to the doctor's. Or actually: Joan is 80 years old. Her birthday is tomorrow. What age will she turn? The attention model will pay attention to things like 80 and birthday, and it will sort of ignore all the other text.

And this is the secret sauce behind the transformer model: it pays attention to the stuff that's important. And then it will start generating words, like, Joan is going to be 81, because it has paid attention to these important parts of the text and not paid much attention to the things that aren't important.
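To make the "weighted average of the inputs" idea concrete, here is a minimal, hypothetical sketch of scaled dot-product self-attention in Python. It uses toy random embeddings, a single head and no learned projections, so it is nothing like a production model, just the core mechanic.

```python
# A minimal sketch of scaled dot-product attention, the "weighted average of
# inputs" described above. The token list and 8-dimensional embeddings are
# made up for illustration; real models use learned projections and many heads.
import numpy as np

def attention(Q, K, V):
    # Similarity scores between queries and keys, scaled by sqrt(d_k)
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # Softmax turns each row of scores into weights that sum to 1
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # The output is a weighted average of the value vectors
    return weights @ V, weights

# Pretend each row is an embedding for one token of the "Joan" sentence
tokens = ["Joan", "is", "80", "her", "birthday", "is", "tomorrow"]
X = np.random.default_rng(0).normal(size=(len(tokens), 8))
out, weights = attention(X, X, X)   # self-attention: queries, keys, values all come from X
print(weights.round(2))             # each row shows how much one token attends to the others
```

With trained weights rather than random ones, tokens like "80" and "birthday" are exactly where the model learns to concentrate its attention.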

And implications for big tech? I believe that this will be a rich-get-richer thing, with the minor exception of a little bit of worry about Google Search over the horizon, because it seems like GPT-4 is quite an improvement over GPT-3.5 and ChatGPT. And I think, right now, that's the model everyone is using to evaluate all other models, which should tell you which one's in the lead. So if you have data scientists and they have all these open-source models, the way they're evaluating them is how well GPT-4 rates their output.

And so this is the dominant model. And of course, this is developed by OpenAI, which is basically like a Microsoft subsidiary at this point. And so the one thing I worry about is, what will happen to Google Search? Will it lose some share? It might, it might not. I don't know.

But the other thing is, it's just more costly per search. And you're probably going to have a mechanism where essentially what happens is, someone types in a search query, and they have a machine learning model that says, do I need to use a large language model, or do I just do a search like normal? But if a lot of queries end up wanting large language models, this will cut into Google's bottom line.

There’s an article by semi analysis where essentially they calculated that each Google Search has $0.05 of profit per search. And these models costs somewhere between $0.01 and $0.03 before all these innovations that I talked about earlier, or that I talked about in my article.

Hopefully, that will bring the cost down. But as of 2022, the cost per search, if you use a large language model, is $0.01 to $0.03. And that eats into Google's margin. Now, if you don't use a large language model when someone does a search query, then that doesn't matter. But if you do, then profitability is weaker.
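As a rough illustration of that squeeze, here is the back-of-the-envelope math using the per-search figures quoted above. These are the conversation's approximations, not audited numbers.

```python
# Margin math from the approximate figures quoted above (estimates only).
profit_per_search = 0.05              # ~$0.05 of profit per Google search
llm_cost_per_query = (0.01, 0.03)     # ~$0.01 to $0.03 LLM cost per query

for cost in llm_cost_per_query:
    remaining = profit_per_search - cost
    share_kept = remaining / profit_per_search
    print(f"LLM cost ${cost:.2f}: ${remaining:.2f} of profit left ({share_kept:.0%} of the original margin)")
```

So even at the low end, routing a search through a large language model gives up roughly a fifth of the per-search profit, and at the high end more than half.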

Other than that, though, I think big tech will benefit. And the reason they’ll benefit is they have more data. They have a lot of people that can build these models, but I don’t think that’s the biggest deal. And in the data science world, I think the main moat is data. It’s not the model, right?

There are thousands of data scientists that can build large language models that compete with Google. The problem that they don’t have is they don’t have enough GPUs. And they don’t have the data that Google has, or Microsoft has or OpenAI has, or Facebook has, or Amazon has.

Like, just Amazon: it has more transaction data than anyone else in the world. Facebook has more social data, and through Facebook advertising it has people's preferences, what people click, and stuff like that. Same with Google and Google Search. And then all of them have internal code. So one of the big advantages, one of the big uses of large language models, is essentially that you can fine-tune these models to help coders write code.

And I pay $10 a month for GitHub Copilot; there are probably millions of people, or tens of millions of people, that will pay $10 a month for GitHub Copilot. And so this is a huge business. And essentially, if this gets better, this could be a profit machine for maybe a smaller company like OpenAI. It would probably be a drop in the bucket for the other companies, the big tech. But they have massive amounts of data, and maybe they can do something even better than GitHub Copilot that will be more profitable, generate more revenue than something like, whatever, $10 a month.

So, yes, I mean, so, I think they have the walled gardens, and they have more data than anyone else. And so I think large language models will predominantly benefit them.

JF: It's interesting, because I was talking about this with another SA contributor on the podcast, and he mentioned that to actually run something like ChatGPT at scale, like at the scale of Google Search, would at this point be economically unviable, just due to the expense of that hardware and those GPUs that are used.

CF: I guess I would disagree. I think it's possible. I did the analysis in my NVIDIA article, and it's actually not that large. I had another article where I talked about NVIDIA, sort of a short report on NVIDIA, saying that the number of GPUs required to justify the market cap is beyond what is actually necessary.

So right now, OpenAI has 30,000 GPUs for about 200 million queries a day. Google search, these are numbers that I’m approximating, had 26 billion queries a day. So that’s another order of 10x. Right? 100x. Yes, 100x.

JF: Right.

CF: And so 30,000 is, wait, okay, so wait, let me see. Yes. So Google Search, I think Google Search was two billion a day, I'm not sure. But it's essentially 10x. And 10x the number of GPUs is not unreasonable. You have 30,000 right now with OpenAI, and OpenAI would get to 300,000 if, for instance, they were the only large language model and they did as much large language modeling as there is Google Search. So you only need 10x more GPUs.

And even with that, there are innovations just over the past three months, since 2023 started. Quantization; there's a new technique called QLoRA, which is a combination of quantization and an adapter. These are particular sorts of techniques that you use to make these large language models more efficient. There's QLoRA, there's model distillation, which has been a thing for a long time. But essentially what model distillation is, is you take a big model, and it teaches a small model by giving the small model a lot of examples.

And then quantization is, obviously, instead of using 32-bit floats, you use like 4-bit floats or something like that. So your numbers are less precise. But both of these techniques have been shown to produce models just about as accurate as the models without quantization and without distillation. The adapter is another thing in itself, and I don't want to explain that.
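Here is a minimal, hypothetical sketch of what quantization does: map 32-bit float weights onto a small integer grid and keep only a scale and offset to get approximate floats back. Real schemes, such as the 4-bit quantization used in QLoRA, are considerably more sophisticated, but the trade-off is the same.

```python
# Toy uniform quantization: store weights as small integers plus a scale/offset.
import numpy as np

def quantize(weights, n_bits=4):
    levels = 2 ** n_bits - 1                    # 15 representable steps for 4 bits
    w_min, w_max = weights.min(), weights.max()
    scale = (w_max - w_min) / levels
    q = np.round((weights - w_min) / scale)     # integers in [0, 15], cheap to store
    return q.astype(np.uint8), scale, w_min

def dequantize(q, scale, w_min):
    return q * scale + w_min                    # approximate reconstruction of the floats

w = np.random.default_rng(1).normal(size=5).astype(np.float32)
q, scale, w_min = quantize(w)
print("original:     ", np.round(w, 3))
print("reconstructed:", np.round(dequantize(q, scale, w_min), 3))  # close, slightly less precise
```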

Now, if you combine these things, you can essentially run, I think, a 64-billion-parameter large language model on a single GPU. OpenAI probably uses 5, 6 or 8 GPUs per large language model query, or whatever. And so if you cut down from eight to like four, or eight to two, then the 10x is going to be something like 2x or 3x. And so it's very doable with the new innovations. Does that make sense? I'm sorry if I'm not being clear.
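As a sketch of that scaling arithmetic, here are the approximate figures from the conversation plugged into the calculation. All of them are estimates rather than hard data.

```python
# Rough scaling arithmetic behind the "10x more GPUs" claim (estimates only).
openai_gpus = 30_000       # GPUs OpenAI is said to run today
openai_queries = 200e6     # ~200 million LLM queries per day
google_queries = 2e9       # ~2 billion Google searches per day (approximate)

scale = google_queries / openai_queries
print(f"Query scale-up: {scale:.0f}x, so roughly {int(openai_gpus * scale):,} GPUs")

# If quantization and distillation cut the GPUs needed per query from ~8 down
# to ~4 or ~2, the effective scale-up shrinks accordingly.
for gpus_per_query in (4, 2):
    effective = scale * gpus_per_query / 8
    print(f"{gpus_per_query} GPUs per query instead of 8: about {effective:.1f}x more GPUs needed")
```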

JF: No, no, it makes perfect sense. I just wanted — could you repeat the numbers then? So if the search is done on ChatGPT versus Google Search?

CF: Yes, so essentially, Google queries I think 2 billion searches a day. Right now OpenAI is querying an estimated 200,000 large language model searches a day. Right?

JF: Right. Okay. So that would be…

CF: It’s a 10x. Let me pull up my article.

JF: It’s 200,000 would be — 10x would be 200 million.

CF: 200 million.

JF: Right, okay.

CF: OpenAI is currently 200 million.

JF: Okay. All right. Go ahead.

CF: Or at least they can handle 200 million a day.

JF: Okay. All right. Okay, that makes a bit more sense. It’s very interesting, because you touched on my next question, because I was also going to ask you about NVIDIA (NVDA).

Obviously, you have that article out, which has a sell rating, or you might even say a short call. Now, in my other conversation with a fellow SA contributor, Trading Places Research, he also believes the valuation on NVIDIA is very high, but he does believe in the growth story; that was the way he put it. I'll give you his view, and then you can react to it and tell me what your thoughts are.

He basically believes NVIDIA is in a class of its own. There is no competition right now for its GPUs, or at least for that combination of GPUs and software. And he was talking a little bit about what you're mentioning as well, the competitors, for example Google, who is now coming out with what they're calling AI accelerators, or GPUs. And obviously that is how these companies are competing with NVIDIA, but so far, in his view, there is no real competition to NVIDIA. Is that how you see things, or do you see things a little bit differently?

CF: Actually, yes, I pretty much agree. So I think in the medium term there will be competition, but I generally see things this way: I don't see much competition in the short run. I mean, basically NVIDIA's gross profits are like 60% of revenue, right? So 40% gross cost, 60% gross profit.

So, obviously, people are looking at this company and are very interested in trying to take some of their business, because it's unusually profitable. Other than Apple (AAPL), I've never heard of a hardware company with such high gross profits. And so what this suggests is people are going to try to take NVIDIA's share, for sure.

But in the near term, and I would agree with the other person here, I don't think there are any competitors to NVIDIA. NVIDIA with CUDA basically dominates AI. And the problem is CUDA. CUDA is basically a language that allows you to tell GPUs what to do. And it's very useful in deep learning, and AMD doesn't have an alternative.

There are people, George Hotz was one, who tried to write a CUDA-like language for AMD (NASDAQ:AMD), but he has since quit. I don't know why; he spent like two weeks on it. But I'm talking to the tech people on Twitter, right, and they are saying essentially that probably the most valuable thing AMD engineers could do right now is write a CUDA-like language for AMD.

So I am in agreement that until AMD writes something like that, and until Google and Facebook sell their TPUs or their specialized machine learning computational units, NVIDIA is the only game in town. Now, that being said, my difference with the other author is that even if NVIDIA captures 100% of the market, it's not going to be big enough to justify the market cap. So that's where I differ from the other author.

JF: Okay, that's very interesting. Now, in terms of the competition, you mentioned AMD, and Google as well. Who do you think is best positioned? Because I've heard good things about what Google is doing with TPUs.

What do you think about the other companies? Who's going to challenge NVIDIA for the throne, in your opinion?

CF: Well, I don't know. I mean, there's one thing where Google is only offering their TPUs in the cloud, because they want them as a competitive advantage. So in a sense, Google is not really challenging NVIDIA in the space of selling GPUs, or computational units, right? Because they're only using their stuff in the cloud.

Maybe, I think their TPUs are better than NVIDIA's GPUs for large language modeling, mostly because NVIDIA's GPUs, and I think NVIDIA is also working on specialized GPUs, still have to actually handle graphics, right? Because people are still using them for playing video games and stuff, right?

So I imagine the TPUs are more specialized, and I'm sure they've updated them over the years. But I just, I think a lot of people, well, I don't know if I want to say this. But again, you can only use them if you're using Google in the cloud, right? If you're going to buy a GPU for your own computer, I think NVIDIA is, again, the only game in town.

Of the other players, I think probably Google is the best positioned, but again, they're not selling GPUs, right, or TPUs. So I would again say that no one's in a great position. I will say one thing: if AMD develops a CUDA-like language, they will be in the best position. And they are in a pretty good position in machine learning inference. So you can train your machine learning models, and then inference is essentially when someone types in a command like, explain World War II like I'm five years old, and the large language model does something like that. That's inference, right?

And so with inference, it is much easier to use AMD GPUs, because you don't need as much of the CUDA-like interface that you need in training, because training is much more complex than just inference. So in that aspect, AMD is already gaining share. And I think I read in one of the comments that in newly-built servers there's a larger share of AMD, higher than the 10% or 20% you would expect from AMD's regular market share.

These hyperscalers are actually buying more AMD than they used to, and probably that's because of inference. You don't need an NVIDIA GPU to do inference. You don't need CUDA, that much CUDA, to do inference. And so it's easy enough to do it with AMD, which basically has GPUs that are cheaper per FLOP. Does that make sense?

JF: It’s interesting because you talk about NVIDIA as having a dominant market position, but obviously, you probably think the valuation has gone too far at this point based on that analysis of the market share.

Is there a price at which you would consider buying NVIDIA? Is that something that you’re waiting for?

CF: Yes, I mean, so I talked about this before. I was hesitant to say, because it really wasn't an article about NVIDIA's valuation or NVIDIA's financial analysis. It was really an article from one angle: here is how much GPU demand large language models will actually create for NVIDIA, and why that demand isn't as big as people assume, right. And there are a lot of other aspects of the company I didn't touch upon, like valuation.

I didn't touch upon their Internet of Things business, or other aspects like whether they have a cost advantage in GPUs, because I don't know semiconductors. I don't really pay attention to those other aspects. I don't really spend a lot of time understanding data centers or hyperscalers. And so my main angle was coming from: this is a machine learning person's take on what the demand from large language models would be.

So, I don't really have a valuation at which I'd buy NVIDIA. Right now it's too expensive, based on just this demand. But again, I hope I was clear in my article in basically saying, this is not a fully-fledged stock pitch. This is really talking about one particular angle.

And if you have a thesis that takes into account other stuff, I’m not saying you’re wrong, and maybe it is a buy. I doubt it, but I didn’t do a full holistic view of the company. That makes sense?

JF: Yes, that makes sense. Now I wanted to know a little bit more about your own background. I’m viewing your profile here. It says, you’re a PhD Economist by trade who specializes in using machine learning to improve macroeconomic models. So please go ahead and tell us a little bit about your background.

CF: Yes, so actually, I'm still a PhD student; I guess I was not quite clear on that. But yes, I use machine learning algorithms to build macroeconomic models. I mean, I don't really use large language models in my work. I trained some transformers in the past, but I don't really spend that much time with them. The main thing I do is Bayesian machine learning on, essentially, macroeconomic models. So these are models that sort of predict the implications for the economy.

I will have a new article on Seeking Alpha at some point discussing Schrodinger, which is a company that uses MCMC, so Bayesian techniques, to design molecules and ligands for treating diseases. And so I'm happy to be able to use my Bayesian experience to talk about that company.

But that's the main thing I do, like predicting inflation. People have seen the GDPNow kind of models. Those are the kinds of models I use: building these structural models that can predict inflation, or what happens if the Federal Reserve drops interest rates by 2%, or whatever, and what the impact on the economy will be.

JF: That’s very interesting. So basically, if I’m understanding correctly, you would take different macroeconomic indicators and put them into this artificial intelligence machine learning model to kind of forecast what your view is of the macro economy in the future?

CF: Yes, I mean, economists, I don't want to say they're a little bit behind the curve, but they do things differently than a machine learning person. What you're describing is really what a machine learning person would do: you get some data, you put it in some black box model, it doesn't matter how it works, and it cranks out an output, right?

What we economists like to do is build really structural models. So we'll say, this is what happens with capital depreciation: at every period capital depreciates by a little bit, and then the capital that you have remaining goes into a production function, which determines how many goods you produce, and then you have people buying these goods, but they also have to balance how much they spend now versus save for the future.
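As a toy, hypothetical illustration of that kind of structural setup, here is one period of a made-up growth model: capital depreciates, what remains produces output, and households split output between consuming now and saving for later. Every parameter value is invented for the example; none of this is Fen's actual model.

```python
# One period of a toy structural model: every parameter has a real-world meaning.
def one_period(capital, depreciation=0.05, alpha=0.33, saving_rate=0.2, tfp=1.0):
    capital = (1 - depreciation) * capital      # capital depreciates a little each period
    output = tfp * capital ** alpha             # production function turns capital into goods
    consumption = (1 - saving_rate) * output    # households spend part of output now...
    investment = saving_rate * output           # ...and save the rest for the future
    return capital + investment, output, consumption

k = 10.0
for t in range(3):
    k, y, c = one_period(k)
    print(f"period {t}: capital={k:.2f}, output={y:.2f}, consumption={c:.2f}")
```

Because the depreciation rate, the production exponent and the saving rate each correspond to something measurable, you can turn the knobs and ask what-if questions, which is the counterfactual point made below.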

And so in a machine learning approach, you just put all these indicators into a model where none of the parameters make any sense; they're just fitted parameters, like linear regression parameters but more complicated, and then they produce an output. But with economics, we build all these models so that all the parameters have real-world analogues. And the hope is, essentially, that because they have real-world analogues, you can understand economic dynamics more effectively. And you can also do things like counterfactual analysis.

So, in a black box model, you can't say, if inflation went down by 2%, what would the output be? You can only say, when inflation went down by 2% in the data, what other things were correlated with that 2% decrease? And there's a difference between causation and correlation. With the structural model, essentially, I can make inflation go down by 2%, and this is a counterfactual environment.

If I use a black box model, and inflation goes down by 2%, this is a correlational model, right? So it's what in the real data is correlated with inflation going down by 2%. Whereas in my structural model, we don't even need real data. We can use the model to essentially run the counterfactual even without being disciplined by the data. It doesn't work as well as I'm making it sound, but that's what the attempt is.
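To make the correlation-versus-causation point concrete, here is a toy, hypothetical sketch: a made-up one-equation economy with a simulated confounder, with coefficients chosen purely for illustration.

```python
# Toy contrast: a correlational regression versus a structural intervention.
import numpy as np

rng = np.random.default_rng(0)

# Pretend this is the "true" economy: output depends on inflation AND a confounder.
confounder = rng.normal(size=500)
inflation = 0.5 * confounder + rng.normal(scale=0.5, size=500)
output = -1.0 * inflation + 2.0 * confounder + rng.normal(scale=0.5, size=500)

# Black-box approach: regress output on inflation alone. The slope mixes the
# causal effect with the confounder, so it can even get the sign wrong.
slope = np.polyfit(inflation, output, 1)[0]
print(f"Regression slope of output on inflation: {slope:+.2f} (true causal effect is -1.0)")

# Structural approach: the coefficient has a real-world meaning, so we can
# intervene directly, lowering inflation by 2 points while holding all else fixed.
counterfactual_change = -1.0 * (-2.0)
print(f"Structural counterfactual: lowering inflation by 2 changes output by {counterfactual_change:+.1f}")
```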

JF: Okay. Well, that’s very interesting. So what would your model — what is your model telling you now? Is that something that you’re looking at? What it’s telling you about inflation going forward, for example?

CF: Yes, I mean, I don't know. That's the funny thing, because I actually don't build these models; I actually build the math. I do the algorithmic data crunching behind it. So I say, this is how you should build your models. And then economists take the models that I build, and they say, okay, we're going to forecast inflation and stuff. So I don't, I mean, I know more economics than the average person on the street. But I would say I don't spend my time really doing economics; I'm really improving these models. So I spend time looking at a model and saying, this is one thing you can improve about it.

JF: So when you’re looking at investing are you just looking at particular companies then? Or I mean, do you use macro at all? Do you have a particular view of what the Fed is going to do? How a recession is going to affect us? So the macro isn’t… even though you build these models, the macro isn’t something you focus on so much?

CF: No, it's actually kind of funny. As an economist, I understand how bad macroeconomic forecasting is. So I don't, yeah, I don't do it. I guess what I do is I like quality companies at a reasonable or cheap price. And if I'm going to buy and hold for 10 years, which I don't actually do but I'd like to do, I would like to buy quality companies regardless of, essentially, the price or the economic conditions.

I will say, though, I spend a lot of time investing in foreign countries. And then I do actually take into account things like exchange rate risk, inflation, and stuff like that, simply because in a developed country, you just take these things for granted. But if your country is returning 20% a year in lira, or whatever, you got to look at, like, what the inflation rate is.

JF: It's been great having you on, Cameron. Before we log off, please tell everyone where they can find you on the internet and what you're doing on Seeking Alpha.

CF: Yes, I mean, you can find me on Seeking Alpha, Cameron Fen. You can find me on Twitter, @cameronfen1. And then I have a LinkedIn, you can just search Cameron Fen, and a GitHub if you're interested in my academic work, also under Cameron Fen. So, yes, looking forward to interacting with people. Thank you very much, James. I appreciate it.

JF: Awesome. Well, thanks again for coming on Cameron. It’s been great talking about this stuff. And hope we can do it again sometime.

CF: All right, sounds good.


