In 2023, the AI industry spent an estimated $50 billion on Nvidia chips to train AI models. The payoff for all that spending, according to Sequoia Capital, is $3 billion in revenue. Is that a return worth bragging about? We also take a look at the Gartner Hype Cycle.
In this podcast, Motley Fool host Ricky Mulvey talks with analyst Asit Sharma about how investors might think about companies’ AI spend.
They also discuss:
- The rate of improvement for AI models.
- How companies not in the “Magnificent Seven” of stocks are using AI.
- One company that’s spending smartly on the new technology.
- The Gartner Hype Cycle.
To catch full episodes of all The Motley Fool’s free podcasts, check out our podcast center. To get started investing, check out our quick-start guide to investing in stocks. A full transcript follows the video.
This video was recorded on June 22, 2024.
Asit Sharma: The basis of all this is that the processes are much more intensive on servers. They cost a lot more because they're delivering some magic to us that we didn't have before. There are hallucinations in there; there's perceived inaccuracy in some responses. It's early stages. But so far, companies are willing to invest, and some consumers are willing to pay for this. Many of us are getting it free as an offer just now, and it's being integrated with products that we use every day. But the general gist of this is that people are willing to pay for that extra bit of magic.
Mary Long: I am Mary Long, and that's Asit Sharma. Artificial intelligence is everywhere, and all that data is expensive. A KPMG survey suggests that 43% of US companies with over $1 billion in revenue plan to invest at least $100 million in generative AI over the next 12 months. But how do you tell whether all that investment is actually paying off? My colleague Ricky Mulvey caught up with him to talk about how investors might consider a company's AI spend. They also discuss how non-Mag Seven companies are using AI, the difference between large and small language models, and the future of AI friendships.
Ricky Mulvey: Let's start with the positive. Artificial intelligence represents world-changing technology breakthroughs. It's already providing a co-pilot at my job, and it's changed the way I search on the Internet, probably for you as well, if you're on Google [Alphabet], which I assume you are? You're not a Bing guy, not out there on the Ask Jeeves, Yahoo, Bing platforms.
Asit Sharma: You read me right, brother. I am a Google person. I don’t have anything against Bing. I do use it from time to time, but I’m a Google person first.
Ricky Mulvey: I'm trying to get us to this place where we're in the middle, where we recognize the extraordinary positives and also some of the risks that come with AI in this investment hype, what I would call a boom cycle. Let's start with the positive. What's AI allowing you to do that you weren't doing just a couple of years ago?
Asit Sharma: Ricky, one of the things it's allowing me to do is to catch up on a business story that might have changed in a really fast way. I cover many companies for different services that I work on. You can't keep track from day to day of what's going on in every last business. Sometimes I'm called upon to find information quickly and try to make a quick assessment. In the old days, I would have to go to Google, SEC filings, interviews, etc. Now, I can just ask an LLM to brief me on what happened during this time frame when X event occurred. That's one thing. I think that's really cool. Another use is to use large language models as a surfacing tool to try to find great investments. Let's be real, their reasoning capabilities aren't better than mine or other analysts' yet. They get a little better every day, but they're extremely useful if you know how to point them in a direction to show you companies that might be worthy of further research. Lastly, there's a foreign language I've been trying to learn, and I can just ask ChatGPT, here's the sentence; I just don't understand what's going on here with this past participle. Can you explain this to me? It gives a really cogent answer that helps me fill in the blanks, and I think that's really fun.
Ricky Mulvey: I think that's your point, though. It's good for idea generation, but it requires further skimming. For my job, I'll use it; I'll ask it, how does this company make money, if I need to get up to speed on a company that I may be talking about in the future. It's almost like a handyman at times, where I've asked it, I have this thing going wrong with a piece of software that I use for work; what can I do to accomplish the thing that I'm trying to accomplish? Sometimes it's with audio editing. The other thing I'm excited that I've used it for is cooking. I have these ingredients in my refrigerator; what can I do with them? It can give you some different options. It's that idea generation stuff that's great. But you know what else? Every time I ask it a question, every time you ask it about a piece of investing news, it costs someone money, because these artificial intelligence programs are really expensive to run. The infrastructure is really expensive. That's why Nvidia is making so much money right now; there's so much energy on the spending side. According to the research firm KPMG, about 43% of companies making over $1 billion a year in revenue are expected to spend at least $100 million on generative AI. Companies generally aren't stupid. As a general statement, what's the reasoning, or the fear, behind spending this much money? Why not take a more wait-and-see approach?
Asit Sharma: Let's tackle the reasoning part of it first, Ricky. One is that companies of this size are pretty good at understanding where they can get a return on investment. They'll do a lot of experimentation, have smart people like software engineers try to run things in a different way, and what they're finding is that the small experiments are paying off. Could we save 4,000 person-hours by having the large language model solve this coding problem for us rather than hiring people to do it next year? The answer to the experiment may be, yes, we definitely can do that. Second, those same software engineers and IT people have already identified where proprietary data can be leveraged. Whatever your industry is, you've got something in your data stream, historical and ongoing, that's yours, that's proprietary, that you can use to further sales if you can only find the right product or service fit or extend your current lineup. Many companies are starting to see how they can use large language models to make sense of that proprietary data and sell that upstream. That makes the investment promising. Lastly, there are a lot of opportunities for cost savings. If you, Ricky, or I can now hold up a camera to our fridge and get a recipe out of that which turns out to be a remarkably tasty dish, we've saved ourselves a heck of a lot of time. Those types of optimizations are happening all over the business world. To get to the first part of your question, there's also a fear case here, a FOMO case. Some companies are making decisions because they don't want to get left behind, even though they haven't really pinned down the return on investment.
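To put rough numbers on that kind of experiment, here's a minimal sketch in Python. Every figure in it is an assumption for illustration; only the 4,000-hour example comes from the conversation above.

```python
# Toy ROI check for a "save 4,000 person-hours with an LLM" experiment.
# The hourly cost and tooling cost below are assumed, not reported figures.

hours_saved = 4_000           # from the hypothetical experiment above
loaded_hourly_cost = 120.0    # assumed fully loaded cost per engineer-hour, $
llm_tooling_cost = 150_000.0  # assumed annual cost of the model and tooling, $

savings = hours_saved * loaded_hourly_cost
net = savings - llm_tooling_cost
roi = net / llm_tooling_cost

print(f"Gross savings: ${savings:,.0f}")  # $480,000
print(f"Net benefit:   ${net:,.0f}")      # $330,000
print(f"ROI on spend:  {roi:.0%}")        # 220%
```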
Ricky Mulvey: There's definitely some pivoting, where what was machine learning just a few years ago makes you an artificial intelligence thought leader now, Asit, because people want to see those mentions on the earnings calls, even for non-computing companies. Let's talk about the spend. Sequoia Capital estimates that the industry spent about $50 billion on chips from Nvidia to train these AI models in 2023. That spending only brought in $3 billion in revenue. I say "only" with some air quotes; it's a lot of money. But it's also just 6% of the total spent. We got a 6% return. Asit, you're the CFO of all of AI now. Are you satisfied with that return? Is this inefficiency? Is this overspending, or is this just the cost of growth in an emerging technology, darn it, and you've got to spend money to make money?
Asit Sharma: I'm taking the pencil from between my ear and my shock of hair and waving it back at the CEO and saying, this is about what we expected, boss. Now, I don't mean to make fun of CFOs there; I'm an accountant myself. But this sounds to me like about where we should be in this stream. Let me give you an example from the real world. If you and I are real estate developers looking at a big parcel we can develop on the edge of a city where the downtown is petering out a bit, we're going to have to buy that land and put up a few buildings, maybe a commercial building, some housing, but we've got this long-term master project going. In the first year, the income stream that we can capitalize ain't all that much. We might be losing money, and we probably are, overall, if we consider the cost of our debt to pull this project off. But over time, Ricky, businesses move in, subdivisions get built, restaurants come in. Then that part of the city gets big enough that the government wants to put a public library there. Everyone can benefit. There are a lot of players that come in, and we start making money in year X. We see we've recouped our investment, and then we're a little bit off to the races. In what we call the AI industry, this is how the hyperscalers, the companies that are buying the GPUs, the Amazons, Metas, Microsofts, and Oracles, along with private and enterprise businesses, are treating it: like a long-term development. They're not expecting to make a bunch of money up front in year 1, year 2, year 3.
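To put that long-term-development framing in numbers, here's a back-of-the-envelope sketch using the Sequoia estimates Ricky cited. The growth rate in the payback loop is a pure assumption for illustration, not a forecast.

```python
# Back-of-the-envelope on the Sequoia Capital estimates cited above.

chip_spend = 50e9  # estimated 2023 industry spend on Nvidia chips, $
ai_revenue = 3e9   # estimated revenue attributed to that spend, $

print(f"First-year revenue vs. spend: {ai_revenue / chip_spend:.0%}")  # 6%

# Hypothetical payback horizon: if that revenue doubled every year (an
# assumption, not a prediction), how long until cumulative revenue covers
# the original chip spend?
revenue, cumulative, years = ai_revenue, 0.0, 0
while cumulative < chip_spend:
    cumulative += revenue
    revenue *= 2.0  # assumed annual doubling
    years += 1
print(f"Years to recoup under assumed doubling: {years}")  # 5
```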
Ricky Mulvey: The Internet can make every year feel like a dog year. In a way, it's easy to forget that this boom has only been around for about 18 months now, since, I think, November of 2022-ish. Maybe you're not getting that immediate return. These applications are very expensive; we mentioned that earlier. Take just one use case: asking ChatGPT a question versus asking Google a question. It's much more expensive to ask ChatGPT a question than it is to ask the Google search engine. I want to use that as an example to get into the cost of it. Why are these AI applications so much more expensive to run than the computer applications that we're used to?
Asit Sharma: When we ask Google a question, we're basically asking a computer program to run a series of algorithms that look at an index that's been built up over a long period of time. It's a predictable workload. There's this really neat framework of knowledge that Google dips into. It has something called relevance; the latest algorithms tell it which search results to put up top. But that workload was figured out a long time ago, and Google knows how to make money off of it. When we ask a large language model, say ChatGPT, to do the same thing, the process is totally different. First of all, it's inferential. The large language model is taking our question and using probability-based decision-making to give us an answer. It also consults Google. You can see that now when you do certain web searches with different LLMs, different chatbots; they're consulting Google. This is happening. It's also serving the answer up to you in natural language, which is another inferential process, which is a harder workload on the servers. That costs more money, too. Then there's the constant training behind the scenes of the models. There are so many engineers working on this and making sure that the AI version of Google doesn't tell you to do something weird with your pizza to cool it down.
Ricky Mulvey: Add glue to it.
Asit Sharma: Add glue to it. The basis of all this is that the processes are much more intensive on servers. They cost a lot more because they're delivering some magic to us that we didn't have before. There are hallucinations in there; there's perceived inaccuracy in some responses. It's early stages. But so far, companies are willing to invest, and some consumers are willing to pay for this. Many of us are getting it free as an offer just now, and it's being integrated with products that we use every day. But the general gist of this is that people are willing to pay for that extra bit of magic.
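One way to see the cost gap Asit is describing: a search engine answers with a cheap lookup into an index built ahead of time, while a chatbot pays a full forward pass through the model for every token it generates. A toy sketch; the cost units and the example index are invented, not real benchmarks:

```python
# Toy contrast between an index lookup and autoregressive generation.
# Cost units are arbitrary; real systems differ enormously.

# A search engine answers from an index built ahead of time: one cheap lookup.
search_index = {"best pizza dough": ["result A", "result B", "result C"]}

def search_cost(query: str) -> float:
    _ = search_index.get(query, [])
    return 1.0  # roughly one lookup's worth of compute

# A large language model generates token by token; each token requires a
# full forward pass through the model (billions of multiply-adds).
def generation_cost(answer_tokens: int, cost_per_pass: float = 50.0) -> float:
    return answer_tokens * cost_per_pass

print(search_cost("best pizza dough"))     # 1.0
print(generation_cost(answer_tokens=200))  # 10000.0: you pay per token
```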
Ricky Mulvey: Well, in a lot of tech adoption cycles, you deliver something of value to your customers below cost, hope they enjoy it enough, and then, eventually, you figure out how to make a profit on it. The rate of growth on these generative AI video models has been just incredible, Asit. You've seen the difference in the video that comes out of these tools from just a year ago to now. Have you seen the video of Will Smith eating noodles, from back when these models first started?
Asit Sharma: You mentioned this, but I haven’t seen it.
Ricky Mulvey: It doesn't quite look human. It looks very much like it came from a weird computer algorithm. Now there's another video of a man eating noodles, and it looks high-definition, like something someone shot in a cafe. Rates of growth like this always have to slow down. Christopher Mims of the Wall Street Journal, he's a tech columnist there, he's been a guest on Motley Fool shows, and I really admire his column and his writing, basically points out that all of these AI models have been trained on more or less the entire Internet and are running out of additional data to hoover up. There aren't 10 more Internets' worth of human-generated content for today's AI models to inhale. He's basically saying that these AIs are going to start learning off each other, and that may be a little recursive, and you're not going to get the same rate of change and improvement. Do you think this data limit could put a damper on the rate of improvement for AI models, or are we maybe underestimating how much improvement we have left to go?
Asit Sharma: On the consumer-facing frameworks that are iterating on answers to everything, I think Christopher Mims has a great point here. We've already trained the most prodigious, the biggest LLMs on the bulk of human knowledge that's on the Internet, and there's so much human knowledge on the Internet. Having said that, it's hard to see how the quality will improve over time without having a phase of degrading first. We see this in large language models, too, as some of their own output starts to creep in. They get a little dumber, and then the engineers go in and tweak the models, and they get a little smarter again. I think it's a great point. But for things that aren't trying to be all things to all people in terms of question-to-answer frameworks, the underlying technology is strong, and it's getting stronger. There are so many specialist companies working on various ways that information is analyzed and regurgitated back to us. Synthetic data is one partial solution to this problem. Synthetic data is data that's created to train a model in place of human-generated data. If you're a business with a specific use case, this could be important; in a manufacturing industry, for example, you don't necessarily need private data, but you want the outputs of a certain process to make something better. Synthetic data is going to be more important to such models. Then there's also quality over quantity.
Asit Sharma: If you're working on specific questions in, let's say, the pharmaceutical industry, you don't really need to keep feeding more and more information into a model. What you're trying to do is make sure you have a better tool for analyzing the information. There are a couple of things I want to mention. There are small language models, which don't need as much information or even processing power to run; Microsoft has a couple that are pretty interesting. These are solving the problems of large language models through reasoning. Some models are trying to get better at reasoning the way our minds reason, and that could be a way this whole ecosystem gets a little better without having to digest so much information, without shoving a bunch of information down a pipe to get smarter. Instead of information, let's work on the thought process; let's work on how these models think. Then finally, of course, you've heard me and other analysts say this before: proprietary data sets are very powerful for companies. I was just talking about this a moment ago. That's another case where, at least in business, you don't necessarily need to worry about the problem Chris Mims is talking about. You're focused on your data stream and how you can derive insights and maybe better products.
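That point about models getting a little dumber on their own output has a well-known toy illustration: fit a simple distribution to its own finite samples over and over, and it tends to lose its tails. A minimal sketch; the Gaussian stand-in and sample size are arbitrary, a cartoon of the phenomenon rather than an LLM experiment:

```python
# Toy "model collapse" demo: refit a Gaussian to samples drawn from the
# previous fit. With a finite sample each round, the fitted spread tends to
# drift downward, i.e. the model slowly forgets the original tails.
import random
import statistics

random.seed(0)
mu, sigma = 0.0, 1.0  # the "original data" distribution
for generation in range(1, 9):
    samples = [random.gauss(mu, sigma) for _ in range(100)]
    mu, sigma = statistics.mean(samples), statistics.stdev(samples)
    print(f"generation {generation}: fitted sigma = {sigma:.3f}")
# Exact values vary with the seed, but over many generations sigma shrinks
# toward zero: each round trains only on the previous round's output.
```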
Ricky Mulvey: I want to talk about small language models for a moment. There's a lot more familiarity with large language models. How might someone encounter a small language model? What are these looking like? What problems are they addressing?
Asit Sharma: Small language models, I think we'll see more of them in the real world now. Microsoft, which I named, has a couple of these. We're going to see these with devices in the future, so you can spin up a small language model from a device. That merges with chip design: companies from Arm to AMD to Intel are designing their chips to have neural processing units right on the chip, right on the phone. When you combine these two concepts, you get devices that are smarter. They don't necessarily have to go to a server out in the cloud to answer a big question, but they could tell you, Ricky, if you take that photograph of your fridge, how to make a better recipe based on information that's already stored on your phone, without you having to go to ChatGPT or whatever model you're using. So a bit of reasoning capability, plus information that already exists on the device, plus information that's taken in, spins out something that's reasoned and solves your quick problem.
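A hedged sketch of that on-device idea: a hypothetical router tries a small local model first and only falls back to a cloud model when the local one isn't confident. Every function name, threshold, and canned answer here is made up for illustration; no real model API is being called.

```python
# Hypothetical router: try a small on-device model first, fall back to the
# cloud only when the local model isn't confident. All names are invented.

def local_slm(prompt: str) -> tuple[str, float]:
    """Stand-in for a small language model on the device's NPU.
    Returns (answer, confidence in [0, 1])."""
    if "recipe" in prompt.lower():
        return ("Fridge-photo stir-fry: use the eggs and peppers.", 0.9)
    return ("", 0.1)

def cloud_llm(prompt: str) -> str:
    """Stand-in for a round trip to a large cloud-hosted model."""
    return f"[cloud answer to: {prompt!r}]"

def answer(prompt: str, confidence_floor: float = 0.7) -> str:
    reply, confidence = local_slm(prompt)  # fast, private, no network
    if confidence >= confidence_floor:
        return reply
    return cloud_llm(prompt)               # slower, costlier, smarter

print(answer("Suggest a recipe from my fridge photo"))  # handled on device
print(answer("Summarize this 10-K filing"))             # escalates to cloud
```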
Ricky Mulvey: Then, to your earlier point, there are times when synthetic data would work for something like a digital twin. You have a manufacturing floor, and it doesn't matter if you're using recursive data. You're just trying to make the model better so you can make an assembly line run maybe two-tenths of a second quicker than it normally does.
Asit Sharma: Totally. I should say we're already at that point Jensen Huang predicted a couple of years ago, when he said that in the future there are going to be more virtual robots than physical robots. Maybe we haven't quite reached it yet, but we're headed there pretty quickly.
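A rough illustration of that digital-twin point: generate synthetic cycle times for a simulated assembly line and test a tweak against them, with no real production data involved. Every parameter below is invented.

```python
# Minimal sketch of the digital-twin idea: synthetic data standing in for
# real factory measurements. All the numbers are made up.
import random
from statistics import fmean

random.seed(42)

def simulate_cycle_times(mean_s: float, jitter_s: float, n: int) -> list[float]:
    """Generate synthetic assembly-line cycle times, in seconds."""
    return [random.gauss(mean_s, jitter_s) for _ in range(n)]

baseline = simulate_cycle_times(mean_s=10.0, jitter_s=0.5, n=10_000)
tweaked = simulate_cycle_times(mean_s=9.8, jitter_s=0.5, n=10_000)  # assumed 0.2s saving

print(f"Baseline average: {fmean(baseline):.2f}s")
print(f"Tweaked average:  {fmean(tweaked):.2f}s")
# Because the data is synthetic, the experiment can be rerun endlessly
# without touching, or even possessing, proprietary production data.
```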
Ricky Mulvey: We'll see if Elon gets his way with the humanoids. That's a separate show, though. We talk often about how the Mag 7 companies seem to be using AI. There are clear winners there: if you're a smaller company and you need to build an LLM, you probably need Nvidia's chips for that. Maybe you'll use AMD's chips if Nvidia's are not available for sale. But I'm curious about how the other companies seem to be using it, and it breaks down into a few categories. It comes down to, essentially, I would say, personalized chatbots. If you go on Airbnb and you need help with something, an Airbnb chatbot might be able to help you a little bit quicker or a little more efficiently than a human agent will; I should say more efficiently for the business. Simplifying contracts: that's how Cisco is using it. Co-piloting: Microsoft is famous for this, and I've already used a Mag 7 company in that example, but Microsoft Copilot on Office 365 helps you write emails a little more quickly. Then forecasting is another example: if you're a retailer and you want to forecast how many jeans you're going to sell in, let's say, Indianapolis, Indiana, in May, you can pull all of the data that you have on regions similar to Indianapolis and then maybe make a better prediction about how many jeans you need to sell to your retail store. Any reflections on these use cases, or anything else you'd want to add about these smaller companies spending money on AI?
Asit Sharma: They're all fun. Let's look at that personalized chatbot one. Intuit, which you and I talked about before the show, is really intriguing to me, because they have those proprietary data sets in consumer-facing applications. If you use QuickBooks Online, they have information not only about you but about every other business in your ZIP code that's similar to you. This is interesting. I had a conversation here on Motley Fool Money with the CEO of Intuit, Sasan Goodarzi, just about a year and a half ago, and he was telling us, Ricky, that, look, we're going to combine all of this interesting data and help you learn how to manage your cash flow better. Are you running out of funds? Well, the chatbot will be able to tell you: every other business like yours in this ZIP code doesn't spend quite as much on seed. Maybe you're buying a premium seed, and your profit and loss would be better if you dropped down one level. Now, he didn't use that very specific example, but he was spinning up a few similar types of scenarios. I think that's a great illustration of a use case from a company that's big. Intuit has a big balance sheet; they could go out and buy a bunch of GPUs, but really, their focus is: let's make something that will help our end user, the consumer. I think that's going to make them some money over time. I also like that you mentioned co-piloting of code. There are so many companies offering co-pilots now in various walks of business life, and this is something we'll see not just on the consumer side, Ricky; people in business organizations will see it too. I think it almost becomes commoditized at some point, though. These models become so powerful that we don't even think of it as co-piloting anymore. It's just part of our daily routine: you type to, talk to, or show an image to your personal co-pilot for whatever your workflow is in any industry, and it automatically does things that advance your day. Now, hopefully, that results in a little more free time in our day. It doesn't mean that we don't have work. But we'll have to see what that future looks like.
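Back to the forecasting example from Ricky's list: at its simplest, it's a regression over comparable sales history. A minimal, dependency-free sketch with made-up numbers; real retail forecasting models are far richer:

```python
# Toy demand forecast for the jeans example: fit a trend line through a few
# months of sales from a comparable store, then project the next month.
# All the sales figures are invented.

history = [120, 135, 150, 160, 175]  # jeans sold per month, most recent last

n = len(history)
xs = list(range(n))
x_mean = sum(xs) / n
y_mean = sum(history) / n

# Ordinary least squares for y = slope * x + intercept, done by hand so the
# sketch stays dependency-free.
cov = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, history))
var = sum((x - x_mean) ** 2 for x in xs)
slope = cov / var
intercept = y_mean - slope * x_mean

forecast = slope * n + intercept
print(f"Forecast for next month: {forecast:.1f} units")  # 188.5
# A production system would add seasonality, promotions, and data from
# comparable regions; that's where the AI earns its keep.
```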
Ricky Mulvey: The Jetsons future has always been promised with every advancement. We’ll see.
Asit Sharma: That show had such optimistic music. I could not help but think that it's an optimistic case because of the Jetsons music. But now we've grown up, Ricky. We're no longer watching the cartoons, and now we wonder: is the music going to be really peppy? Is it going to be a little ominous? I don't know. It's time for real work.
Ricky Mulvey: You hit on basically two areas that Sequoia Capital is bullish on in the near term, which are customer support and AI enterprise knowledge. The other area that Sequoia is bullish on is AI friendship, pointing out that when people use these AI chatbots as friends, they have maybe stickier adoption rates. I understand the business perspective, but it's also a little concerning and terrifying, because it's, let's find more friendships through these LLMs that really know me for me, and you don't necessarily have the consequences of human friendships and relationships. That's one thing to flag. We've talked about the amount of spending, and we've talked about how companies are creating these new applications for customers and businesses, but are there any companies impressing you with the way that they're spending on AI in this boom cycle?
Asit Sharma: Wait a minute, Ricky. I'm going to answer your question. But you can't bring up such an intriguing point for me and our listeners without us talking about it for just two seconds.
Ricky Mulvey: We have got some time. Let's talk about AI friendships.
Asit Sharma: I find it fascinating also. The transformer model, the technology that underlies all LLMs, is just sneakily good at figuring out what's important in a group of words, in a text, in any visual data you give it. That's why they're so powerful and, I'll use this word again, so magical. But it also means that they're sneakily good at understanding the buttons to push to get certain responses. If you build an AI with a lot of data to be able to manipulate emotion, you can develop an AI that many people will respond to and feel, up to a point, like, this is very similar to relationships I have in real life, and actually, I'm lonely, and I want to go home and talk to my bot. I find that interesting. I don't quite know what to make of it. What do you think about this? What do you make of it?
Ricky Mulvey: I think it's hugely concerning at a societal level, because it's going to offer people a facsimile of human relationships that may be serviceable. What happens when you don't need to go out on a date because you have a chatbot who's interested in you and nice to you, always texts you back, and is interested in the same things that you are? That's so hard to find in the real world. The chatbot is never going to betray your trust. I think it's one of those things that seems silly on the surface level; there's a movie, Her, that covers it a little bit. But it's going to be one of those things that continues the trend of, I would say, social media and a lot of life becoming more digitized, which is that people aren't going out as much as they used to, because you can still get those dopamine buttons pushed on your phone. I think these AI relationships are going to continue to be an amplification of that.
Asit Sharma: The Police said, or maybe it was Sting: a scientific means of bliss will supersede the human kiss. This is that coming true a little bit, to me. I also find it unsettling. I think, too, the pendulum swings both ways. I see younger people getting fatigued by social media and going out more. I think there comes a point where the emotional support you may get from a bot just starts to feel synthetic, and you crave that human touch, that human conversation. But to get to that point, many people may dive into this stuff, just like they did with social media. That's a scary thought. Then we should also say that, on a societal level, it may be very helpful for some people who are, say, elderly or otherwise unable to do what they used to do in terms of social connection, or to the sick. So it may provide a use there. Again, not a perfect use, but one of some comfort. There are some really interesting philosophical questions that have been posed over time by people who thought this out before the technology reached the point where it is today.
Ricky Mulvey: Asit, how about that for a setup for a discussion about which companies are spending on AI in an intelligent way? I'm looking at the outline; you have one we've talked about on the show previously. What company is impressing you with how they are spending on artificial intelligence?
Asit Sharma: I'm going to be boring here, but I'm going to say it because it's true. I think Oracle is really impressing me and some other investors. They had earnings come out recently, and I think the stock was up 12% or 13%. But all along, Larry Ellison, the chairman of Oracle, and Safra Catz, the CEO, have been saying that the technology Oracle has, its RDMA architecture, is just faster. It's less expensive for companies to train AI on, and they just signed a bunch of contracts this last quarter and added tens of billions to the future revenue they're going to recognize out of this. So there's some of that return coming in, Ricky. It wasn't in the first year, but companies like Oracle are going to see that return. I think that's a really smart case. They had a little bit better technology than some other web service providers. I would say that in some ways, their server configurations are technically superior to some of Amazon's setups and some of Microsoft's setups, and that's why those companies are forming relationships with Oracle. So there's one. You and I were chatting about the hype cycle. Personally, and I'd love to hear your opinion, I think we're near that trough. Look up the Gartner Hype Cycle, those of you who aren't familiar with it, because these are industry experts who really nail what a hype cycle looks like. According to the way they label these, I think, Ricky, we're somewhere between the trough of disillusionment and the slope of enlightenment. People are past the hype, and many investors are starting to recognize that there are only a few concentrated winners in this game, so they're getting disillusioned. But that's a great point in the cycle, because it means there are companies working on stuff now that we're going to buy and invest in as time goes on. Where do you think we are in this cycle?
Ricky Mulvey: I like how you separated that out for the biggest companies and then the smaller ones. I think for the smaller ones, we might be moving more toward the trough of disillusionment. But consider the big companies, like Nvidia, whose market cap at the time of this recording is around $100 million per employee. That sounds like some inflated expectations to me, Asit.
Asit Sharma: Yeah, probably. This is not news to anyone, but Nvidia is a cyclical company, so at some point in time demand and supply will hit equilibrium, and we've seen this story with Nvidia; they're going to sell off some. Over time, I think it's still a smart investment, but sure, things are very concentrated now. That bodes well for the future if this suburban development works out, Ricky, because I've seen ones that were just middling, or that hit the wrong part of real estate outside that city center, which was piddling along a bit.
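For reference, the per-employee math Ricky cited works out roughly like this; both inputs are mid-2024 approximations rather than exact figures:

```python
# Rough check on "market cap per employee," mid-2024 ballpark figures.
market_cap = 3.1e12  # Nvidia market cap, roughly $3.1 trillion at the time
employees = 30_000   # headcount, approximately

per_employee = market_cap / employees
print(f"~${per_employee / 1e6:.0f} million of market cap per employee")  # ~$103M
```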
Ricky Mulvey: Asit, thank you so much for your time and your insight. I love these longer-form conversations. Thanks for being here.
Asit Sharma: Same here, Ricky. I appreciate it.
Mary Long: As always, people on the program may have interests in the stocks they talk about, and the Motley Fool may have formal recommendations for or against, so don’t buy or sell stocks based solely on what you hear. I’m Mary Long. Thanks for listening. We’ll see you tomorrow.