If things seem too good to be true on Wall Street, they usually are.
Roughly three decades ago, the advent of the internet changed the growth trajectory for corporate America. Since then, numerous new technologies, innovations, and next-big-thing trends have come along promising to be the greatest thing since sliced bread. However, none have come close to matching the game-changing potential that connecting the world via the internet brought to the table… until now.
The arrival of artificial intelligence (AI) opens an unlimited number of doors in virtually every sector and industry. When discussing “AI,” I’m talking about the use of software and systems for tasks that would normally be overseen or undertaken by humans. Giving software and systems the ability to learn and evolve over time without human intervention is what gives AI such broad-reaching utility.
According to a report released last year by the analysts at PwC, AI could add an estimated $15.7 trillion to the global economy by the turn of the decade. That’s an enormous amount of money and a pie large enough to allow for multiple winners.
But for the moment, no company has been a bigger AI winner than semiconductor company Nvidia (NVDA).
Nvidia has ridden its competitive advantages to a much-needed stock split
In short order, Nvidia’s graphics processing units (GPUs) have become the standard in high-compute data centers. According to a study conducted by semiconductor analysis company TechInsights, Nvidia shipped 3.76 million of the 3.85 million GPUs destined for AI-accelerated data centers in 2023. That’s a cool 98% market share, which borders on a monopoly.
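For readers who want to see where that figure comes from, here is a quick back-of-the-envelope check of the market-share math using the TechInsights shipment numbers cited above (the variable names are purely illustrative):

```python
# Back-of-the-envelope check of Nvidia's 2023 AI data center GPU market share,
# using the TechInsights shipment figures cited in the article.
nvidia_units = 3.76e6   # Nvidia AI data center GPUs shipped in 2023
total_units = 3.85e6    # total AI data center GPUs shipped in 2023

market_share = nvidia_units / total_units
# Prints roughly 97.7%, which rounds to the 98% figure cited above.
print(f"Nvidia's share of AI data center GPU shipments: {market_share:.1%}")
```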
There’s absolutely no question AI-driven businesses want Nvidia’s GPUs in their data centers to help train large language models (LLMs) and oversee generative AI solutions. This is plainly evident when considering that about 40% of Nvidia’s net sales are derived from “Magnificent Seven” members Microsoft, Meta Platforms, Amazon, and Alphabet. In its first-quarter report, Meta announced plans to increase its capital expenditures with the express purpose of accelerating its AI ambitions.
Having the first-mover advantage has also fueled Nvidia’s pricing power. Demand for the company’s H100 GPU has completely overwhelmed its ability to meet orders. Even with Taiwan Semiconductor Manufacturing beefing up its chip-on-wafer-on-substrate capacity, Nvidia can’t satisfy all of its customers. As a result, it’s been able to dramatically increase the selling price on its GPUs, which led to a scorching-hot gross margin of 78.4% during the fiscal first quarter (ended April 28).
Nvidia’s first-mover advantages in AI-accelerated data centers, coupled with the mountain of cash flow it’s generating from its high-priced H100 GPUs, are helping to support ongoing innovation as well. In March, the company unveiled its Blackwell GPU architecture, which is designed to further accelerate computing capacity in data processing, quantum computing, and generative AI.
Earlier this month, CEO Jensen Huang unveiled Nvidia’s newest AI architecture, named “Rubin.” Rubin will help train LLMs and will be paired with a new central processor known as “Vera.” While Blackwell is expected to make it into customers’ hands later this year, Rubin won’t make its official commercial debut until 2026.
This combination of innovation, first-mover advantages, and otherworldly pricing power has sent Nvidia’s share price soaring by more than 700% since the start of 2023. With the stock recently trading well above $1,000 per share, the company’s board of directors approved and executed a 10-for-1 stock split. Nvidia joined more than a half-dozen other high-flying companies in conducting a stock split in 2024.
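For readers unfamiliar with split mechanics, here is a minimal sketch of how a 10-for-1 forward split works. The pre-split share count and price below are hypothetical round numbers for illustration, not Nvidia’s actual figures:

```python
# Minimal sketch of a 10-for-1 forward stock split.
# The starting numbers are illustrative only, not Nvidia's actual figures.
split_ratio = 10
shares_held = 5              # hypothetical pre-split position
price_per_share = 1_200.00   # hypothetical pre-split price (the stock traded well above $1,000)

post_split_shares = shares_held * split_ratio      # 50 shares
post_split_price = price_per_share / split_ratio   # $120.00 per share

# The position's value (and the company's market cap) is unchanged by the split.
assert shares_held * price_per_share == post_split_shares * post_split_price
print(post_split_shares, post_split_price)
```

In other words, a split changes the share price and the share count, not the value of the underlying business.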
On paper, you couldn’t ask for a more perfect sales ramp-up than what Nvidia has offered. But when things seem too good to be true on Wall Street, they almost always are.
Even if Nvidia maintains its compute advantages, shareholders can still lose
As an investor, you should be seeking out businesses that have well-defined competitive advantages, if not impenetrable moats. At the very least, Nvidia accounting for 98% of AI-GPUs shipped during 2023 lands it in the former category.
But even if Nvidia retains its competitive GPU advantages, it may not be enough to keep its stock from eventually tumbling.
For example, it’s well-documented that Nvidia is set to face its first real bout of external competition from the likes of Intel (INTC) and Advanced Micro Devices (AMD). Intel’s AI-accelerating Gaudi 3 chip will begin shipping broadly to customers during the third quarter. Intel claims that Gaudi 3 offers inference and energy efficiency advantages over Nvidia’s H100.
Meanwhile, AMD is ramping up production of its MI300X AI-GPU, which is designed as a direct competitor to Nvidia’s highly successful H100. The MI300X is superior to the Nvidia H100 in memory-intensive tasks, such as simulations.
Even if Blackwell and Rubin ultimately blow Intel’s and AMD’s chips out of the proverbial water on the basis of compute capacity, GPU scarcity suggests Intel and AMD can still be big-time winners. A significant backlog for Nvidia’s chips should roll out the red carpet for these external competitors.
And it’s not just external competitors that are a potential problem for Nvidia. The company’s aforementioned top customers, which account for around 40% of its sales, are all internally developing AI-GPUs for their data centers. This includes Microsoft’s Azure Maia 100 chip, Alphabet’s Trillium chip, Amazon’s Trainium2 chip, and the Meta Training and Inference Accelerator (MTIA) from social media juggernaut Meta Platforms.
Are these AI chips currently a threat to Nvidia’s compute advantage? No. But their mere presence as complements to the H100 GPU takes up valuable data center “real estate” and signals that Nvidia’s top customers are purposefully lessening their reliance on the AI kingpin.
With more AI-GPUs becoming available from Intel and AMD, and many of the Magnificent Seven internally developing AI chips, the GPU scarcity that has lifted Nvidia’s pricing power into the stratosphere is going to ebb. The company’s 75.5% adjusted gross margin forecast (+/- 50 basis points) for the fiscal second quarter — a decline of 235 to 335 basis points from the sequential quarter — likely signals that these pressures are taking hold.
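If the basis-point math is unfamiliar, here is a small sketch that converts the guidance quoted above into a percentage range. One basis point is one-hundredth of a percentage point, and the figures used are simply the ones cited in this article:

```python
# Convert Nvidia's fiscal second-quarter adjusted gross margin guidance into a range.
# One basis point (bp) equals 0.01 percentage point.
guidance_midpoint = 75.5   # guided adjusted gross margin, in percent
band_bps = 50              # plus or minus 50 basis points

low = guidance_midpoint - band_bps / 100    # 75.0%
high = guidance_midpoint + band_bps / 100   # 76.0%
print(f"Guided range: {low:.1f}% to {high:.1f}%")

# The cited decline of 235 to 335 basis points from the sequential quarter
# equals 2.35 to 3.35 percentage points of gross margin.
print(235 / 100, 335 / 100)
```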
Furthermore, every next-big-thing technology, innovation, or trend over the last three decades has succumbed to an early-stage bubble-bursting event. Investors have consistently overestimated how quickly a new technology, innovation, or trend would be adopted by businesses and/or consumers.
Although AI looks like the second coming of the internet in terms of changing the growth arc for corporate America, most businesses have no well-defined game plans for how they’re going to deploy the technology to improve sales and profits. We see this overzealousness repeated cycle after cycle with next-big-thing investments.
Mind you, this doesn’t mean Nvidia won’t be wildly successful over the long run, or that its stock won’t have further increased in value when investors look back 10 or 20 years from now. But it does suggest that, even if Nvidia maintains its competitive GPU advantage, the technology’s early-stage immaturity has set Wall Street up for another bubble.
The leader of every next-big-thing innovation for three decades has seen its share price eventually decline by at least 50%. I believe Nvidia’s shareholders are on track to suffer this same disappointment.
Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool’s board of directors. John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool’s board of directors. Randi Zuckerberg, a former director of market development and spokeswoman for Facebook and sister to Meta Platforms CEO Mark Zuckerberg, is a member of The Motley Fool’s board of directors. Sean Williams has positions in Alphabet, Amazon, Intel, and Meta Platforms. The Motley Fool has positions in and recommends Advanced Micro Devices, Alphabet, Amazon, Meta Platforms, Microsoft, Nvidia, and Taiwan Semiconductor Manufacturing. The Motley Fool recommends Intel and recommends the following options: long January 2025 $45 calls on Intel, long January 2026 $395 calls on Microsoft, short August 2024 $35 calls on Intel, and short January 2026 $405 calls on Microsoft. The Motley Fool has a disclosure policy.