Nvidia’s ‘Most Underappreciated’ Business Is Soaring Like a ‘Rocket Ship’—Why Wall Street Missed the Boat

Published:
2025-08-06 18:55:18

Nvidia’s low-profile networking business has quietly hit escape velocity while analysts were busy staring at its GPUs.

The ‘quiet giant’ in Nvidia’s empire

Forget gaming chips and AI accelerators. The real story is Nvidia’s networking business, a division so overlooked that even the company’s earnings calls barely dwell on it. Yet its revenue is climbing like the proverbial rocket ship, its margins would make a SaaS CEO blush, and Wall Street has given it almost no credit.

Fueling the silent surge

There have been no splashy product launches and no keynote hype, just enterprise sales growing steadily as industries from healthcare to automotive race to make up for years of underinvestment. Better still, these contracts lock in recurring revenue.

Wall Street’s favorite blind spot

Analysts still can’t decide whether the segment is diversification or distraction. Meanwhile, it has quietly become a profit engine in its own right, and institutional investors are only beginning to catch up.

Connecting thousands of chips

When it comes to the AI explosion, Nvidia senior vice president of networking Kevin Deierling says the company has to work across three different types of networks. The first is its NVLink technology, which connects GPUs to each other within a server, or across multiple servers inside a tall, cabinet-like server rack, allowing them to communicate and boost overall performance.

Next is InfiniBand, which connects multiple server nodes across the data center to form what is essentially a massive AI computer. Finally, there’s the front-end network for storage and system management, which runs over Ethernet.

Nvidia CEO Jensen Huang presents a Grace Blackwell NVLink72 as he delivers a keynote address at the Consumer Electronics Show (CES) in Las Vegas, Nevada, on January 6, 2025. (Photo by PATRICK T. FALLON/AFP via Getty Images)

“Those three networks are all required to build a giant AI-scale, or even a moderately sized enterprise-scale, AI computer,” Deierling explained.


The purpose of all these connections isn’t just to help chips and servers communicate, though; it’s to let them do so as fast as possible. If a fleet of servers is to run as a single unit of computing, the machines need to talk to each other in the blink of an eye.

A lack of data going to GPUs slows the entire operation, delaying other processes and impacting the overall efficiency of an entire data center.
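The bottleneck described above can be sketched with a toy back-of-the-envelope model. Everything here, including the `gpu_utilization` helper and the sample numbers, is an illustrative assumption rather than an Nvidia figure: it simply assumes each step computes, then waits for data over the interconnect, with no overlap between the two.

```python
# Toy model (illustrative assumptions only, not Nvidia's methodology):
# estimate how interconnect speed limits effective GPU utilization when
# compute and communication happen serially with no overlap.

def gpu_utilization(compute_s: float, bytes_exchanged: float,
                    link_gbps: float) -> float:
    """Fraction of wall-clock time the GPU spends computing per step."""
    comm_s = (bytes_exchanged * 8) / (link_gbps * 1e9)  # transfer time in s
    return compute_s / (compute_s + comm_s)

# Hypothetical step: 10 ms of compute, 100 MB of data exchanged per step.
fast = gpu_utilization(0.010, 100e6, 400)  # 400 Gb/s-class fabric
slow = gpu_utilization(0.010, 100e6, 25)   # 25 Gb/s commodity link
print(f"fast link: {fast:.0%}, slow link: {slow:.0%}")
# → fast link: 83%, slow link: 24%
```

Under these assumed numbers, the same GPU sits idle three-quarters of the time on the slow link, which is the efficiency loss the article describes, multiplied across an entire data center.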

“[Nvidia is a] very different business without networking,” Deepwater Asset Management managing partner Gene Munster explained. “The output that the people who are buying all the Nvidia chips [are] desiring wouldn’t happen if it wasn’t for their networking.”

And as companies continue to develop larger AI models and autonomous and semi-autonomous agentic AI capabilities that can perform tasks for users, making sure those GPUs work in lockstep with each other becomes increasingly important.

That’s especially true as inferencing — running AI models — requires more powerful data center systems.

Inferencing powers up

The AI industry is in the midst of a broad reordering around the idea of inferencing. At the onset of the AI explosion, the thinking was that training AI models would require hugely powerful AI computers, while actually running them would be somewhat less power-intensive.

That led to some trepidation on Wall Street earlier this year, when DeepSeek claimed it had trained its AI models on Nvidia chips less powerful than the company’s top-of-the-line parts. The thinking at the time was that if companies could train and run their AI models on less powerful chips, then there was no need for Nvidia’s pricey high-powered systems.

But that narrative quickly flipped as chip companies pointed out that those same AI models benefit from running on powerful AI computers, allowing them to reason over more information more quickly than they would while running on less-advanced systems.

“I think there's still a misperception that inferencing is trivial and easy,” Deierling said.

“It turns out that it's starting to look more and more like training as we get to [an] agentic workflow. So all of these networks are important. Having them together, tightly coupled to the CPU, the GPU, and the DPU [data processing unit], all of that is vitally important to make inferencing a good experience.”

Nvidia’s rivals are, however, circling. AMD is looking to grab more market share from the company, and cloud giants like Amazon, Google, and Microsoft continue to develop their own AI chips.

Industry groups also have their own competing networking technologies, including UALink, which is meant to go head-to-head with NVLink, explained Forrester analyst Alvin Nguyen.

But for now, Nvidia continues to lead the pack. And as tech giants, researchers, and enterprises continue to battle over Nvidia’s chips, the company’s networking business is all but guaranteed to keep growing as well.


Email Daniel Howley at [email protected]. Follow him on X/Twitter at @DanielHowley.
