Nvidia’s $26 Billion Bet: Beyond Hardware, Open-Source AI Models Drive 990% Surge
Nvidia has issued a strategic warning to competitors, signaling a shift beyond chip dominance with a $26 billion investment in open-source AI models. The company's filings reveal a five-year plan to secure its software supremacy, as its CUDA platform and AI ecosystem have fueled staggering 990% growth since ChatGPT's 2022 launch, with revenue projected to hit $358.7 billion by 2026.
Nvidia’s new model takes a middle road
To build on that software side, Nvidia recently released a new open-source AI language model called Nemotron 3 Super. The model is built for enterprise-grade AI systems that involve multiple AI agents working together.
It carries 120 billion parameters and uses a design called Mixture-of-Experts, which activates only a fraction of those parameters for any given input, keeping inference costs below what the total parameter count would suggest.
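As a rough illustration of how Mixture-of-Experts routing works in general (a toy sketch, not Nemotron's actual architecture; the expert functions, router scores, and top-2 setting below are all invented for the example):

```python
# Toy Mixture-of-Experts routing sketch. Illustrative only; not
# Nemotron's real design. Experts here are simple scalar functions
# standing in for full expert networks.
import math

def softmax(scores):
    """Turn raw router scores into a probability distribution."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def moe_forward(token, experts, router_scores, top_k=2):
    """Route a token to the top_k experts and blend their outputs.

    Only top_k experts actually run, which is why an MoE model can
    carry far more total parameters than it activates per token.
    """
    probs = softmax(router_scores)
    # Select the top_k experts by router probability.
    ranked = sorted(range(len(experts)), key=lambda i: probs[i],
                    reverse=True)[:top_k]
    # Renormalize the chosen probabilities and mix expert outputs.
    weight_sum = sum(probs[i] for i in ranked)
    return sum(probs[i] / weight_sum * experts[i](token) for i in ranked)

# Four toy "experts" and one token's router scores (made-up values).
experts = [lambda x: x * 2, lambda x: x + 10, lambda x: x * x, lambda x: -x]
scores = [0.1, 2.0, 1.5, -1.0]
output = moe_forward(3.0, experts, scores)
```

The output is a weighted blend of just the two highest-scoring experts; the other two never run for this token.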
One of its key features is a context window of up to one million tokens, meaning it can process an entire book or thousands of pages of financial records in a single run.
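A back-of-envelope check of that claim, using assumed ratios that are not from the article (roughly 0.75 English words per token, and about 300 words per printed page):

```python
# Rough capacity check for a one-million-token context window.
# Both conversion ratios below are assumptions, not article figures.
context_tokens = 1_000_000
words_per_token = 0.75     # assumed average for English text
words_per_page = 300       # assumed for a typical printed page

words = context_tokens * words_per_token   # 750,000 words
pages = words / words_per_page             # 2,500 pages
```

Under those assumptions, one million tokens works out to roughly 2,500 pages, consistent with the article's "entire book or thousands of pages" framing.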
Nvidia has taken what might be called a middle road with this model.
Unlike OpenAI, which keeps its models closed, or Meta, which fully opens its Llama models, Nvidia will release the model’s key parameters publicly. Businesses and developers can download and run them for free, or adjust them to fit their own needs.
If Nvidia can hold onto its lead in hardware and grab 10% of the foundational model market, financial analysts say the move could bring in an extra $50 billion in yearly revenue within three years.
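The arithmetic behind that projection can be sketched as follows; note that the implied total market size is an inference from the article's two numbers, not a figure the analysts state:

```python
# Implied market size behind the analysts' projection.
# Inputs come from the article; the market-size result is inferred.
market_share = 0.10            # assumed 10% of the foundational model market
extra_revenue_b = 50           # $50 billion in added yearly revenue

implied_market_b = extra_revenue_b / market_share   # $500 billion per year
```

In other words, the projection only holds if the foundational model market itself reaches roughly half a trillion dollars a year within three years.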
Partners build out the hardware side
On the hardware deployment side, Nvidia does not build data centers itself. Partners, including Dell, Hewlett Packard Enterprise, and Foxconn, do that work.
Arthur Lewis, who heads infrastructure at Dell, said his company helped one customer set up 100,000 GPUs in just six weeks.
Meanwhile, NTT DATA announced plans to deploy what it calls “AI factories powered by Nvidia hardware”: complete configurations that integrate governance systems, software tools, infrastructure, and data.
The program makes use of Nvidia’s NeMo and NIM software tools in addition to its hardware.
Early customer results include a cancer research hospital that uses Nvidia platforms for radiology and diagnostics, a car parts supplier using Nvidia-powered cloud services to cut production setup time from months to days, and a U.S. manufacturer testing battery production lines with Nvidia-accelerated simulation.
Kari Briski, Nvidia’s vice president of enterprise generative AI software, noted that building these cutting-edge models puts enormous pressure on storage, networking, and computing systems, and that pressure helps shape the direction of future hardware.
CEO Jensen Huang described AI as fundamental infrastructure rather than a software fad.
“AI is one of the most powerful forces shaping the world today,” Huang stated. “It is not a clever app or a single model; it is essential infrastructure, like electricity and the internet.”
Huang described the AI stack in five layers: energy at the base, then chips, then physical infrastructure such as land and cooling systems, then AI models, and finally applications at the top, where, he said, actual economic value is created through things like drug discovery, industrial robots, and self-driving vehicles.

He acknowledged the build-out is still in early stages.
A few hundred billion dollars have been spent so far, but Huang said the total will require trillions, calling it potentially the largest infrastructure build-out in human history.
He added that AI models have recently crossed a key line, becoming reliable enough to be widely useful, and that open-source models are helping speed up adoption across the board.