Tech Giants Forge Their Own AI Chips to Slash Nvidia Dependence

Published: 2025-10-20 05:15:08

Big Tech firms are building their own AI chips to reduce dependence on Nvidia

Silicon rebellion brews as industry titans take semiconductor matters into their own hands.

The Great Decoupling

Massive tech corporations are pouring billions into custom AI processor development—cutting the cord on Nvidia's dominance. They're building proprietary silicon architectures specifically tuned for their AI workloads, bypassing the one-size-fits-all approach that's made Nvidia indispensable.

Architectural Arms Race

These aren't mere copycat designs. Companies are engineering chips from the ground up—optimizing for everything from cloud inference to edge computing. The move signals a fundamental shift in how tech giants view computational sovereignty.

Supply Chain Calculus

Beyond performance gains, this strategic pivot addresses deeper concerns about hardware availability and pricing power. When you control the silicon, you control your destiny—and your margins.

Investors are already placing bets on which of these fabless chip efforts will come out on top.

Big Tech turns inward with custom chip designs

While Nvidia’s GPUs still dominate the AI market, cloud providers are now designing their own chips with Broadcom and Marvell Technology.

These custom processors are cheaper, tuned for their software, and make it easier to control performance costs. Unlike Nvidia’s GPUs, the chips aren’t sold to outsiders; they’re used internally to run AI systems and offered to cloud clients at lower prices.

In a June research note, JPMorgan projected that chips from Google, Amazon, Meta, and OpenAI will make up 45% of the AI‑chip market by 2028, compared with 37% in 2024 and 40% in 2025. The rest of the market will remain with GPU producers such as Nvidia and AMD.

Jay Goldberg, an analyst at Seaport Research, said the hyperscalers are building custom silicon because “they don’t want to be stuck behind an Nvidia monopoly.” He added that Nvidia now has to “compete with its customers.”

That’s already happening. Google reportedly began selling its TPUs, or tensor processing units, to a cloud provider in September, a decision that puts it in direct competition with Nvidia.

Gil Luria, an analyst at DA Davidson, estimated Google’s TPU and DeepMind units could be worth $900 billion, calling them “arguably one of Alphabet’s most valuable businesses.” He wrote that Google’s chips “remain the best alternative to Nvidia, with the gap between the two closing significantly over the past nine to twelve months.”

Goldberg predicted “a lot of activity around custom silicon” by 2026, reflecting conversations throughout the AI chip supply chain. Analysts said Big Tech companies are progressing at different speeds.

Google began developing TPUs more than a decade ago and remains the leader. Amazon entered the space a year after Google launched its first TPU, buying Annapurna Labs in 2015 and releasing Trainium in 2020. Microsoft, which only launched its Maia AI chip in 2023, still lags its peers.

Analysts warn of slower growth for Nvidia

Developers often prefer Nvidia’s GPUs because of the software stack that comes with them, but analysts said the competition will still eat into profits. David Nicholson, from Futurum Group, said the margin risk is real.

“Over time, the margins that Nvidia can command right now get degraded,” Nicholson said. “It will be sort of death by a thousand cuts because you have all of these different custom silicon accelerators that exist because there’s such an opportunity.”

When asked about this in a September podcast, Jensen Huang downplayed the threat. The Nvidia CEO said, “We’re the only company in the world today that builds all of the chips inside an AI infrastructure.”

He argued that while rivals are building single chips, Nvidia produces complete systems combining Blackwell GPUs, Arm‑based CPUs, and networking units that work together across entire racks.

Not everyone on Wall Street shares that fear. Vivek Arya of Bank of America and Luria both said the rise of custom chips “doesn’t matter.” Arya said Nvidia keeps expanding the total market, noting that the company invested $47 billion into AI and “neocloud” startups from 2020 through September 2025, according to PitchBook data.

Luria added, “The growth and demand is so substantial. We’re going to need a lot more compute, and the AI models are getting more useful, which means the pie is gonna get a lot bigger.”

Luria said Nvidia may not grow as fast as the market itself, but will still expand as overall demand rises. Still, Goldberg warned that designing chips isn’t easy. “The drawback of doing your own silicon, though, is that it’s hard,” he said. “I think ultimately what will happen is not all of them will succeed.”

