OpenAI Poaches Intel’s CTO in Major Compute Infrastructure Power Play

OpenAI just made a move that'll send shockwaves through Silicon Valley—snagging Intel's chief technology officer to supercharge its compute infrastructure team.
Why it matters: This isn't just another hiring story. When AI's most aggressive player raids a semiconductor giant's brain trust, it's signaling an arms race for hardware dominance.
The cynical take: another "strategic hire" timed conveniently ahead of OpenAI's next funding round, because nothing juices a valuation like a bit of hardware theater for investors.
TLDRs:
- OpenAI recruits Intel CTO Sachin Katti to enhance compute infrastructure for AGI research.
- Intel CEO Tan assumes AI oversight amid executive departures and chip production struggles.
- Katti’s move highlights OpenAI’s aggressive hardware expansion beyond traditional GPU setups.
- Intel faces pressure to unify AI projects like Jaguar Shores after CTO exit.
OpenAI has secured Intel’s chief technology officer, Sachin Katti, as part of its strategy to bolster compute infrastructure for artificial general intelligence (AGI) research.
Katti, who assumed the role of Intel CTO and chief AI officer earlier this year, will now focus on designing and building systems to support OpenAI’s growing computational demands.
Intel CEO Lip-Bu Tan, who took the helm in March 2025, will personally oversee the company’s AI and advanced technologies groups following Katti’s departure. The move comes amid a broader reshuffling at Intel, where several top executives have exited since Tan’s appointment, raising questions about the company’s long-term AI ambitions.
Intel’s AI Challenges Intensify
Intel has struggled to produce data center AI chips that can compete with Nvidia’s dominance. Despite the widespread use of Intel processors in AI server systems, its specialized accelerators have lagged in sales and deployment efficiency.
The company’s Gaudi 3 AI accelerator missed 2024 revenue goals due to software and implementation complexities, while Falcon Shores has been scaled back from a commercial product to an internal engineering platform.
With Katti leaving, Intel faces a leadership gap in its next-generation AI projects, particularly Jaguar Shores, a rack-scale design leveraging the 18A node with high-bandwidth memory (HBM). This project aims to unify multiple IP blocks and collaborate with custom silicon partners to handle workloads ranging from edge devices to large-scale data centers.
Excited for the opportunity to work with @gdb, @sama and the @OpenAI team on building out the compute infrastructure for AGI! Very grateful for the tremendous opportunity and experience at Intel over the last 4 years leading networking, edge computing and AI. Privilege of a… https://t.co/TkyPrNYRkt
— Sachin Katti (@sk7037) November 10, 2025
OpenAI’s Aggressive Hardware Expansion
Katti’s transition signals OpenAI’s increasing investment in custom compute infrastructure beyond conventional GPUs.
The company’s Abilene hub is operational, with a planned expansion capable of delivering nearly 1 gigawatt of compute power. Additional U.S. sites could raise the network’s capacity to approximately 7 gigawatts, representing a potential $400 billion market over three years.
OpenAI is exploring broader hardware solutions, including advanced power delivery, liquid cooling, and high-speed interconnects. These efforts are expected to provide opportunities for suppliers and colocation operators to expand globally, with planned hubs in the United Arab Emirates, Norway, and Argentina, collectively requiring tens of thousands of construction and onsite workers.
Implications for the AI Industry
Katti’s move underscores a wider trend in AI: leading firms are aggressively recruiting hardware and compute experts to gain a competitive advantage.
For Intel, this departure intensifies pressure to deliver on high-profile AI projects while maintaining credibility with enterprise clients and data center operators. Meanwhile, OpenAI positions itself to accelerate AGI research with bespoke infrastructure tailored to massive-scale computation, potentially reshaping the competitive landscape.
The departure also reflects the growing complexity and cost of AI development. As AI models require more specialized and powerful hardware, companies that can integrate computing expertise with scalable infrastructure will have a clear edge.