
Former Intel CEO Pat Gelsinger Says Nvidia AI GPUs '10,000 Times More Expensive', Jensen Huang 'Got Lucky'

Former Intel CEO Pat Gelsinger admitted that Intel missed the AI boat.

(Photo source: X/@PGelsinger)

Pat Gelsinger, former CEO of Intel, has slammed Nvidia's current pricing strategy for its AI GPUs, which are used to train AI models. Gelsinger asserted that Nvidia's AI GPUs are 10,000 times more costly than what is required for AI inferencing workloads. Speaking on a podcast during Nvidia’s GTC 2025 in San Francisco, Gelsinger said the market will eventually come to understand Nvidia's dominance in AI GPUs.

"Today, if we think about the training workload, okay, but you have to give away something much more optimised for inferencing. You know a GPU is way too expensive; I argue it is 10,000 expensive to fully realise what we want to do with the deployment of inferencing for AI and then, of course, what's beyond that," Gelsinger said.

Jensen Huang ‘Got Lucky’

The former head of Team Blue hasn't minced words about Nvidia’s dominance in the past either, having previously called Nvidia and its CEO Jensen Huang “extraordinarily lucky.” During the podcast, he reiterated those claims.

“The CPU was the king of the hill, and I applaud Jensen for his tenacity in just saying, ‘No, I am not trying to build one of those; I am trying to deliver against the workload starting in graphics.’ And, then, he got lucky, right with AI,” Gelsinger remarked.

Why Intel Lagged In The AI Race

Gelsinger’s reference to the CPU harks back to the era, 15 to 20 years ago, when Intel’s CPUs dominated the computing industry. However, as AI advanced, GPUs took the lead, making Nvidia one of the most valuable companies in the world even as Intel struggled to play catch-up.

Gelsinger admitted that Intel missed the AI boat even though it was working on the Larrabee project. “I had a project that was well known in the industry called Larrabee and which was trying to bridge the programmability of the CPU with a throughput oriented architecture (of a GPU), and I think had Intel stayed on that path, you know, the future could have been different.”

Larrabee used the x86 instruction set architecture (ISA) with Larrabee-specific extensions, in contrast to AMD’s and Nvidia’s GPUs, which employ proprietary ISAs. Because Larrabee and its successor, the Xeon Phi processor, were built on a CPU ISA that, while ideal for general-purpose computing, did not scale well for graphics, AI, or high-performance computing, they didn’t gain traction and largely failed.
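For readers unfamiliar with the distinction Gelsinger draws between CPU-style programmability and a throughput-oriented GPU architecture, the sketch below illustrates it with a simple SAXPY operation written in CUDA. It is purely illustrative, assumes nothing about Larrabee's or Nvidia's actual implementations, and the function names are hypothetical: the CPU version is a single scalar loop, while the GPU version spreads the same arithmetic across thousands of lightweight threads.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Latency-oriented, CPU-style code: one instruction stream walks the whole array.
// This is the general-purpose programming model a CPU ISA is built around.
void saxpy_cpu(int n, float a, const float* x, float* y) {
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

// Throughput-oriented GPU code: the same arithmetic, but each of thousands of
// lightweight threads handles one element, hiding memory latency with parallelism.
__global__ void saxpy_gpu(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    // Unified memory keeps the sketch short; production code would manage copies explicitly.
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy_cpu(n, 2.0f, x, y);                           // y becomes 4.0 everywhere
    saxpy_gpu<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y); // y becomes 6.0 everywhere
    cudaDeviceSynchronize();

    printf("y[0] = %.1f\n", y[0]);  // expected: 6.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

Graphics and AI workloads are dominated by exactly this kind of massively parallel arithmetic, which is why throughput-oriented designs won out over approaches that prioritised single-thread generality.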

With the recent scrapping of its Falcon Shores GPUs for data centres, Intel’s push to build a more traditional GPU design for AI has, in a way, failed. Instead, Intel is now betting on its Jaguar Shores AI chip, which is likely to be available in 2026.
