Last week, The Information reported that Meta is in talks to buy billions of dollars' worth of Google's A.I. chips starting in 2027. The report sent Nvidia's stock sliding as investors worried that the company's decade-long dominance in A.I. computing hardware now faces a serious challenger.
Google formally launched its Ironwood TPU in early November. A TPU, or tensor processing unit, is an application-specific integrated circuit (ASIC) optimized for the kinds of math deep-learning models rely on. Unlike CPUs, which handle everyday computing tasks, or GPUs, which process graphics and now power machine learning, TPUs are purpose-built to run A.I. systems efficiently.
Ironwood's debut reflects a broader industry shift: workloads are moving from large, capital-intensive training runs to cost-sensitive, high-volume inference tasks that underpin everything from chatbots to agentic systems. That transition is reshaping the economics of A.I., favoring hardware like Ironwood that is designed for responsiveness and efficiency rather than brute-force training.
The TPU ecosystem is gaining momentum, although real-world adoption remains limited. Korean semiconductor giants Samsung and SK Hynix are reportedly expanding their roles as component manufacturers and packaging partners for Google's chips. In October, Anthropic announced plans to access up to one million TPUs from Google Cloud (not buying them, but effectively renting them) in 2026 to train and run future generations of its Claude models. The company will deploy them internally as part of its diversified compute strategy alongside Amazon's Trainium custom ASICs and Nvidia GPUs.
Analysts describe this moment as Google's "A.I. comeback." "Nvidia is unable to meet the A.I. demand, and alternatives from hyperscalers like Google and semiconductor companies like AMD are viable in terms of cloud services or local A.I. infrastructure. It's simply customers finding ways to achieve their A.I. ambitions and avoiding vendor lock-in," Alvin Nguyen, a senior Forrester analyst specializing in semiconductor research, told Observer.
These shifts illustrate a broader push across Big Tech to reduce reliance on Nvidia, whose GPU prices and limited availability have strained cloud providers and A.I. labs. Nvidia still supplies Google with Blackwell Ultra GPUs, such as the GB300, for its cloud and data center workloads, but Ironwood now offers one of the first credible paths to greater independence.
Google started creating TPUs in 2013 to deal with rising A.I. workloads inside information facilities extra effectively than GPUs. The primary chips went dwell internally in 2015 for inference duties earlier than increasing to coaching with TPU v2 in 2017.
Ironwood now powers Google's Gemini 3 model, which sits at the top of benchmark leaderboards in multimodal reasoning, text generation and image editing. On X, Salesforce CEO Marc Benioff called Gemini 3's leap "insane," while OpenAI CEO Sam Altman said it "looks like a great model." Nvidia also praised Google's progress, noting it was "delighted by Google's success" and would continue supplying chips to the company, though it added that its own GPUs still offer "greater performance, versatility and fungibility than ASICs" like those made by Google.
Nvidia's dominance under pressure
Nvidia still controls more than 90 percent of the A.I. chip market, but the pressure is mounting. Nguyen said Nvidia will likely lead the next phase of competition in the near term, but that long-term leadership is likely to be more distributed.
"Nvidia has 'golden handcuffs': they're the face of A.I., but they're being forced to keep pushing the state of the art in terms of performance," he said. "Semiconductor processes need to keep improving, software advances need to keep happening, etc. This keeps them delivering high-margin products, and they will be pressured to abandon less profitable products/markets. This will give competitors the ability to grow their shares in the abandoned areas."
Meanwhile, AMD continues to gain ground. The company is already well positioned for inference workloads, updates its hardware on the same annual cadence as Nvidia, and delivers performance that is on par with, or slightly ahead of, comparable Nvidia products. Google's latest A.I. chips also claim performance and scale advantages over Nvidia's current hardware, though slower release cycles could shift the balance over time.
Google may not dethrone Nvidia anytime soon, but it has forced the industry to consider a more pluralistic future, one where a vertically integrated TPU-Gemini stack competes head-to-head with the GPU-driven ecosystem that has defined the past decade.