- AI teams still favor Nvidia, but rivals like Google, AMD and Intel are growing their share
- Survey shows budget limits, power demands, and cloud reliance are shaping AI hardware choices
- GPU shortages push workloads to the cloud while efficiency and testing remain overlooked
AI hardware spending is starting to evolve as teams weigh performance, financial considerations, and scalability, new research has claimed.
Liquid Web’s latest AI hardware study surveyed 252 experienced AI professionals, and found that while Nvidia remains comfortably the most-used hardware supplier, its rivals are increasingly gaining traction.
Almost a third of respondents reported using alternatives such as Google TPUs, AMD GPUs, or Intel chips for at least some part of their workloads.
The pitfalls of skipping due diligence
The sample size is admittedly small, so it doesn’t capture the full scale of global adoption, but the results do show a clear shift in how teams are starting to think about infrastructure.
A single team can deploy hundreds of GPUs, so even limited adoption of non-Nvidia options can make a big difference to the hardware footprint.
Nvidia is still preferred by over two-thirds (68%) of surveyed teams, and many buyers don’t rigorously compare alternatives before deciding.
About 28% of those surveyed admitted to skipping structured evaluations, and in some cases that lack of testing led to mismatched infrastructure and underpowered setups.
“Our research shows that skipping due diligence leads to delayed or canceled projects – a costly mistake in a fast-moving industry,” said Ryan MacDonald, CTO at Liquid Web.
Familiarity and past experience are among the strongest drivers of GPU choice. Forty-three percent of participants cited these factors, compared with 35% who valued cost and 37% who relied on performance testing.
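A structured evaluation does not have to be elaborate. As a rough illustration (not something drawn from the Liquid Web study), a short PyTorch script like the sketch below can time the same workload on each candidate backend before committing to hardware; the matrix size, iteration count, and device list are illustrative assumptions.

```python
# Minimal sketch of a structured hardware evaluation: time the same matmul
# workload on whichever accelerators PyTorch can see. Matrix size, iteration
# count, and the device list are illustrative assumptions, not survey figures.
import time
import torch

def benchmark_matmul(device: str, size: int = 4096, iters: int = 20) -> float:
    """Return average seconds per matmul of two (size x size) matrices."""
    dev = torch.device(device)
    a = torch.randn(size, size, device=dev)
    b = torch.randn(size, size, device=dev)
    # Warm-up so one-off kernel setup doesn't skew the timing
    torch.matmul(a, b)
    if dev.type == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        torch.matmul(a, b)
    if dev.type == "cuda":
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / iters

if __name__ == "__main__":
    devices = ["cpu"]
    if torch.cuda.is_available():  # covers Nvidia CUDA and AMD ROCm builds of PyTorch
        devices.append("cuda")
    for d in devices:
        print(f"{d}: {benchmark_matmul(d) * 1000:.1f} ms per matmul")
```

Even a crude comparison like this, run on representative workloads rather than a bare matmul, would catch the kind of mismatched or underpowered setups the survey describes.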
Budget limitations also weigh heavily, with 42% scaling back projects and 14% canceling them entirely due to hardware shortages or costs.
Hybrid and cloud-based setups are becoming commonplace. More than half of respondents said they use both on-premises and cloud systems, and many expect to increase cloud spending as the year goes on.
Dedicated GPU hosting is seen by some as a way of avoiding the performance losses that come with shared or fractionalized hardware.
Energy use remains a challenge. While 45% acknowledged efficiency as important, only 13% actively optimized for it. Many also reported power, cooling, and supply chain setbacks.
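For teams that want even a baseline before optimizing, polling the GPU’s reported power draw is a simple starting point. The sketch below is a minimal, Nvidia-specific illustration using the standard nvidia-smi query interface; the sampling interval and duration are arbitrary assumptions, and AMD and Intel parts expose similar counters through their own tools.

```python
# Rough sketch of tracking GPU power draw during a job by polling nvidia-smi.
# Nvidia-specific; assumes nvidia-smi is on PATH. The sampling interval and
# duration are arbitrary choices for illustration.
import subprocess
import time

def sample_power_watts() -> list[float]:
    """Read the current power draw (watts) of each visible Nvidia GPU."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=power.draw", "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [float(line) for line in out.strip().splitlines()]

def estimate_energy(duration_s: int = 60, interval_s: float = 1.0) -> float:
    """Poll power draw for duration_s seconds and return an approximate
    energy figure in watt-hours, summed over all GPUs."""
    total_wh = 0.0
    end = time.time() + duration_s
    while time.time() < end:
        total_wh += sum(sample_power_watts()) * interval_s / 3600.0
        time.sleep(interval_s)
    return total_wh

if __name__ == "__main__":
    print(f"~{estimate_energy(duration_s=10):.3f} Wh over a 10-second sample")
```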
While Nvidia continues to dominate the market, it’s clear that the competition is closing the gap. Teams are finding that balancing cost, efficiency, and reliability is just as important as raw performance when building AI infrastructure.