- HPE will ship 72-GPU racks with next-generation AMD Instinct accelerators globally
- Venice CPUs paired with GPUs target exascale-level AI performance per rack
- Helios relies on liquid cooling and a double-wide chassis for thermal management
HPE has announced plans to integrate AMD's Helios rack-scale AI architecture into its product lineup beginning in 2026.
The collaboration gives Helios its first major OEM partner and positions HPE to ship full 72-GPU AI racks built around AMD's next-generation Instinct MI455X accelerators.
These racks will pair the GPUs with EPYC Venice CPUs and use an Ethernet-based scale-up fabric developed with Broadcom.
Rack architecture and performance targets
The move creates a clear commercial route for Helios and puts the architecture in direct competition with Nvidia's rack-scale platforms already in service.
The Helios reference design is based on Meta's Open Rack Wide standard.
It uses a double-wide, liquid-cooled chassis to accommodate the MI450-series GPUs, Venice CPUs, and Pensando networking hardware.
AMD targets up to 2.9 exaFLOPS of FP4 compute per rack with the MI455X generation, alongside 31TB of HBM4 memory.
The system presents every GPU as part of a single pod, which lets workloads span all accelerators without local bottlenecks.
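Those rack-level figures imply rough per-GPU numbers. A quick back-of-envelope sketch (the per-GPU division is our own arithmetic from the quoted rack totals, not an AMD-stated per-accelerator spec, and it ignores any CPU or fabric overhead):

```python
# Back-of-envelope check of AMD's quoted Helios rack targets:
# 72 GPUs per rack, 2.9 exaFLOPS FP4, 31 TB HBM4 (all rack-level figures).
GPUS_PER_RACK = 72
RACK_FP4_EXAFLOPS = 2.9
RACK_HBM4_TB = 31

# Implied per-GPU figures (decimal units: 1 exaFLOP = 1000 PFLOPS, 1 TB = 1000 GB)
fp4_pflops_per_gpu = RACK_FP4_EXAFLOPS * 1000 / GPUS_PER_RACK
hbm4_gb_per_gpu = RACK_HBM4_TB * 1000 / GPUS_PER_RACK

print(f"~{fp4_pflops_per_gpu:.1f} PFLOPS FP4 per GPU")  # roughly 40 PFLOPS
print(f"~{hbm4_gb_per_gpu:.0f} GB HBM4 per GPU")        # roughly 430 GB
```

That works out to on the order of 40 PFLOPS of FP4 and a little over 400 GB of HBM4 per accelerator, which gives a sense of how dense each MI455X node would be.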
A purpose-built HPE Juniper switch supporting Ultra Accelerator Link over Ethernet forms the high-bandwidth GPU interconnect.
It offers an alternative to Nvidia's NVLink-centric approach.
The High-Performance Computing Center Stuttgart has chosen HPE's Cray GX5000 platform for its next flagship system, named Herder.
Herder will use MI430X GPUs and Venice CPUs across direct liquid-cooled blades and will replace the current Hunter system in 2027.
HPE said that waste heat from the GX5000 racks will warm campus buildings, reflecting environmental considerations alongside performance targets.
AMD and HPE plan to make Helios-based systems globally available next year, expanding access to rack-scale AI hardware for research institutions and enterprises.
Helios uses an Ethernet fabric to connect GPUs and CPUs, in contrast with Nvidia's NVLink approach.
The use of Ultra Accelerator Link over Ethernet and Ultra Ethernet Consortium-aligned hardware supports scale-out designs within an open-standards framework.
Although this approach allows GPU counts theoretically comparable to other high-end AI racks, performance under sustained multi-node workloads remains untested.
Moreover, reliance on a single Ethernet layer could introduce latency or bandwidth constraints in real applications.
In short, these specifications don't predict real-world performance, which will depend on effective cooling, network traffic handling, and software optimization.
Via Tom's Hardware
