- Sandisk and SK Hynix propose flash-based high-bandwidth memory to handle larger AI models
- High Bandwidth Flash could store far more data than DRAM-based HBM for AI workloads
- Energy savings from NAND's non-volatility could reshape AI data center cooling strategies
Sandisk and SK Hynix have signed an agreement to develop a memory technology that could change how AI accelerators handle data at scale.
The companies aim to standardize "High Bandwidth Flash" (HBF), a NAND-based alternative to the traditional high-bandwidth memory used in AI GPUs.
The concept builds on packaging designs similar to HBM while replacing part of the DRAM stack with flash, trading some latency for vastly increased capacity and non-volatility.
AI memory stacks to handle larger models at lower power demands
This approach lets HBF offer between eight and sixteen times the capacity of DRAM-based HBM at roughly comparable cost.
NAND's ability to retain data without constant power also brings potential energy savings, an increasingly important factor as AI inference expands into environments with strict power and cooling limits.
For hyperscale operators running large models, the switch could help ease both the thermal and budget constraints that are already straining data center operations.
The plan aligns with a research concept titled "LLM in a Flash," which outlined how large language models could run more efficiently by incorporating SSDs as an additional memory tier, relieving pressure on DRAM.
HBF essentially integrates that logic into a single high-bandwidth package, potentially combining the storage scale of the largest SSDs with the speed profile needed for AI workloads.
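The tiering idea itself is simple to sketch in software. The minimal Python illustration below is hypothetical (it is not Sandisk's or SK Hynix's interface, nor code from the "LLM in a Flash" paper): ordinary files on disk stand in for the flash tier, and a small least-recently-used cache stands in for the DRAM budget, so only the layers currently in use occupy fast memory. All names, sizes, and the cache policy are assumptions for illustration.

```python
import numpy as np
from collections import OrderedDict
from pathlib import Path

# Stand-in for the flash tier: each layer's weights live in a file on disk
# instead of being held permanently in DRAM.
FLASH_DIR = Path("weights_flash")   # hypothetical weight store
FLASH_DIR.mkdir(exist_ok=True)

NUM_LAYERS = 8
HIDDEN = 256
DRAM_CACHE_LAYERS = 2               # assumed DRAM budget: two layers resident at a time

# One-time setup: write random "model weights" to the flash tier.
for i in range(NUM_LAYERS):
    path = FLASH_DIR / f"layer_{i}.npy"
    if not path.exists():
        np.save(path, np.random.randn(HIDDEN, HIDDEN).astype(np.float32))

class TieredWeightStore:
    """LRU cache of layer weights in DRAM, backed by the flash tier."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.cache: OrderedDict[int, np.ndarray] = OrderedDict()

    def get(self, layer: int) -> np.ndarray:
        if layer in self.cache:                               # DRAM hit
            self.cache.move_to_end(layer)
            return self.cache[layer]
        weights = np.load(FLASH_DIR / f"layer_{layer}.npy")   # fetch from flash
        self.cache[layer] = weights
        if len(self.cache) > self.capacity:                   # evict least recently used
            self.cache.popitem(last=False)
        return weights

def forward(x: np.ndarray, store: TieredWeightStore) -> np.ndarray:
    # Only the layers currently needed occupy DRAM; the rest stay on flash.
    for layer in range(NUM_LAYERS):
        x = np.tanh(x @ store.get(layer))
    return x

store = TieredWeightStore(DRAM_CACHE_LAYERS)
out = forward(np.random.randn(1, HIDDEN).astype(np.float32), store)
print(out.shape)  # (1, 256)
```

HBF's pitch is to collapse this kind of two-tier arrangement into a single high-bandwidth package, so the capacity of the flash tier sits directly alongside the GPU rather than behind a storage interface.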
Sandisk presented its HBF prototype at the Flash Memory Summit 2025, using proprietary BiCS NAND and wafer-bonding techniques.
Sample modules are expected in the second half of 2026, with the first AI hardware using HBF projected for early 2027.
No specific product partnerships have been disclosed, but SK Hynix's position as a major memory supplier to leading AI chipmakers, including Nvidia, could accelerate adoption once standards are finalized.
The move also comes as other manufacturers explore similar ideas.
Samsung has announced flash-backed AI storage tiers and continues to develop HBM4 DRAM, while companies like Nvidia remain committed to DRAM-heavy designs.
If successful, the Sandisk and SK Hynix collaboration could lead to heterogeneous memory stacks in which DRAM, flash, and other persistent storage types coexist.
Via Tom's Hardware