At CES 2026, AMD CEO Lisa Su used the industry's biggest stage to lay out where the next era of A.I. is headed. The A.I. industry, she said during her keynote yesterday (Jan. 5), is entering the era of "yotta-scale computing," driven by unprecedented growth in both training and inference. The constraint, Su argued, is no longer the model itself but the computational foundation beneath it.
"Since the launch of ChatGPT a few years ago, we've gone from about a million people using A.I. to more than a billion active users," Su said. "We see A.I. adoption growing to over 5 billion active users as it becomes indispensable to every part of our lives, just like the cell phone and the internet today."
Global A.I. compute capacity, she noted, is now on a path from zettaflops toward yottaflops within the next five years. A yottaflop is 1 followed by 24 zeros. "Ten yottaflops is 10,000 times more computing power than we had in 2022. There has never been anything like this in the history of computing, because there has never been a technology like A.I.," Su said.
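The figures are internally consistent and easy to check. A minimal sketch in Python, using only the numbers Su cited (the 2022 baseline is implied by her 10,000x comparison, not stated directly):

```python
# A yottaflop is 10^24 floating-point operations per second.
yottaflop = 10**24
ten_yottaflops = 10 * yottaflop          # the scale Su projects

# If 10 yottaflops is 10,000x the 2022 figure, the implied
# 2022 baseline works out to one zettaflop (10^21 FLOPS).
implied_2022_compute = ten_yottaflops / 10_000
print(implied_2022_compute == 10**21)    # True
```

In other words, the keynote's comparison places 2022's global A.I. compute at roughly the zettaflop mark, the low end of the zettaflop-to-yottaflop path she described.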
Yet Su cautioned that the industry still lacks the computing power required to support what A.I. will ultimately enable. AMD's response, she said, is to build the foundation end-to-end, positioning the company as an architect of the next A.I. phase rather than a supplier of isolated components.
That strategy centers on Helios, a rack-scale data center platform designed for trillion-parameter A.I. training and large-scale inference. A single Helios rack delivers up to three A.I. exaflops, integrating Instinct MI455X accelerators, EPYC "Venice" CPUs, Pensando networking and the ROCm software ecosystem. The emphasis is on durability at scale, with systems built to grow alongside A.I. workloads rather than locking customers into closed, short-lived architectures.
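The three-exaflop figure gives a rough sense of what yotta-scale means in hardware terms. A hypothetical back-of-envelope calculation, assuming the per-rack figure from the keynote and taking one yottaflop as the target (the resulting rack count is an illustration, not an AMD figure):

```python
exaflop = 10**18
yottaflop = 10**24
exaflops_per_rack = 3                     # Helios figure from the keynote

# Racks needed to reach one yottaflop of aggregate A.I. compute.
racks = yottaflop / (exaflops_per_rack * exaflop)
print(f"{racks:,.0f}")                    # roughly 333,333 racks
```

Even a single yottaflop, by this arithmetic, implies hundreds of thousands of today's densest racks, which is why Su frames the coming era as a buildout problem rather than a model problem.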
AMD also previewed the Instinct MI500 Series, slated for launch in 2027. Built on the next-generation CDNA 6 architecture, the roadmap targets up to a thousandfold increase in A.I. performance compared with the MI300X GPUs launched in 2023.
Su stressed that yotta-scale computing will not be confined to data centers. A.I., she said, is becoming a local, everyday technology for billions of users. AMD announced an expansion of its on-device A.I. push with Ryzen AI Max+ platforms, capable of supporting models with up to 128 billion parameters using unified memory.
Beyond commercial products, Su tied AMD's roadmap to public-sector priorities. Joined on stage by Michael Kratsios, President Trump's science and technology advisor, who is slated to speak at CES later this week, she discussed the U.S. government's Genesis Mission, a public-private initiative aimed at strengthening national A.I. leadership. As part of that effort, the AMD-powered supercomputers Lux and Discovery are coming online at Oak Ridge National Laboratory, reinforcing the company's role in scientific discovery and national infrastructure.
The keynote closed with a $150 million commitment to A.I. education, aligned with the U.S. A.I. Literacy Pledge, signaling that, in AMD's view, sustaining yotta-scale ambition will depend as much on talent development as on silicon.

