- Huawei makes its CANN AI GPU toolkit open source to challenge Nvidia's proprietary CUDA platform
- CUDA's nearly 20-year dominance has locked developers exclusively into Nvidia's hardware ecosystem
- CANN provides multi-layer programming interfaces for AI applications on Huawei's Ascend AI GPUs
Huawei has announced plans to make its CANN software toolkit for Ascend AI GPUs open source, a move aimed squarely at challenging Nvidia's long-standing CUDA dominance.
CUDA, often described as a closed-off "moat" or "swamp," has for years been seen by some as a barrier for developers seeking cross-platform compatibility.
Its tight integration with Nvidia hardware has locked developers into a single-vendor ecosystem for nearly two decades, and the company has blocked efforts to bring CUDA functionality to other GPU architectures through translation layers.
Opening up CANN to developers
CANN, short for Compute Architecture for Neural Networks, is Huawei's heterogeneous computing framework designed to help developers create AI applications for its Ascend AI GPUs.
The architecture offers multiple programming layers, giving developers options for building both high-level and performance-intensive applications.
In many ways, it is Huawei's equivalent to CUDA, but the decision to open its source code signals an intent to develop an alternative ecosystem without the restrictions of a proprietary model.
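To give a sense of what the lower-level layer looks like in practice, here is a minimal sketch using pyACL, the Python binding CANN exposes over its AscendCL runtime API. The function names follow Huawei's public pyACL documentation, but exact signatures and return conventions can vary between CANN releases, so treat this as illustrative rather than definitive.

```python
# Minimal device-lifecycle sketch using pyACL (CANN's Python binding over AscendCL).
# Assumes a CANN toolkit install that provides the `acl` module; names follow
# Huawei's public pyACL documentation and may differ across CANN versions.
import acl

DEVICE_ID = 0

# Initialize the AscendCL runtime (an optional JSON config path can be passed).
ret = acl.init()
assert ret == 0, f"acl.init failed with error code {ret}"

# Bind this process to an Ascend device and create an execution context.
ret = acl.rt.set_device(DEVICE_ID)
assert ret == 0, f"set_device failed with error code {ret}"
context, ret = acl.rt.create_context(DEVICE_ID)
assert ret == 0, f"create_context failed with error code {ret}"

# ... load an offline model and launch inference here ...

# Release resources in reverse order of acquisition.
acl.rt.destroy_context(context)
acl.rt.reset_device(DEVICE_ID)
acl.finalize()
```

Higher-level layers sit on top of this runtime, so most application developers would not touch these calls directly; the point is that opening the full stack lets the community inspect and extend every layer.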
Huawei has reportedly already begun discussions with major Chinese AI players, universities, research institutions, and business partners about contributing to an open-sourced Ascend development community.
This outreach could help accelerate the creation of optimized tools, libraries, and AI frameworks for Huawei's GPUs, potentially making them more attractive to developers who currently rely on Nvidia hardware.
Huawei's AI hardware performance has been improving steadily, with claims that certain Ascend chips can outperform Nvidia processors under specific conditions.
Reports such as CloudMatrix 384 benchmark results against Nvidia hardware running DeepSeek R1 suggest that Huawei's performance trajectory is closing the gap.
However, raw performance alone won't guarantee developer migration without equivalent software stability and support.
While open-sourcing CANN could be exciting for developers, its ecosystem is still in its early stages and may not come close to matching CUDA, which has been refined for nearly 20 years.
Even with open-source status, adoption may depend on how well CANN supports existing AI frameworks, particularly for growing workloads in large language models (LLMs) and AI writing tools.
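Framework support is already partly in place: Huawei ships an Ascend backend for PyTorch via the torch_npu extension, which registers an "npu" device type. The sketch below, assuming torch and torch_npu are installed on a machine with an Ascend accelerator, shows what that integration looks like from a developer's point of view.

```python
# Sketch of running an existing framework (PyTorch) on Ascend hardware through the
# torch_npu extension, which registers the "npu" device type with PyTorch.
# Assumes torch and torch_npu are installed and an Ascend NPU is present.
import torch
import torch_npu  # registers the "npu" backend as a side effect of import

device = "npu:0" if torch.npu.is_available() else "cpu"

# A tiny model moved to the Ascend device, much as one would do with CUDA.
model = torch.nn.Linear(512, 128).to(device)
x = torch.randn(32, 512, device=device)
y = model(x)
print(y.shape, y.device)
```

The closer this experience is to the familiar CUDA workflow, the lower the switching cost for developers weighing a move to Ascend.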
Huawei's decision could have implications beyond developer convenience: open-sourcing CANN aligns with China's broader push for technological self-sufficiency in AI computing and reduces dependence on Western chipmakers.
In the current environment, where U.S. restrictions target Huawei's hardware exports, building a robust domestic software stack for AI tools becomes as critical as improving chip performance.
If Huawei can successfully foster a vibrant open-source community around CANN, it could present the first serious alternative to CUDA in years.
However, the challenge lies not just in code availability, but in building trust, documentation, and compatibility at the scale Nvidia has achieved.
Via Tom's Hardware