
Huawei just dropped a big AI move at its annual Huawei Connect 2025 conference in Shanghai, and it's all about compute power. The tech giant announced its brand-new SuperPoD Interconnect system, designed to link together as many as 15,000 graphics cards, including Huawei's own Ascend AI chips.
If that sounds familiar, it's because Huawei is positioning this as a rival to Nvidia's NVLink, the interconnect that lets Nvidia's GPUs "talk" to each other at ultra-high speeds. Huawei's individual AI chips aren't as powerful as Nvidia's top-tier GPUs, but clustering them in massive numbers could still give customers the compute they need to train and scale large AI models.
The timing of this announcement is just as important as the tech itself. It comes one day after China officially banned domestic companies from buying Nvidia hardware, including the RTX Pro 6000D servers Nvidia had customized specifically for the Chinese market. That move effectively cut Chinese firms off from Nvidia's high-performance GPUs, and Huawei is clearly stepping into that gap.
Huawei knows it can’t yet match Nvidia chip-for-chip. But if it can deliver scalable infrastructure that connects thousands of Ascend chips into one massive compute engine, it may not need to. For Chinese AI labs suddenly locked out of Nvidia’s supply chain, Huawei’s SuperPoD Interconnect could be the homegrown alternative they’ve been waiting for.