Open-source AI development dominated conversation at Huawei Connect 2025 last week, as the company spelled out implementation schedules and technical details for making its entire AI software stack publicly available by the end of the year. The announcements carried clear signals for developers: candid recognition of past friction, concrete promises about which components will be released, and notes on how the software will fit into existing workflows and operating systems.
Eric Xu, Huawei’s Deputy Chairman and Rotating Chairman, opened his keynote with uncommon candor about challenges developers have faced with Ascend infrastructure. Referencing the impact of DeepSeek-R1’s release earlier this year, Xu noted: “Between January and April 30, our AI R&D teams worked closely to make sure that the inference capabilities of our Ascend 910B and 910C chips can keep up with customer needs.” He relayed direct customer feedback as well: “Our customers have raised many issues and expectations they’ve had with Ascend. And they keep giving us great suggestions.”
That admission framed the broad open-source commitments first announced at the August 5, 2025 Ascend Computing Industry Development Summit and reinforced by Xu at Huawei Connect. For developers who have run into problems with Ascend tooling, documentation, or the maturity of the ecosystem, the frank assessment signals awareness of gaps between raw technical ability and day-to-day usability. Huawei’s open-source push is presented as a route for community contributions, greater transparency, and outside improvements.
The most technically significant pledge concerns CANN (Compute Architecture for Neural Networks), the core toolkit that mediates between AI frameworks and Ascend silicon. At the August summit Xu said: “For CANN, we will open interfaces for the compiler and virtual instruction set, and fully open-source other software.” That phrasing describes a tiered release strategy that separates parts getting full open-source treatment from parts where Huawei will expose interfaces while keeping some implementations proprietary. The compiler and virtual instruction set — the translation layers that turn high-level model code into instructions the chips execute — will have open interfaces. Developers gain visibility into how code is lowered toward Ascend processors, even if the compiler implementation itself remains partly closed.
That distinction matters when teams tune latency-sensitive workloads or try to squeeze maximum efficiency from the hardware. Open interfaces let developers inspect the translation steps; a fully open compiler would go further, permitting alternative implementations or deep modifications. Huawei’s approach promises transparency useful for optimization while keeping parts of the implementation proprietary.
The company set a firm date for this work: “We will go open source and open access with CANN (based on existing Ascend 910B/910C design) by December 31, 2025.” The parenthetical makes clear that the release will reflect the current Ascend 910B/910C generation rather than future chips.
Outside the foundational CANN layer, Huawei pledged to open-source the components developers touch most often. Xu told the audience: “For our Mind series application enablement kits and toolchains, we will go fully open-source by December 31, 2025,” echoing the commitment made on August 5, 2025. The Mind series covers the practical developer surface — SDKs, libraries, debugging tools, profilers, and utilities used to build and ship AI applications. Where CANN follows a mixed approach, the Mind series is slated for blanket open-source release.
That would let the community inspect, modify, and extend the full application-layer toolchain. Debugging tools could gain missing features, libraries could be optimized for particular workloads, and helper utilities could be wrapped in friendlier interfaces. The development environment is positioned to evolve through community contributions rather than waiting on vendor-only updates.
Yet the announcement left several implementation specifics unspecified. Huawei did not name every tool that makes up the Mind series, map out supported programming languages, or commit to the depth of documentation that will accompany the code. Teams deciding whether to spend development time on Ascend will need to judge the toolchain’s completeness once the December release appears.
Huawei also committed to “fully open-source our openPangu foundation models.” That move places Huawei in the same broad open foundation model landscape as initiatives such as Meta’s Llama series and Mistral AI. Still, details about openPangu were thin: the company did not disclose parameter counts, training datasets, fine-tuning pathways, evaluation metrics, or licensing conditions. Questions remain about commercial usage restrictions, the provenance of training data, bias and safety characteristics, and redistribution terms. Those topics matter for organizations that plan to build domain-specific products on top of foundation models.
Open-source foundation models give developer teams a ready starting point without needing the massive compute budgets necessary to train from scratch. Model quality, license flexibility, and available documentation will determine how useful openPangu is in practice. The December release will show whether those models are viable alternatives to existing open-source options.
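How usable the models are will show up quickly in the first evaluation step: whether standard tooling can load them at all. The sketch below assumes, purely as an illustration, that the openPangu weights will ship in a Hugging Face-compatible format — Huawei has not confirmed the distribution format, model IDs, or license. `AutoTokenizer` and `AutoModelForCausalLM` are real `transformers` APIs; the model ID is a placeholder.

```python
# Minimal evaluation loader, assuming a Hugging Face-compatible release
# (an assumption, not an announced fact). Fails soft so an evaluation
# harness can report "unavailable" instead of crashing.
def load_candidate(model_id):
    try:
        from transformers import AutoModelForCausalLM, AutoTokenizer
        tok = AutoTokenizer.from_pretrained(model_id)
        model = AutoModelForCausalLM.from_pretrained(model_id)
        return model, tok
    except Exception:
        # Library missing or checkpoint unavailable in this environment.
        return None, None

# "example-org/openPangu-placeholder" is a hypothetical ID for illustration.
model, tok = load_candidate("example-org/openPangu-placeholder")
print(model is None)
```

If the December release instead uses a bespoke format, this first step alone would add porting work to any evaluation plan — which is why the distribution details matter as much as the weights.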
One concrete detail that addresses an adoption barrier surfaced at Huawei Connect 2025: operating system compatibility. Huawei announced that “Huawei has made the entire UB OS Component open-source, so that its code can be integrated into upstream open-source OS communities like openEuler.” The company added practical notes about integration: “Users can integrate part or all of the UB OS Component’s source code into their existing OSes, to support independent iteration and version maintenance. Users can also embed the entire component into their existing OSes as a plug-in to ensure it can evolve in-step with open-source communities.”
A modular approach means organizations running Ubuntu, Red Hat Enterprise Linux, or other distributions won’t have to switch wholesale to a Huawei-branded operating system. The UB OS Component, which handles SuperPod interconnect management at the operating system level, can be merged into existing environments. For developers and system administrators, that should lower deployment friction for hardware and cluster setups that use Ascend chips.
That flexibility comes with obligations. Organizations that pull UB OS Component source code into their own stacks become responsible for testing, maintenance, and applying updates. Huawei is releasing the component as open-source code rather than promising full vendor support for arbitrary Linux distributions. The model suits groups with strong Linux skills; teams expecting a turnkey, fully supported product from day one may run into challenges.
Framework compatibility may be the single biggest factor in whether developers choose to adopt Ascend infrastructure. Rather than forcing teams to give up familiar tooling, Huawei said it “has been prioritising support for open-source communities like PyTorch and vLLM to help developers independently innovate.” PyTorch compatibility is particularly significant because that framework dominates AI research and many production deployments. If teams can run standard PyTorch code effectively on Ascend hardware without large rewrites, the path to evaluation and early experiments becomes far simpler.
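In practice, "running standard PyTorch code" usually comes down to device selection. The sketch below assumes the publicly documented `torch_npu` adapter (Huawei's PyTorch plugin for Ascend, which registers an `"npu"` backend); whether the open-sourced stack keeps exactly this interface is an assumption. The probe degrades to CUDA or CPU, so evaluation code need not fork per vendor.

```python
# Device-selection sketch, assuming the `torch_npu` Ascend adapter.
# Degrades gracefully when PyTorch or the adapter is absent.
def pick_device():
    try:
        import torch
    except ImportError:
        return "cpu"  # no PyTorch installed; nothing to probe
    try:
        import torch_npu  # noqa: F401  # assumption: registers torch.npu
        if torch.npu.is_available():
            return "npu"
    except (ImportError, AttributeError):
        pass  # adapter missing, or it exposes a different API
    if torch.cuda.is_available():
        return "cuda"
    return "cpu"

device = pick_device()
print(device)
```

A tensor created with `torch.zeros(4, device=device)` then lands on whichever backend was found; the closer Ascend support stays to this one-string change, the simpler early experiments become.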
The vLLM work targets optimized inference for large language models, a high-demand use case as organizations put LLM-based services into production. If Ascend can deliver competitive inference performance and cost profiles via native vLLM support, that will address a key practical concern for adopters. The announcements did not provide a complete picture of integration fidelity. Partial PyTorch support that requires workarounds or produces subpar performance for some operations could be more frustrating than useful. The real test will be whether framework integrations reduce barriers to use or create new interoperability headaches.
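The evaluation path for the vLLM work would look something like the sketch below. `LLM` and `SamplingParams` are vLLM's actual entry points; the model name is a placeholder, not an announced checkpoint, and whether Ascend backends plug in transparently at `generate()` is exactly what the announcement leaves untested.

```python
# Hedged sketch of a vLLM evaluation path; only generate() touches hardware,
# so prompt preparation stays backend-independent.
def build_prompt_batch(user_queries):
    # Simple prompt templating; any format works here.
    return [f"Question: {q}\nAnswer:" for q in user_queries]

prompts = build_prompt_batch(["What does CANN do?", "What is the Mind series?"])

try:
    from vllm import LLM, SamplingParams
    llm = LLM(model="example-org/openPangu-placeholder")  # hypothetical ID
    outputs = llm.generate(prompts, SamplingParams(max_tokens=64))
except Exception:
    # vLLM absent, or no checkpoint available in this environment.
    outputs = None

print(len(prompts))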
The December 31, 2025 date for open-sourcing CANN, the Mind series, and openPangu models sits roughly three months after Huawei Connect. Such a near-term deadline implies much of the preparatory work is already under way: internal dependencies are likely being stripped from the code, documentation drafts prepared, licensing choices finalized, and repository infrastructure set up.
Release quality will drive early community response. Open-source projects that arrive with sparse documentation, few examples, missing features, or immature tooling often struggle to attract sustained contributors regardless of the underlying technology. Developer teams evaluating new platforms need clear learning resources, sample workflows that move from “Hello World” to production, and straightforward instructions for reproducing key results. The December publication is a starting point, not an end state.
Long-term project health depends on ongoing investment beyond the initial code drop. Community management, triaging issues, reviewing and merging pull requests, keeping documentation current, and coordinating a roadmap all require dedicated resources. Whether Huawei commits to multi-year community support and stable maintainer processes will help determine if the repositories grow into active ecosystems or remain public but lightly maintained codebases.
Several crucial items remain unresolved. License choice will shape how companies can use, extend, and redistribute the released software. Permissive licenses such as Apache 2.0 or MIT allow commercial use and proprietary derivatives with few constraints. Copyleft options like GPL require derivative works to be open, which affects certain commercial development models. Huawei has not stated which licenses it will adopt for the December releases.
Governance is another open question. Will the projects sit under an independent foundation? Will Huawei grant commit rights to external maintainers? How will the community influence feature priorities and roadmaps? What process will govern accepting external contributions? These points often decide whether a codebase attracts a broad contributor base or remains primarily vendor-controlled despite public access.
For organizations weighing investment in Huawei’s open-source AI platform, the next three months are a window for preparation and internal evaluation. Teams can assess whether Ascend hardware matches workload profiles, plan integration testing, and align staff training so they can act quickly once code and documentation appear.
The December 31 release should deliver concrete materials for hands-on review: repositories to inspect, documentation to read, examples to run, and toolchains to exercise. The weeks after publication will reveal external response — whether third-party developers file issues, submit fixes, and begin building the ecosystem artifacts that make platforms more useful.
By mid-2026, usage patterns and contributor activity should indicate whether Huawei’s open-source strategy has fostered an active community around Ascend infrastructure or whether the initiative mostly grows under Huawei’s direct control. For developers, the roughly six-month period from December 2025 through mid-2026 will serve as an evaluation window to decide whether to invest significant engineering time and resources in the platform.

