As GPU-driven workloads push rack power densities toward the 1 MW mark by 2030, data centers across Asia Pacific face mounting strain. Facilities built for earlier computing generations are struggling to meet the heat and power demands of modern AI systems, and piecemeal upgrades are proving insufficient. Operators are increasingly planning purpose-built “AI factory” data centers designed from the ground up to handle dense GPU clusters and their supporting infrastructure.
Paul Churchill, Vice President of Vertiv Asia, spoke about how the region is preparing for that shift and what infrastructure changes lie ahead.
The market for AI data centers is set to expand sharply, with projections putting the sector at roughly $236 billion in 2025 and rising to nearly $934 billion by 2030. That surge is driven by widespread AI adoption in finance, healthcare, and manufacturing, where high-performance computing environments packed with GPUs demand far more power and cooling than legacy server farms.
In Asia Pacific, governments are investing in digitalization, 5G networks are growing, and cloud-native and generative AI applications are being rolled out at scale. Those factors are driving compute needs upward at an intensity the region has not seen before.
Churchill said meeting that demand requires more than bigger buildings; it calls for smarter infrastructure strategies that can scale and meet sustainability targets. “Infrastructure leaders must move beyond piecemeal upgrades. A future-ready strategy involves adopting AI-optimized infrastructure that combines high-capacity power systems, advanced thermal management, and integrated, scalable designs,” Churchill said.
Rack densities are climbing from around 40 kW today toward 130 kW, and they could reach 250 kW in some sites by 2030. Under those conditions, conventional air cooling is no longer adequate.
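A rough thermodynamic sketch shows why. The rack density below comes from the figures above; the temperature rises and fluid properties are textbook assumptions, chosen only to illustrate the gap between air and liquid as heat-transfer media.

```python
# Back-of-the-envelope heat removal for a high-density rack.
# Illustrative physics only; temperature rises and fluid
# properties are assumed, not vendor specifications.

RACK_KW = 130                      # rack density cited in the article

def airflow_m3s(power_kw, delta_t_k=15, rho=1.2, cp=1005):
    """Air volume flow (m^3/s) needed to carry power_kw at a delta_t_k rise."""
    return power_kw * 1000 / (rho * cp * delta_t_k)

def waterflow_ls(power_kw, delta_t_k=10, rho=1000, cp=4186):
    """Water volume flow (litres/s) for the same heat load."""
    return power_kw * 1000 / (rho * cp * delta_t_k) * 1000

air = airflow_m3s(RACK_KW)     # roughly 7.2 m^3/s of air per rack
water = waterflow_ls(RACK_KW)  # roughly 3.1 L/s of water per rack
print(f"air: {air:.1f} m3/s  water: {water:.1f} L/s")
```

Moving 130 kW with air alone demands several cubic metres of airflow per second for a single rack, while water handles the same load with a few litres per second, which is the physical case for direct-to-chip liquid cooling at these densities.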
To cope, Vertiv is building hybrid cooling platforms that pair direct-to-chip liquid cooling with air-based units. These arrangements can shift their balance as workloads change, lower energy consumption, and keep systems dependable. “Our coolant distribution units support direct-to-chip liquid cooling while keeping reliability and serviceability practical in high-density environments,” Churchill said.
Power distribution is becoming more complex as well. AI tasks can swing load levels quickly, so delivery infrastructure needs to respond in near real time. Vertiv is adapting rack power distribution units and busway systems for higher voltages and better load sharing. Smarter monitoring tools help operators balance demand, cut unused capacity, and lengthen uptime — a major concern in parts of Southeast Asia where grid stability varies.
The emergence of liquid-cooled GPU pods and the push toward 1 MW racks by chipmakers such as AMD and hyperscalers including Microsoft, Google, and Meta point to a deeper architectural change. Rather than retrofit older centers, developers are laying out new facilities specifically for AI workloads.
“The future of data-center architecture is hybrid, and these infrastructures require facilities to be built around liquid flow,” Churchill said. That shift involves rethinking floor plans, plumbing for coolant distribution, and more advanced power systems.
Next-generation builds will link cooling, power, and operational monitoring from the chip level up to the grid. In Asia Pacific, where hyperscale campuses are expanding fast, that kind of end-to-end approach helps meet performance targets and sustainability objectives.
Capacity in the region is expected to surge: Asia Pacific could surpass the US in commissioned data center power by 2030, approaching 24 GW. To absorb that growth, companies are moving away from ad hoc upgrades and toward full-stack AI factory centers.
Churchill described the transition as a staged process. The initial phase is integrated planning that brings power delivery, thermal systems, and IT operations together under a single strategy rather than treating them as separate silos. That makes deployment smoother and establishes a foundation for expansion.
The next move is adoption of modular, prefabricated systems that let firms add capacity in stages without large-scale disruption. “Companies can deploy factory-tested modules alongside existing infrastructure, gradually migrating workloads to AI-ready capacity without disruptive overhauls,” he said.
Sustainability must be part of every step. That includes wider use of lithium-ion energy storage, grid-interactive UPS architectures, and higher-voltage distribution to raise overall efficiency and resilience.
Vertiv has introduced PowerDirect Rack, a DC power shelf aimed at AI and high-performance computing. Moving more of the power chain to DC can cut losses by trimming conversion stages between the grid and servers. It also aligns well with renewables and battery storage, which are getting more common across Asia Pacific.
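The effect of trimming conversion stages can be illustrated with a simple chain-efficiency sketch. The stage names and efficiencies below are assumed round numbers for illustration, not Vertiv or PowerDirect Rack figures; end-to-end efficiency is just the product of the stages.

```python
from math import prod

# Illustrative conversion-chain comparison. Stage efficiencies are
# assumed round numbers, not measured or vendor-published values.

AC_CHAIN = {"UPS double conversion": 0.96,
            "PDU transformer":       0.98,
            "rack PSU (AC->DC)":     0.94}
DC_CHAIN = {"facility rectifier":    0.975,
            "DC power shelf":        0.98}

def delivered_fraction(chain):
    """Fraction of grid power that reaches the servers."""
    return prod(chain.values())

ac = delivered_fraction(AC_CHAIN)   # ~0.884
dc = delivered_fraction(DC_CHAIN)   # ~0.956
saved_kw_per_mw = (dc - ac) * 1000
print(f"AC: {ac:.3f}  DC: {dc:.3f}  saving per MW: {saved_kw_per_mw:.0f} kW")
```

Even with these generous assumed efficiencies for the AC path, dropping one conversion stage recovers on the order of tens of kilowatts per megawatt of IT load, which compounds quickly at AI-factory scale.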
That approach has real appeal in energy-constrained markets such as Vietnam and the Philippines, where flexible power strategies help keep sites operational. Churchill said DC power is “not just an efficiency play — it is a strategy for enabling sustainable scalability.”
Operators are also contending with tighter rules and growing grid limits as AI pushes up demand. This is especially acute in Southeast Asia, where network reliability and electricity tariffs vary widely.
Vertiv is partnering with operators on hybrid power models that incorporate lithium-ion batteries, microgrids, and other distributed systems to reduce reliance on the grid and boost uptime. Interest is rising in solar-backed UPS arrangements and newer storage technologies that smooth peaks and control costs.
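A peak-shaving scheme of the kind described can be sketched in a few lines. The load profile, grid limit, and battery size below are hypothetical; the point is only the mechanism of discharging storage above a contracted grid limit and recharging in the troughs.

```python
# Illustrative sketch of battery peak shaving for a volatile AI load.
# All numbers (load profile, battery size, grid limit) are assumed
# for illustration only.

GRID_LIMIT_KW = 800          # hypothetical contracted grid capacity
BATTERY_KWH = 500            # hypothetical lithium-ion storage

def shave_peaks(load_kw, grid_limit=GRID_LIMIT_KW, battery_kwh=BATTERY_KWH):
    """Discharge the battery when load exceeds the grid limit and
    recharge when there is headroom. Returns grid draw per hour."""
    soc = battery_kwh        # state of charge, start full
    grid_draw = []
    for load in load_kw:     # one sample per hour
        if load > grid_limit:
            discharge = min(load - grid_limit, soc)
            soc -= discharge
            grid_draw.append(load - discharge)
        else:
            charge = min(grid_limit - load, battery_kwh - soc)
            soc += charge
            grid_draw.append(load + charge)
    return grid_draw

# A spiky AI training load (kW), swinging between idle and peak:
profile = [400, 950, 1200, 600, 300, 1100, 900, 500]
print(shave_peaks(profile))
```

In this toy run the battery cuts the peak grid draw from 1,200 kW to 850 kW, the kind of smoothing that matters where tariffs penalize peaks or grid capacity is capped.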
Improving cooling performance remains a high priority. Hybrid liquid cooling setups can lower both energy and water consumption compared with older approaches. “Our focus is on delivering infrastructure that meets performance demands while aligning with ESG goals,” Churchill said. “We’re collaborating with our partners to keep AI-driven growth in the region responsible, sustainable, and aligned with long-term digital and environmental objectives.”
Many emerging economies in Asia Pacific face limits on land, uneven power availability, and gaps in skilled labor. In those environments, modular, factory-built data center systems present a pragmatic alternative.
Prefabricated modules can cut build times by up to 50 percent while boosting energy performance and scalability. They let operators add capacity incrementally, avoiding heavy upfront capital outlays. That flexibility is particularly useful for AI workloads, which may expand rapidly and unpredictably.
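The capital-efficiency argument can be made concrete with a toy model that compares building full capacity upfront against adding factory-built modules just ahead of demand. The demand curve and module size below are hypothetical.

```python
# Toy comparison of stranded (idle) capacity: monolithic build
# versus incremental modules. Demand and module size are assumed.

MODULE_MW = 2

def stranded_monolithic(demand_mw, monolithic_mw):
    """Idle MW-years when full capacity is built upfront."""
    return sum(monolithic_mw - d for d in demand_mw)

def stranded_modular(demand_mw, module_mw=MODULE_MW):
    """Idle MW-years when modules are added just ahead of demand."""
    total = 0
    for d in demand_mw:
        # deploy the smallest whole number of modules covering demand
        built = -(-d // module_mw) * module_mw   # ceiling division
        total += built - d
    return total

demand = [1, 3, 6, 8, 10]          # MW needed in each of five years
print(stranded_monolithic(demand, 10), stranded_modular(demand))
```

Under these assumptions the upfront build leaves 22 MW-years of capacity idle while the modular path leaves 2, illustrating why staged deployment suits workloads that grow unpredictably.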
By pairing compact layouts with energy-efficient operation, modular designs give operators a faster, lower-risk path to AI-ready capacity — a key advantage as the region’s digital economies expand.
The shift toward AI factory centers, supported by advanced cooling, DC power distribution, and modular construction, is changing how data centers are planned and run across Asia Pacific. As workloads grow and sustainability pressures rise, relying on legacy infrastructure is no longer viable.

