
Choosing between greenfield builds and brownfield retrofits determines an operator’s speed to market, total cost of ownership, and ability to meet future AI‑driven performance and sustainability targets.
The surge in AI and machine-learning workloads has shattered traditional air-cooling limits, with rack power densities now approaching 200 kW. Direct-to-chip liquid cooling and other hybrid solutions are emerging as the most practical path to maintaining energy efficiency and thermal stability. Operators must therefore rethink cooling architecture, moving beyond simple airflow to integrated liquid-assisted designs that can handle the intense heat flux of modern HPC clusters.
When evaluating site options, greenfield data centers provide unparalleled design freedom. Engineers can embed liquid‑first cooling loops, heat‑recovery systems, and renewable power sources from the ground up, creating a future‑proof environment that maximizes density and minimizes operational expenses. However, these projects demand extensive permitting, utility coordination, and capital outlay, often extending timelines by two years or more. In contrast, brownfield retrofits capitalize on existing power, water, and network infrastructure, slashing capital costs by up to half and delivering AI‑ready capacity within months. The trade‑off is limited flexibility; legacy structures may struggle with weight loads, fluid distribution, and the high‑density airflow patterns required for next‑gen hardware.
A pragmatic hybrid approach is gaining traction among hyperscalers and enterprise operators alike. By partitioning facilities—dedicating high‑density zones to liquid cooling while retaining air‑cooled sections for legacy workloads—organizations can incrementally upgrade assets, extend the life of existing buildings, and spread investment over time. This zoned strategy reduces risk, improves thermal efficiency, and positions operators to transition seamlessly to a liquid‑first future without the disruption of a full rebuild. As AI workloads continue to accelerate, the ability to blend greenfield ambition with brownfield agility will become a decisive competitive advantage.
AI and machine learning (ML) are driving explosive growth in high-performance computing (HPC). Compute densities are moving into territory once considered science fiction: rack power densities that used to be 10–20 kW now approach and even exceed 100 kW, with some AI clusters within touching distance of 200 kW per rack.
Even the most optimized air systems cannot handle the heat fluxes generated by this technology. Hybrid cooling is the most practical path forward.
How this cooling approach is implemented hinges heavily on whether operators are building brand‑new sites (greenfield) or retrofitting older facilities (brownfield). When establishing high‑density, liquid‑assisted HPC environments, there are trade‑offs between greenfield new construction and brownfield retrofits in terms of sustainability, scalability, and cost‑effectiveness.
Air-based systems scale poorly as compute density rises. The airflow required becomes impractical above 20–30 kW per rack, and the problem is exacerbated by dense GPU arrays and AI accelerators. Even with the tightest containment, hot spots start to emerge, pressure drops in the cold aisles increase, and fans are pushed to their maximum speed.
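To put numbers on this, the sensible-heat relationship Q = m·cp·ΔT fixes how much air a rack needs. Below is a minimal Python sketch, assuming standard air properties and a 12°C supply-to-return temperature rise (both assumptions for illustration, not figures from this article):

```python
# Airflow needed to remove a rack's heat load with air alone.
# Assumptions (illustrative): air density ~1.2 kg/m^3, specific
# heat ~1005 J/(kg*K), 12 K supply-to-return temperature rise.
AIR_DENSITY = 1.2      # kg/m^3
AIR_CP = 1005          # J/(kg*K)
DELTA_T = 12           # K
M3S_TO_CFM = 2118.88   # 1 m^3/s in cubic feet per minute

def required_airflow_cfm(rack_kw: float) -> float:
    """Volumetric airflow (CFM) to absorb rack_kw of sensible heat."""
    watts = rack_kw * 1000
    m3s = watts / (AIR_DENSITY * AIR_CP * DELTA_T)  # Q = m_dot*cp*dT
    return m3s * M3S_TO_CFM

for kw in (20, 50, 100, 200):
    print(f"{kw:>3} kW rack -> ~{required_airflow_cfm(kw):,.0f} CFM")
```

A 100 kW rack needs roughly 15,000 CFM, five times what a 20 kW rack does, which is exactly where containment, pressure drop, and fan limits start to bite.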
Hybrid cooling solutions that combine direct liquid cooling (DLC) technology, such as direct‑to‑chip (DTC) cooling, with conventional airflow designs can reduce energy use and operating expenses while increasing efficiency.
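The arithmetic behind that hybrid split is simple: DTC cold plates capture most of the heat at the silicon, and the room's air system handles the rest. A short sketch, assuming a 75% liquid-capture fraction (an assumed planning figure, not one from this article):

```python
# Heat split in a hybrid (DTC + air) rack. The liquid-capture
# fraction is an illustrative assumption; real values depend on
# server design, cold-plate coverage, and coolant temperatures.
def heat_split_kw(rack_kw: float, liquid_fraction: float = 0.75):
    """Return (kW rejected to the liquid loop, kW left for air)."""
    to_liquid = rack_kw * liquid_fraction
    return to_liquid, rack_kw - to_liquid

liquid, air = heat_split_kw(100)
print(f"100 kW rack: {liquid:.0f} kW to liquid, {air:.0f} kW to air")
```

Even with DTC, roughly 25 kW per rack still lands on the air system, about a full legacy rack's worth of heat, which is why the airflow side of the design still matters.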
Greenfield data centers offer designers the opportunity to incorporate innovative heat-reuse systems, liquid-first cooling, and renewable power integration from the outset. A new build is an exceptional chance to get the physical layout right, optimizing power distribution, cooling systems, and network architecture. Because you are designing and building from scratch, you are working with a blank canvas for implementing efficiency and sustainability goals, along with the freedom to choose the optimal site location for power, water, and network access.
For operators planning long‑term HPC roadmaps, new construction offers the greatest flexibility and sustainability. Unrestricted design freedom enables the implementation of the newest innovations and best practices, maximizing efficiency, density, performance, and scalability.
Optimized design: Components and white‑space arrangements can be optimized for rack densities that exceed 100 kW.
Scalability: Anticipated growth is enabled with high‑capacity infrastructure and modular construction.
Sustainability: Incorporating heat recovery and reuse, renewable energy, and low-WUE (water usage effectiveness) solutions increases energy efficiency, minimizing the environmental impact; see the sketch after this list.
Operational clarity: Standardized components and simplified fluid distribution expedite maintenance. Modern safeguards can be integrated into new buildings to ensure compliance with regulations and industry standards.
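For reference, the efficiency metrics behind that sustainability point are simple ratios. The sketch below uses the standard Green Grid definitions of WUE and its energy counterpart PUE, with made-up annual figures for a 10 MW IT load:

```python
# Standard data center efficiency metrics (The Green Grid definitions).
def pue(total_facility_kwh: float, it_kwh: float) -> float:
    """Power usage effectiveness: 1.0 means all energy reaches IT."""
    return total_facility_kwh / it_kwh

def wue(site_water_liters: float, it_kwh: float) -> float:
    """Water usage effectiveness, in liters per kWh of IT energy."""
    return site_water_liters / it_kwh

# Illustrative (assumed) annual figures for a 10 MW IT load:
it_kwh = 10_000 * 8_760                            # 87.6 GWh of IT energy
print(f"PUE: {pue(it_kwh * 1.25, it_kwh):.2f}")    # 25% facility overhead
print(f"WUE: {wue(35_000_000, it_kwh):.2f} L/kWh") # 35 ML of water per year
```

Lower is better for both: heat recovery and renewable integration attack the PUE numerator, while low-water cooling designs attack WUE.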
On the downside, site selection timelines, permits, utility coordination, and design validations lengthen project cycles and increase capital costs. Uncertainty can also arise if HPC workloads advance faster than the infrastructure being built for them.
Building a greenfield data center is a calculated investment for a future‑ready infrastructure. Advantages in efficiency and future‑proofing often render the cost worthwhile for hyperscalers dedicated to sustainability and high‑density computing and present an opportunity for organizations looking to establish new benchmarks for data‑center excellence.
While the industry obsesses over greenfield “AI‑ready” operations, the fastest path forward may lie in existing facilities.
Typically costing 30–50% less than a new build and, more importantly, avoiding two-plus years of revenue delay, brownfield retrofits remain the most direct route to HPC enablement. The pros of brownfield retrofits: they are typically more cost-effective, faster to deploy, and viable anywhere that already has utilities and connectivity in place. The cons: the project is constrained by the existing footprint and infrastructure, an older building may be energy-inefficient, and retrofitting new DLC solutions can be challenging.
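To make that trade-off concrete, here is a deliberately crude cash comparison. The only inputs taken from this article are the 30–50% capex saving and the roughly two-year delay; everything else is an assumption for illustration:

```python
# Toy brownfield-vs-greenfield comparison over a five-year horizon.
# The capex saving (mid-point of 30-50%) and ~2-year greenfield delay
# follow the article; all other numbers are invented for illustration.
def net_position(capex: float, months_to_live: int,
                 revenue_per_month: float, horizon: int = 60) -> float:
    """Cumulative revenue minus capex over the horizon (arbitrary units)."""
    return max(0, horizon - months_to_live) * revenue_per_month - capex

GREENFIELD_CAPEX = 100.0
BROWNFIELD_CAPEX = 0.6 * GREENFIELD_CAPEX  # mid-point of 30-50% saving
REVENUE = 3.0                              # per month once live (assumed)

print(f"Brownfield (live month 9):  {net_position(BROWNFIELD_CAPEX, 9, REVENUE):+.0f}")
print(f"Greenfield (live month 33): {net_position(GREENFIELD_CAPEX, 33, REVENUE):+.0f}")
```

On these invented numbers the retrofit is far ahead at year five; the greenfield case rests on lower operating costs and higher density later in the asset's life, which this toy model deliberately ignores.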
For organizations seeking rapid deployment, retrofits that upgrade existing data centers to support higher densities are an attractive option. Leveraging existing power, water, and building infrastructure, with lower initial capital expenditure, can also avoid the long timelines associated with new construction.
Speed to deployment: Accelerates AI readiness without waiting for new buildings to be constructed.
Lower upfront investment: Existing electrical and mechanical systems can be reused or upgraded incrementally, controlling capex.
Reduced risk: Gradual transition strategies extend asset life and manage total cost of ownership (TCO).
Proximity to network exchanges: Brownfield sites are usually in established locations close to the exchanges where latency matters.
The main issue with brownfield sites is that they are constrained by legacy designs never intended for 40–100 kW per rack, liquid cooling, or the step-up in power that AI clusters demand. Existing air-cooled layouts may limit achievable rack density and airflow management. Floors often can't handle the weight of DTC racks or immersion tanks, and retrofitting DLC into air-cooled halls can be complex.
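The structural point is easy to check with back-of-the-envelope arithmetic. In the sketch below, the rack mass, footprint, and floor rating are all assumed values, stand-ins for what a real structural survey would provide:

```python
# Back-of-the-envelope floor-loading check for a DTC rack.
# Rack mass, footprint, and floor rating are illustrative
# assumptions; always verify against the structural survey.
RACK_MASS_KG = 1_600          # fully loaded liquid-cooled rack (assumed)
FOOTPRINT_M2 = 0.6 * 1.2      # standard 600 x 1200 mm footprint
FLOOR_RATING_KG_M2 = 1_200    # assumed legacy raised-floor rating

load = RACK_MASS_KG / FOOTPRINT_M2
verdict = "OVER" if load > FLOOR_RATING_KG_M2 else "within"
print(f"Imposed load: {load:,.0f} kg/m^2 ({verdict} the "
      f"{FLOOR_RATING_KG_M2:,} kg/m^2 rating)")
```

At roughly 2,200 kg/m², nearly double the assumed rating, the options are spreader plates, floor reinforcement, or placing racks directly on the slab.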
However, the technology ecosystem for retrofit solutions has improved enormously. With a hybrid cooling design, DTC cooling and prefabricated liquid loops can be added to existing systems without a total rebuild, allowing incremental upgrades.
Both brownfield and greenfield approaches have significant advantages, depending on the organization’s objectives and timescales. While brownfield retrofits provide agility and cost control, greenfield sites enable optimal performance and sustainability.
A hybrid strategy can strike a balance between operational continuity and financial risk by leveraging existing resources while preparing for the transition to high-density, liquid-first HPC architectures. Zoned hybrid cooling, which separates high-density HPC clusters from legacy IT equipment (ITE) via containment or partitioned rows, allows liquid-cooled racks to operate efficiently without disrupting air-cooled areas. Non-HPC zones can continue to use existing air-cooling systems, enabling focused investment that extends asset life, boosts performance, and lowers TCO without requiring a complete rebuild.
This approach improves thermal efficiency significantly while limiting disruption. Economically, it prolongs the useful life of current assets while keeping the facility ready for a liquid-first transition.
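A zoned plan reduces to simple bookkeeping: each zone's rack count and density determine the load each cooling system must carry. A minimal sketch with assumed zone sizes, densities, and the same 75% liquid-capture fraction as above:

```python
# Zoned hybrid-cooling bookkeeping: a liquid-cooled HPC zone plus
# an air-cooled legacy zone. All zone sizes and densities are
# assumptions for illustration.
LIQUID_FRACTION = 0.75                  # share of DTC heat captured by liquid

hpc_total_kw = 40 * 100                 # 40 HPC racks at 100 kW each
liquid_loop_kw = hpc_total_kw * LIQUID_FRACTION
hpc_air_residual_kw = hpc_total_kw - liquid_loop_kw
legacy_air_kw = 200 * 8                 # 200 legacy racks at 8 kW each

print(f"Liquid loop load: {liquid_loop_kw:,.0f} kW")
print(f"Air system load:  {hpc_air_residual_kw + legacy_air_kw:,.0f} kW")
```

Because the air plant that already serves the legacy hall can often absorb the DTC residual, the liquid zone can land without disturbing the air-cooled areas, which is the whole point of the zoned approach.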
The best design is not necessarily the one that is perfect today; it is the one that can adapt easily as requirements evolve. The most perceptive players are those who adopt an evolutionary mindset.
By carefully fusing the vision of greenfield design with the speed of brownfield retrofits, operators can save costs, manage risk, and prepare their infrastructure for the AI‑driven HPC of the future.