Is Liquid Cooling the Key Now That AI Pervades Everything?
Summary Bullets:
• Data center cooling has become an increasingly difficult challenge because AI accelerators consume massive amounts of power.
• Liquid cooling adoption is evolving from experimental to mainstream, starting with AI labs and hyperscalers, then moving into the colocation space, and later into enterprises.
As Generative AI (GenAI) takes an ever-stronger hold in our lives, the demands on data centers continue to grow. The heat generated by the high-density computing that increasingly resource-intensive AI applications require is pushing companies to adopt more innovative cooling techniques. As a result, liquid cooling, once a fairly experimental technique, is becoming mainstream.
Eye-watering amounts of money continue to pour into data center investment to run AI workloads, and heat management has become top of mind due to the high rack densities being deployed. GlobalData forecasts that worldwide AI revenue will reach $165 billion in 2025, marking annual growth of 26% over the previous year. The growth rate will accelerate to 34% in 2026 and rise further in subsequent years; in fact, the CAGR over the forecast period will reach 37%.
[Figure: GlobalData forecast of worldwide AI revenue]
Source: GlobalData
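As a rough illustration of the arithmetic behind those forecast figures, the short Python sketch below compounds the $165 billion 2025 baseline at year-over-year growth rates. Only the 26% and 34% figures come from the forecast above; the rates assumed for later years are illustrative placeholders, not GlobalData's actual year-by-year numbers.

```python
# Illustrative only: compound an assumed $165B 2025 baseline at yearly growth
# rates to see how a ~37% CAGR can emerge from accelerating annual growth.
baseline_2025 = 165.0  # $ billions (GlobalData's 2025 forecast)

# Year-over-year growth assumptions for 2026 onward. Only the 34% for 2026
# comes from the article; the later rates are placeholders for illustration.
growth_by_year = {2026: 0.34, 2027: 0.37, 2028: 0.40, 2029: 0.40}

revenue = baseline_2025
for year, growth in sorted(growth_by_year.items()):
    revenue *= 1 + growth
    print(f"{year}: ${revenue:,.0f}B")

# CAGR between the 2025 baseline and the final year of this toy forecast.
years = len(growth_by_year)
cagr = (revenue / baseline_2025) ** (1 / years) - 1
print(f"Implied CAGR 2025-{max(growth_by_year)}: {cagr:.1%}")
```

With these placeholder rates, the implied CAGR lands close to 37%, consistent with the idea that growth accelerating from 26% pushes the multi-year average well above the starting rate.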
The powerful hardware designed for AI workloads is growing in density. Although average rack densities today are usually below 10 kW, AI training clusters of 200 kW per rack are feasible in the not-too-distant future. Of course, the kW-per-rack figure varies a lot depending on the application, with traditional IT workloads for mainstream business applications requiring far fewer kW per rack than frontier AI workloads.
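To put those densities in perspective, here is a minimal back-of-the-envelope sketch. The 10 kW and 200 kW figures come from the paragraph above; the assumption that essentially all IT power ends up as heat is a standard rule of thumb, not a measurement.

```python
# Back-of-the-envelope comparison of rack heat loads (illustrative figures).
typical_rack_kw = 10       # traditional enterprise IT rack, upper end of "average"
ai_training_rack_kw = 200  # projected high-density AI training rack

# Essentially all electrical power drawn by IT equipment ends up as heat,
# so the cooling system must remove roughly the same number of kilowatts.
ratio = ai_training_rack_kw / typical_rack_kw
print(f"One 200 kW AI rack rejects about as much heat as {ratio:.0f} typical 10 kW racks.")
```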
Liquid cooling is a heat management technique that uses liquid to remove heat from computing components in data centers. Liquid has a much higher thermal conductivity and heat capacity than air, so it can absorb and transfer heat more effectively. By bringing a liquid coolant into direct contact with heat-generating components such as CPUs and GPUs, liquid cooling systems remove heat at its source and maintain stable operating temperatures.
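The physics behind that claim can be made concrete with a rough sensible-heat calculation: for a fixed heat load, the required coolant mass flow is Q / (cp × ΔT). The sketch below compares water and air using standard textbook property values; the 700 W chip power and 10 K temperature rise are assumed example figures.

```python
# Rough comparison of the coolant flow needed to remove a fixed heat load
# with water versus air, using Q = m_dot * cp * dT (sensible heat only).
chip_power_w = 700   # assumed heat load of one high-end AI accelerator
delta_t_k = 10       # allowed coolant temperature rise, in kelvin

coolants = {
    # name: (specific heat J/(kg*K), density kg/m^3) - standard textbook values
    "water": (4186.0, 997.0),
    "air":   (1005.0, 1.2),
}

for name, (cp, rho) in coolants.items():
    mass_flow = chip_power_w / (cp * delta_t_k)  # kg/s
    volume_flow = mass_flow / rho                # m^3/s
    print(f"{name}: {mass_flow*1000:.1f} g/s  ({volume_flow*1000:.2f} L/s)")
```

Even in this crude model, air needs thousands of times more volumetric flow than water to carry away the same heat, which is why high-density racks push operators toward liquid.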
Although there are many types of liquid cooling, direct-to-chip, also known as “cold plate,” is the most popular method, accounting for approximately half of the liquid cooling market. It uses a cold plate mounted directly on the chip inside the server; the direct contact improves heat transfer efficiency and dissipates heat where it is generated. The approach also allows high-end, specialized servers to be installed in standard IT cabinets, much like legacy air-cooled equipment.
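A common way to reason about direct-to-chip cooling is a simple thermal-resistance model, sketched below: junction temperature ≈ coolant supply temperature + power × total resistance. Every resistance and power value here is an illustrative assumption, not a figure from any specific cold plate product.

```python
# Minimal thermal-resistance sketch of a direct-to-chip ("cold plate") loop.
# Junction temperature ~= coolant supply temperature + power * total resistance.
# All numbers are illustrative assumptions.
chip_power_w = 700      # heat load of the accelerator
coolant_inlet_c = 30.0  # coolant supply temperature

# Thermal resistances in K/W between the die and the coolant.
r_tim = 0.02            # thermal interface material between die and cold plate
r_cold_plate = 0.03     # conduction plus convection inside the cold plate

junction_c = coolant_inlet_c + chip_power_w * (r_tim + r_cold_plate)
print(f"Estimated junction temperature: {junction_c:.1f} °C")
# With these assumptions: 30 + 700 * 0.05 = 65 °C, comfortably below typical
# GPU throttling thresholds; the equivalent die-to-room-air resistance of an
# air-cooled heatsink is usually much higher.
```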
There are innovative variations on the cold plate technique currently under experimentation. Microsoft is prototyping a method that takes direct-to-chip cooling one step further by bringing liquid coolant inside the silicon itself, where the heat is generated. The method applies microfluidics: tiny channels etched into the silicon chip create grooves that let cooling liquid flow directly onto the chip and remove heat more efficiently.
Swiss startup Corintis is behind the novel technique, which merges the electronics and the heat management system. Historically, the two have been designed and built separately, creating unnecessary obstacles because heat must propagate through multiple materials before reaching the coolant. Corintis instead co-designs the electronics and the cooling from the beginning, so the microchannels sit right underneath the transistors.
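The benefit being chased here can be framed in the same thermal-resistance terms: etching microchannels into the silicon removes layers from the heat path. The sketch below compares a conventional cold plate stack against an in-silicon microchannel stack; every resistance value is an illustrative assumption, not data from Microsoft or Corintis.

```python
# Illustrative comparison of the die-to-coolant heat path with and without
# microchannels etched into the silicon. Resistances are assumptions in K/W.
chip_power_w = 700
coolant_c = 30.0

# Conventional cold plate: heat crosses several material layers.
conventional_stack = {
    "silicon die": 0.010,
    "thermal interface material": 0.020,
    "cold plate base": 0.010,
    "cold plate convection": 0.020,
}

# In-silicon microchannels: coolant flows directly beneath the transistors,
# so most of the intermediate layers disappear.
microchannel_stack = {
    "silicon above channels": 0.005,
    "microchannel convection": 0.015,
}

for name, stack in [("conventional", conventional_stack),
                    ("microchannel", microchannel_stack)]:
    r_total = sum(stack.values())
    junction_c = coolant_c + chip_power_w * r_total
    print(f"{name}: total R = {r_total:.3f} K/W -> junction ≈ {junction_c:.0f} °C")
```

Under these toy numbers, eliminating the interface and cold plate layers cuts the total resistance by roughly two-thirds, which is the intuition behind putting the coolant directly under the transistors.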
