Heat Transfer Engineering, 36:523–538, 2015
Copyright © Taylor and Francis Group, LLC
ISSN: 0145-7632 print / 1521-0537 online
DOI: 10.1080/01457632.2014.939032

Current Status and Future Trends in Data-Center Cooling Technologies

ZHEN LI (1) and SATISH G. KANDLIKAR (2)

(1) Department of Engineering Mechanics, Tsinghua University, Beijing, China
(2) Mechanical Engineering Department, Rochester Institute of Technology, Rochester, New York, USA

Address correspondence to Professor Satish G. Kandlikar, Mechanical Engineering Department, Rochester Institute of Technology, 1 Lomb Memorial Drive, Rochester, NY 14623, USA. E-mail: [email protected]

Color versions of one or more of the figures in the article can be found online at www.tandfonline.com/uhte.
Data-center cooling strategies have evolved from their original roots in room air-conditioning systems to their current status as a source of low-grade thermal energy. This paper presents an overview of the different technologies driving the evolution of data-center cooling systems. The current status, future research trends, and opportunities for developing energy-efficient single-phase and two-phase systems are highlighted. These changes are warranted more than ever, as the majority of large-scale data centers continue to be cooled by conventional air cooling technology.
INTRODUCTION
Data-center cooling systems have evolved from cooling a single server or a small cluster of servers to cooling giant server farms of information technology (IT)-centric units. These large data centers play an important role in the information economy, serving social media, financial institutions, the consumer and retail sector, governmental infrastructure, universities, and scientific research establishments. From an energy consumption perspective, modern large-scale data centers consume a formidable amount of electricity worldwide. According to a recent estimate, the rapid growth in data centers has resulted in up to 100 times more energy consumption per square meter than in commercial office spaces [1].
In 2005, 1.2% of the total U.S. energy consumption was attributed to server-driven power usage [2]. In 2006, the U.S. Environmental Protection Agency (EPA) reported that 60 billion kWh (1.5% of U.S. electricity usage) was consumed by data centers [3]. Over the past six years, energy use by these centers and their supporting infrastructure is estimated to have nearly doubled.
Similar energy consumption trends are seen in other parts of the world. In Japan, the energy consumption in 2009 amounted to 7 billion kWh and was expected to increase by approximately 7% annually, reaching 10.5 billion kWh in 2015 [4]. In China, an analysis conducted in 2010 showed that the energy consumption of data centers accounted for 1% of the country's total electricity consumption. Of this usage, the IT electronic components made up about 50% and the cooling systems about 40%, while the remaining 10% went to auxiliary systems (humidification equipment, lighting, power systems, etc.) [5]. Figure 1 shows a breakdown of the energy consumed by the different equipment associated with data centers; the cooling system, including the computer room air conditioning (CRAC) units, chiller, and humidifier, accounts for 45% of the total energy consumption, while the IT equipment accounts for 30% [6]. In simple terms, 1 kWh of energy consumed by the IT equipment requires another 1 kWh of energy to drive the cooling and auxiliary systems [7]. While computer and IT engineers are focusing on developing more efficient hardware and software, the heat transfer community has been exploring various possibilities to reduce the energy consumption for removing the heat generated in the IT equipment.
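As a quick worked example of the power usage effectiveness (PUE) metric used later in this paper: PUE is total facility energy divided by IT energy, so the 1-kWh-per-1-kWh rule of thumb [7] corresponds to PUE of about 2, while the Figure 1 breakdown (IT at 30% of the total) corresponds to PUE of about 3.3. The short sketch below simply encodes that arithmetic; it is an illustration, not analysis from the paper.

```python
# PUE (power usage effectiveness) = total facility energy / IT energy.
# Two estimates implied by the text: the "1 kWh of overhead per 1 kWh
# of IT" rule of thumb [7], and the Figure 1 breakdown where IT is
# 30% of the total facility energy.
pue_rule_of_thumb = (1.0 + 1.0) / 1.0   # IT + equal overhead -> ~2.0
pue_figure1 = 1.0 / 0.30                # total / IT share   -> ~3.3

print(f"PUE from 1:1 rule of thumb:  {pue_rule_of_thumb:.1f}")
print(f"PUE from Figure 1 breakdown: {pue_figure1:.1f}")
```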
The traditional air cooling system employed in data centers not only uses a significant amount of energy but also consumes a large amount of water. Since the IT equipment operates at relatively moderate temperatures, between 50◦C and 80◦C, there is also an opportunity to utilize the rejected low-grade thermal energy for heating and other applications. Developing new technologies that harvest the total energy efficiency benefits and create environmentally sustainable processes offers opportunities for large-scale conservation of both energy and water.
Figure 1 Energy consumption in an air-cooled data center, adapted from [6]. CRAC: computer room air conditioning, PDU: power distribution unit, UPS: uninterruptible power supply.

The air-cooling systems are made up of three main elements: the refrigeration chiller plant (including the cooling tower fans and condenser water pumps, in the case of water-cooled condensers), the building chilled water pumps, and the data-center floor air-conditioning units. About half the energy used for cooling is consumed by the refrigeration chiller compressor, and about one-third is used by the room-level air-conditioning units for air movement.

TRADITIONAL DATA-CENTER COOLING SYSTEM

The conventional way of data-center cooling, using "computer room air handler" (CRAH) or CRAC units, is by air cooling only. Figure 2 shows a schematic overview of an air-cooled data center in which the floor is raised from the ground. The server racks are placed face to face and back to back to create passageways for cold and warm air streams. Air outlets are placed on the floor of the cold passageway, where chilled air comes out and enters the room. Chilled air passes through the server rack, cooling the electronic equipment; it then comes out from the back of the server racks as a hot air stream, and finally the hot air is sent back to the air-conditioning system, where it is cooled down by chilled water.

Figure 2 Schematic of a data center using an air cooling system.

As is well recognized in the literature, the conventional air-cooling system has a number of shortcomings:

a) Air has poor thermal properties, resulting in a low convection heat transfer coefficient. The low heat transfer coefficient, coupled with a large temperature rise of the air due to its low specific heat, produces a large temperature gradient within the server rack; this temperature difference can reach as high as 30◦C.
b) In some locations within the server, the temperature can reach quite a high value due to localized hot spots. To effectively cool these hot spots, the chilled air temperature needs to be set unnecessarily low so as not to exceed the electronic equipment's recommended temperature limit. For such diverse heat loads, the IT equipment may not be cool enough at certain locations while the server room is overcooled.
c) The refrigeration system needs to operate at all times and under all outdoor temperature conditions, even during winter in cold regions.
d) The heat generated by the IT equipment is removed indirectly, after being picked up by the air in the server room.

In order to make data-center air-cooling systems energy efficient, Srinarayana et al. [8] compared CRAC systems with raised and nonraised floors. Breen et al. [9] presented a model for analyzing the performance of an air cooling system and, based on their findings, proposed widening the operating temperature range of data centers to improve energy efficiency. For example, the potential gains in the coefficient of performance (COP) were estimated at approximately 8% for every 5◦C increase in the rack air inlet temperature. A number of researchers have proposed different operating strategies to improve the energy efficiency of the cooling systems [10–14]. However, the CRAC system efficiency is inherently limited by the drawbacks mentioned earlier, and alternative systems using liquid cooling are making headway. For a special kind of air cooling, Maguire et al. [15] analyzed advanced air-cooling approaches inside portable projection display equipment through computational fluid dynamics (CFD) analysis. Such numerical simulations are becoming more common for data-center components as well.
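To illustrate how the Breen et al. [9] estimate compounds, the minimal sketch below applies an 8% COP gain per 5◦C of rack inlet temperature; the baseline COP and temperature values are assumed illustrative numbers, not figures from the paper.

```python
# Compounding the ~8% COP gain per 5 C of rack air inlet temperature
# reported by Breen et al. [9]. The baseline COP of 3.0 at a 15 C
# inlet is an assumed illustrative value.
def cop_at_inlet(t_inlet_c, t_ref_c=15.0, cop_ref=3.0, gain=0.08):
    """Estimate chiller COP at a given rack air inlet temperature."""
    steps = (t_inlet_c - t_ref_c) / 5.0  # number of 5 C increments
    return cop_ref * (1.0 + gain) ** steps

for t in (15, 20, 25, 30):
    print(f"inlet {t} C -> COP ~ {cop_at_inlet(t):.2f}")
```

Raising the inlet from 15◦C to 30◦C in this sketch improves the COP by roughly 26%, which is why widening the allowable operating range is attractive.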
COLD-PLATE-BASED LIQUID COOLING SYSTEMS
Liquid coolants in general have better thermal properties,
such as higher thermal conductivity, specific heat, and density
as compared to air. This also leads to a higher convection heat
transfer coefficient and reduced heat transfer surface area requirement. The electronic devices are mounted on cold plates
through which the liquid is circulated. This eliminates the direct thermal interaction with the room air, which is essentially
isolated from the cooling system. The room heat gains and
thermal regulation are thus of no direct consequence, and greater
control can be achieved in regulating temperature at individual
cold plate levels.
There are two ways in which liquid cooling is implemented in
cold plates: single-phase liquid cooling systems and two-phase
cooling systems. Both of them fall under direct cooling in that
the cold plate is directly attached to the central processing unit
(CPU) and other heat-generating devices. Some of the operational features of these systems are discussed in the following
sections.
Single-Phase Cold Plate
The heat exchanger (cooling jacket) is a metal plate with high
thermal conductivity. The heat-generating devices are mounted
on top of the plate, while cooling liquid (coolant) flows in the
coolant passages within the cold plate or through the tubes
attached or soldered to the cold plate. Heat generated within the
devices is transferred to the coolant channels by conduction and
then removed by single-phase convection to the liquid flowing
within the channels.
Two-Phase Cold Plate
The cooling approach in two-phase cold plates is similar to
the single-phase cold plates, except that the liquid evaporates
during its passage through the cold plate, absorbing the latent
heat. The large latent heat, coupled with high heat transfer coefficient and little variation in temperature of the evaporating liquid
during its passage through the cold plate, makes the two-phase
cold plates attractive.
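To make the sensible-versus-latent distinction concrete, the sketch below compares the coolant flow needed to absorb a fixed heat load in the two modes. The property values, the 10◦C single-phase temperature rise, and the 200 W load are illustrative assumptions, not figures from the paper.

```python
# Illustrative comparison of coolant flow for a 200 W device.
# Property values are rounded textbook numbers (assumptions).
Q = 200.0            # heat load, W

# Single-phase water cold plate: sensible heat, Q = m_dot * cp * dT
cp_water = 4180.0    # J/(kg K), specific heat of water
dT = 10.0            # assumed allowable coolant temperature rise, K
m_single = Q / (cp_water * dT)

# Two-phase R134a cold plate: latent heat, Q = m_dot * h_fg
h_fg_r134a = 180e3   # J/kg, approx. latent heat near room temperature
m_two = Q / h_fg_r134a

print(f"single-phase water: {m_single*1000:.2f} g/s")
print(f"two-phase R134a:    {m_two*1000:.2f} g/s")
```

The several-fold lower circulation rate in the two-phase case is the origin of the smaller pumps and more uniform plate temperatures noted above.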
Comparison of Single-Phase and Two-Phase Cold Plates
Both single-phase and two-phase cold plate designs offer superior performance compared to air-cooled systems. Both are in direct (conduction heat transfer mode) communication with the CPU and other electronic devices mounted on them. Effective cooling can be achieved to meet the individual thermal requirements of devices through their proper placement on the cold plates. Hot spots can be effectively managed by providing larger surface area and higher heat transfer coefficients in the coolant channels under, or in the immediate vicinity of, the devices responsible for creating them. Figure 3 compares the cold-plate-based liquid cooling system with a conventional air-cooled system. The temperature gradient is reduced in the server rack, resulting in greater temperature uniformity over the entire region [16].

Figure 3 Comparison of cooling systems: (a) traditional cooling by CRAC and (b) liquid cooling system [16]. Redrawn, not to scale.

The main differences between the single-phase cold plates and the two-phase cold plates can be summarized as follows:

• The single-phase systems use sensible heat while the two-phase systems utilize latent heat, resulting in generally larger cooling capacities for the two-phase cold plates.
• The high latent heat in the two-phase cold plates also reduces the liquid circulation rate and provides a more uniform cold plate temperature.
• The vapor generated by evaporation in a two-phase cold plate requires larger return pipes for its transport to the condenser. However, buoyancy facilitates vapor transport and provides an opportunity for gravity-driven, pumpless thermosyphon loop operation in the two-phase coolant loop.
Desirable Features of Cold-Plate-Based Systems
Shortening the heat flow path. Keeping the coolant loop isolated from the room air provides opportunities for harvesting energy that are not possible with conventional air-cooled systems. Figure 4 shows the differences between traditional cooling by CRAC and liquid cooling systems. In the case of an air-cooled system, the heat removed from the devices maintained at 85◦C is transferred to the chiller at low temperatures of around 10◦C. In a liquid-cooled single-phase or two-phase system, the exit temperature of the coolant can be kept at a high enough value that it can be reused for a number of different end uses. District heating using the warm coolant can be accomplished if there is a residential or commercial establishment in the vicinity of the data center. If there is a power plant nearby, the available heat can be used to heat the feedwater to the steam boiler, thereby improving the power generation cycle efficiency. If the devices are operated at higher temperatures, it may be possible to operate an absorption refrigeration system [17]. These various energy paths are illustrated in Figure 4.

Figure 4 Shortening the heat path to increase the cooling efficiency [17]. Redrawn, not to scale.

Currently, the vast majority of data centers are air-cooled, using either computer room air-conditioning units or free cooling (fan-driven flow of outside air in cold regions). Switching to two-phase on-chip cooling provides an opportunity to reduce the power consumption considerably and facilitates the reuse of the removed heat by shortening the heat flow path, as illustrated in Figure 4.
Integrated Microchannel Cold Plates
The energy efficiency can also be improved by incorporating
microchannel or minichannel flow passages in the single-phase
and two-phase cold plates. The high heat transfer coefficients
achieved in these small-scale passages reduce the temperature
difference between the devices and the coolant stream. Increasing the inlet temperature of the coolant reduces the cooling
costs associated with cooling the heated coolant in a chiller.
Similarly, using microchannel/minichannel flow passages in the
chiller, or in the condenser in the case of a two-phase loop,
reduces the required temperature difference. Overall, these features will enable a reduction in the equipment size, increase the
required chilled water supply temperature, and extend the range
of free cooling temperatures of the outside air. Another advantage is that the server physical sizes could be reduced by increasing the heat flux levels and packaging more devices on a cold
plate.
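The benefit of small passages follows from the laminar, fully developed relation h = Nu·k/Dh, with the Nusselt number roughly constant; the sketch below uses illustrative values (water, a circular duct) to show how the heat transfer coefficient grows as the channel shrinks. It is a simple scaling illustration, not a design calculation from the paper.

```python
# Laminar fully developed flow: h = Nu * k / D_h with Nu ~ constant,
# so the heat transfer coefficient rises as the channel shrinks.
# Values are illustrative assumptions (water; Nu = 4.36 for constant
# heat flux in a circular duct).
Nu = 4.36           # laminar, constant-heat-flux circular duct
k_water = 0.6       # W/(m K), thermal conductivity of water

for d_h_mm in (3.0, 1.0, 0.5, 0.1):   # mini- to microchannel sizes
    h = Nu * k_water / (d_h_mm / 1000.0)
    print(f"D_h = {d_h_mm} mm -> h ~ {h:,.0f} W/(m^2 K)")
```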
Figure 5 shows a schematic of a liquid cooling system incorporating microchannels directly on the chip [18]. Such systems could be incorporated alongside cold plates serving lower heat flux components. The heat generated by the chip is removed by evaporation; a thermal bus (carrying water or boiling refrigerant) removes the heat through microcondensers positioned on the back of each blade. Each condenser plate is clamped to the thermal bus cooling plate of the rack using replaceable thermal interface material (TIM) pads, so that the blade can be hot-swapped.

Meijer [18] indicated that in such liquid cooling systems, a heat exchanger with microchannels can be designed so that the thermal resistance between the transistor junction and the fluid is reduced, which can also lead to energy savings through reduced temperature differences between the heat source and the coolant stream.

Figure 5 Blade/cabinet architecture with two-phase on-chip cooling driven by a liquid pump [18]. Redrawn, not to scale.
COUPLED AIR AND LIQUID COOLING SYSTEMS
Iyengar et al. [19, 20] and David et al. [21] used a coupled liquid and air cooling system, as shown in Figure 6. In this system, cold liquid entered the rack and then flowed to the sidecar heat exchanger for cooling the recirculated air. The air circulating within the rack was cooled by the incoming cold liquid using an air-to-liquid heat exchanger mounted within the rack enclosure. Air was cooled in the heat exchanger and then circulated over the components to be cooled. The air-cooled devices included storage disk drives, power supplies, and surface-mounted components on printed circuit boards. This avoided the complexity of mounting these low-power devices on a cold plate. After cooling the air in the rack, liquid entered the inlet node manifold; after cooling the components, hot liquid exited from the exit node manifold to the buffer unit, where it exchanged heat with cold water, as shown in Figure 6.

Figure 6 Rack plan view schematic [20]. ©IEEE. Reproduced by permission of IEEE. Permission to reuse must be obtained from the rightsholder.
Figure 7 shows a new design combining the CPU-mounted cold plates and air-cooled components served by a rack-level air–liquid heat exchanger [20]. As can be seen, in the buffer unit the hot liquid released its heat to the cold water (with glycol added to avoid freezing in subzero winter conditions). On the left side, the water–glycol liquid exchanged heat with ambient air. The server in the experimental loop shown in Figure 7 was an IBM x3550 M3 server, which is 1U tall (1.75 inches; shown in Figure 8). The microprocessor modules were cooled using cold plate structures; the dual inline memory module (DIMM) cards were cooled by attaching them to a pair of conduction spreaders, which were then bolted to a cold rail that had water flowing through it, as shown in Figure 9.

While the cooling system accounted for approximately 30% of the total energy, such energy-centric configurations are expected to result in significant energy savings.

Figure 7 Schematic of new data-center cooling design: dual enclosure liquid cooling (DELC) [20]. ©IEEE. Reproduced by permission of IEEE. Permission to reuse must be obtained from the rightsholder.

Figure 8 Hybrid air–water cooled 1U server designed for intake of water and air [20]. ©IEEE. Reproduced by permission of IEEE. Permission to reuse must be obtained from the rightsholder.

Figure 9 Node cooling subassembly for partially liquid cooled server [20]. ©IEEE. Reproduced by permission of IEEE. Permission to reuse must be obtained from the rightsholder.
CHILLER-LESS COOLING SYSTEM
Parida et al. [2] and David et al. [22] conducted experiments investigating water cooling of server microprocessors and memory devices in an energy-efficient chiller-less data center. For the chiller-less cooling, there was only a water-cooled section in the server rack. Figure 10 shows the details of their experimental liquid cooling loop and the devices installed in an IBM server.

Figure 11 shows a schematic of the chiller-less data-center liquid cooling design. As can be seen, water entered the cold plates and cold rails, where the CPUs and DIMMs were cooled. Hot water then came out of the rack and flowed to the liquid–liquid heat exchanger to transfer heat to the outside cold water. Finally, the outside water flowed in the outdoor coolant loop and was cooled in the outdoor heat rejection exchanger by ambient air. The indoor and outdoor working fluids were both water. Thus, water was used as the medium for heat removal from indoors to the ambient air. This experiment illustrates a typical use of a natural cold source (ambient air) and eliminates the cost of running a chiller loop and refrigeration equipment.

Figure 10 (a) Schematic of the volume server with node liquid cooling loop and other server components. (b) Node liquid cooling loop, with liquid cooling components for both the processors (CPU 1 and CPU 2) and the 12 DIMMs (numbered 2 through 18), installed in an IBM System X volume server [2]. ©IEEE. Reproduced by permission of IEEE. Permission to reuse must be obtained from the rightsholder.

Figure 11 Schematic of the chiller-less data-center liquid cooling design [2]. ©IEEE. Reproduced by permission of IEEE. Permission to reuse must be obtained from the rightsholder.
Figure 12 shows the results of the Parida et al. [2] experiments with a chiller-less cooling system. Different coolant and device temperatures are plotted over a typical 24-hour daily cycle. As can be seen, the outdoor temperature varied with time from about 20◦C to 30◦C, while the temperatures of DIMM17, CPU1, and CPU2 were still maintained below 65◦C. This system utilized only "free" ambient environment cooling. This approach greatly reduced cooling energy usage and could reduce data-center refrigerant and makeup water usage. Since the device temperature was above any design outdoor temperature, data-center designs could be based entirely on the chiller-less cooling concept. The sizes of the heat exchangers and the associated costs will vary depending on the available temperature difference at any given location.

Figure 12 Variation of temperature from the outdoor air to the server components [2]. PECI: platform environment control interface. DTS: digital thermal/temperature sensor. MWU: modular water unit. ©IEEE. Reproduced by permission of IEEE. Permission to reuse must be obtained from the rightsholder.
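As a back-of-envelope illustration of the heat-exchanger sizing point above (not a calculation from Parida et al. [2]): the outdoor exchanger in a chiller-less loop must reject the full load across the water-to-ambient temperature difference, so its required conductance UA grows as the ambient warms. All numbers in the sketch are assumptions.

```python
# Back-of-envelope sizing of the outdoor heat rejection exchanger in a
# chiller-less loop: Q = UA * dT. All numbers are assumptions for
# illustration, not data from Parida et al. [2].
Q_rack = 15e3        # W, assumed rack heat load
t_water_out = 45.0   # C, assumed hot water temperature to outdoor HX

for t_ambient in (20.0, 30.0):       # daily swing noted in Figure 12
    dT = t_water_out - t_ambient     # simple approach temperature
    UA = Q_rack / dT                 # required conductance, W/K
    print(f"ambient {t_ambient} C -> UA ~ {UA:,.0f} W/K")
```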
COLD PLATES AND OTHER DEVICE-LEVEL COOLING
SYSTEM INTEGRATION
Cader et al. [23] focused on a liquid-cooling technology that deploys spray cooling at the device level. This technology used a liquid-cooled cold plate that replaced the traditional air-cooled heat sink on a given server's microprocessor. The liquid delivered to the cold plate heated up as it cooled the microprocessor and was then returned to a water-cooled heat exchanger called a thermal management unit (TMU) or coolant distribution unit (CDU). The TMU or CDU could be mounted in the bottom of a given rack, or at the end of a row of racks, thereby supplying coolant to all racks in that row. The other components in the server were cooled with facility air from the heating–ventilation–air-conditioning (HVAC) system.
A simple cold plate geometry used by Goth et al. [24] is shown in Figure 13. The cold plate was constructed of tubes embedded and brazed into aluminum plates. With the water-cooled cold plates, nearly all (greater than 98%) of the heat was removed through conduction paths. Water cooling can thus be effectively used for processor modules, memory DIMMs, and a number of other IT equipment items.

Figure 13 Cold plate geometry using a simple aluminum base and tubes [24]. All dimensions in millimeters. ©IEEE. Reproduced by permission of IEEE. Permission to reuse must be obtained from the rightsholder.
Beaty and Schmidt [25] projected that the data-center industry is at a crossroads: should it sacrifice performance in the future to continue with air cooling, or switch to liquid cooling (where the working fluid was mostly water before 2004)? A hybrid technology using both liquid and air cooling was seen as the more viable option for the future.
Two-Phase Cooling
Choi et al. [26, 27] conducted experiments with two-phase liquid cooling. Figure 14 shows a schematic of their experimental setup. A dummy ohmic heater with a controllable electronic power supply was used as the heat source. The carbon film resistor was soldered to the bottom of a copper spreader, while its back side was covered with thermal insulation material so that the heat generated was mainly transferred to the evaporation surface on top of the copper spreader.

Figure 14 Schematic of the experimental test apparatus [26]. ©IEEE. Reproduced by permission of IEEE. Permission to reuse must be obtained from the rightsholder.

Choi et al. [26] employed a porous evaporation wick in the evaporator, as shown in Figure 15. The porous wick was saturated with liquid at the start. Once heat was applied to the porous wick through the evaporator wall, the liquid evaporated at the meniscus of the porous wick. The vapor was then transported via vapor lines to the condenser, where heat was dissipated and the vapor condensed back to liquid. The condenser was cooled by air. By measuring the inlet and outlet temperatures of the air, the heat dissipated in the loop system was estimated (n-pentane was used as the coolant).

Figure 15 Schematic of the two-phase loop cooling system [26]. Tj: temperature at the junction of the semiconductor chips. Tei: liquid temperature at the evaporator inlet. Teo: liquid temperature at the evaporator outlet. Tsc: liquid temperature at the condenser outlet. ©IEEE. Reproduced by permission of IEEE. Permission to reuse must be obtained from the rightsholder.
The results show that with heat transfer rates of 200 W, the
junction temperature (dummy ohmic heater’s temperature) remained well below 55◦ C. This performance was attributed to the
evaporator chamber, which acted as a heat spreader. As a result,
the two-phase loop system is seen as a promising candidate for
server cooling applications to transfer heat to remote locations.
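The loop heat rejection in such a rig is estimated from an air-side energy balance at the condenser, Q = ṁ·cp·(Tout − Tin). The sketch below reproduces that arithmetic with assumed airflow and temperatures; these are illustrative values, not Choi et al.'s measured data.

```python
# Estimating the heat dissipated by the loop from air-side measurements
# at the condenser: Q = m_dot_air * cp_air * (T_out - T_in).
# Airflow and temperatures are assumed values for illustration.
rho_air = 1.2              # kg/m^3, air density
cp_air = 1005.0            # J/(kg K), air specific heat
V_dot = 0.05               # m^3/s, assumed condenser airflow
T_in, T_out = 25.0, 28.4   # C, assumed air temperatures across condenser

Q = rho_air * V_dot * cp_air * (T_out - T_in)
print(f"estimated loop heat dissipation ~ {Q:.0f} W")
```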
Choi et al. [27, 28] also proposed similar designs using this kind of heat pipe. The structure of these heat pipes is shown in Figures 16 and 17. All of these designs address the same objective of increasing the contact area; the porous wick also reduces the contact resistance.

Figure 16 Section "A–A" of a different wick-based evaporator design for the two-phase loop cooling system of Figure 14 [27]. ©Elsevier. Reproduced by permission of Elsevier. Permission to reuse must be obtained from the rightsholder.

Figure 17 An alternate two-phase loop cooling system employing a porous wick in the evaporator [28]. ©Elsevier. Reproduced by permission of Elsevier. Permission to reuse must be obtained from the rightsholder.

Thome et al. [17] used two-phase flow in electronics cooling with pseudo-CPUs in parallel flow circuits, with refrigerant R134a as the coolant in their experiments. Figure 18 shows the schematic of the experimental liquid pumping cycle for cooling two microprocessors in parallel. In the figure, ME means microevaporator, LA means liquid accumulator, and SMV means stepper motor valve. The pseudo-CPUs are directly cooled by the microevaporators. The speed of the gear pump and the aperture of the SMV were used to control the flow rate through the MEs. The condenser removed the latent heat gained from the boiling process in the MEs and from a heated pipe, the latter mimicking the low heat flux auxiliary electronics of blade boards. Simulations and preliminary experiments showed that on-chip two-phase cooling was very effective, efficient, and reliable for single and multiple parallel microprocessors.

Figure 18 Schematic of the experimental equipment [17]. Redrawn, not to scale.
WASTE HEAT RECOVERY

Wu et al. [29] performed an experimental evaluation of a controlled hybrid two-phase multi-microchannel cooling and heat recovery system. To avoid the low efficiency of the traditional air cooling system, they employed direct on-chip two-phase cooling technology. Unlike other cooling systems, their solution can reuse waste heat, since the two-phase coolant can cool CPUs effectively at 60◦C.

Figure 19 is a schematic of their experimental setup. The MME is the multi-microchannel evaporator; two parallel MMEs are used for cooling the pseudo-chips, and a post-heater is used to simulate other heat-dissipating components. The loops on the right are the condensing loop and the water loop. The condensing loop removed the heat transferred to the cooling loop, raised it to a higher exergy level (temperature), and finally rejected the heat to the external water loop.

Figure 19 Schematic of the hybrid cooling cycle [29]. Redrawn, not to scale.

Wu et al. [29] further investigated several aspects such as energy savings, energetic efficiency, and controllability. Above all, their cycle showed promise in efficient energy usage, heat recovery, and controllability toward a greener data center.

Marcinichen et al. [30, 31] proposed a hybrid two-phase cooling cycle, depicted in Figure 20. The cycle can be driven interchangeably by either a liquid pump or a vapor compressor. Two parameters, however, needed to be controlled carefully: the chip temperature and the condensing pressure (condensing temperature). The chip temperature was controlled by the inlet conditions of the microevaporator (pressure, subcooling, and mass flow rate). The objective of controlling the condensing pressure was to recover the energy dissipated by the refrigerant in the condenser for heating buildings and residences, for district heating, and so on.

Figure 20 Hybrid liquid cooling system with a single-phase and a two-phase loop [31]. PCV: pressure control valve. VSC: variable-speed compressor. TCV: temperature control valve. LPR: low pressure receiver. LA: liquid accumulator. ©Elsevier. Reproduced by permission of Elsevier. Permission to reuse must be obtained from the rightsholder.

These cooling cycles can be used for microprocessors, blade servers, and clusters. Marcinichen et al. [31] simulated five cases using three different working fluids: R134a and HFO1234ze (two-phase cooling cycles) and water (single-phase cooling cycle). In addition, different internal diameters of the pipes and elbows joining the components were considered. The results showed that the liquid water cooling cycle had a larger pumping power
requirement, which was 5.5 times as high as that of the two-phase R134a cooling cycle and 4.4 times that of the two-phase HFO1234ze cooling cycle. Compared with traditional air cooling systems, the energy consumption of the data center could be reduced by as much as 50% when using a liquid pumping cycle, and by 41% when using a vapor compression cycle. The overall consumption can be reduced even further if the recovered energy is sold to a secondary application.

Marcinichen et al. [32] also performed experiments on two types of cooling cycles: an oil-free liquid pump and an oil-free vapor compressor. It was shown that with a vapor compression cycle, the power-plant efficiency can be increased further.
HEAT-PIPE-BASED SYSTEMS
Ice/Cold Water Storage with Heat Pipes
Energy storage using seasonal temperature variation offers
a possible way to configure the data-center cooling systems.
Singh et al. [33] and Wu et al. [34] used a cold storage with an
integrated heat pipe to take advantage of the daily as well as seasonal variations of ambient temperature. The ice and cold water
storage reservoir consisted of a thermally insulated underground
cabinet with a heat-pipe coupling placed at the top of the cabinet.
In this gravity-assisted heat-pipe system, the cold storage could
act only as an evaporator. Thus, the heat pipe worked as a diode,
transferring heat only from the cold storage to the ambient. The
cold storage could be cooled with ambient air only when the
ambient temperature fell below the cold storage temperature.
The heat pipe did not transfer any heat, except for losses, when
the ambient was hotter than the cold storage. The cold storage
can be designed to meet daily or even seasonal temperature variations. The chiller running time can be reduced by taking advantage of full-load operation and of the reduced electricity rates obtained by shifting the load to off-peak hours. Its use as an emergency backup for the chiller also seems promising.
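The diode behavior described above is easy to capture in a toy simulation: the storage loses heat through the heat pipe only during the hours when the ambient is colder than the storage. All values in the sketch below are illustrative assumptions, not data from Singh et al. [33] or Wu et al. [34].

```python
# Toy simulation of the thermal-diode behavior of the gravity-assisted
# heat pipe: the cold storage loses heat to ambient only when the
# ambient is colder. UA, capacity, and temperatures are assumptions.
import math

UA = 500.0       # W/K, assumed heat-pipe conductance
C = 50e6         # J/K, assumed storage thermal capacity
dt = 3600.0      # s, one-hour time step
T_store = 12.0   # C, initial storage temperature

for hour in range(24):
    # crude sinusoidal ambient: mean 10 C, swing +/- 8 C
    T_amb = 10.0 + 8.0 * math.sin(2 * math.pi * (hour - 15) / 24)
    if T_amb < T_store:                 # diode: one-way heat flow only
        T_store -= UA * (T_store - T_amb) * dt / C

print(f"storage temperature after 24 h: {T_store:.2f} C")
```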
Heat-Pipe-Based Data-Center Cooling Systems
Heat pipes can be used in two ways: heat-pipe-based air-conditioning systems [35, 36] and distributed heat-pipe systems [37–39].

Qian et al. [35] used a heat-pipe-based air-conditioning system for data-center cooling. In this work, cold ambient air was used to cool the data center indirectly, as shown in Figure 21. The system consisted of an evaporator, a condenser, a vapor pipe, and a liquid pipe. The evaporator absorbed heat from the room air by boiling liquid into vapor, and the heat was transferred to the ambient air through condensation of the vapor in the condenser. The working fluid flowed back down to the evaporator by gravity. As a result, heat generated by the IT equipment was transferred to the cold environment by a separate heat pipe.

Figure 21 Schematic of the separate heat pipe system [35]. Redrawn, not to scale.

When the outdoor temperature is lower than the room's set temperature, the data center can be cooled by the outdoor
air through this system, and the energy consumption of the
cooling system will be reduced significantly. This system can
be used as a substitute for the traditional CRAC system. Case
studies show that by incorporating the additional heat-pipe loop,
the cooling systems of the data center and the communication
base station can separately achieve energy savings of 38.9%
and 55.7%. Another advantage of this system is that it avoids
any contamination issues of bringing large quantities of outdoor
air directly into the data center, and provides a simple way to
switch back to the chiller mode. However, the system faces
the same disadvantages of an air-cooled data center. The room
heat gain from solar and auxiliary heat loads also needs to be
offset by the heat pipe, thus adding to the load in designing the
heat-pipe-based air-conditioning system.
Qian et al. [36] conducted experiments comparing R22 and R134a as working fluids in a heat-pipe air-conditioning system. The results showed that the heat-pipe system featured a low working temperature difference and high energy efficiency. Their work also indicated that the capacity of the R22 system was 19.2% higher than that of the R134a system. The optimal liquid fill ratio for both systems was about 80%.
Tian [37] employed multistage separated heat pipes in the
heat-pipe air-conditioning system as shown in Figures 22 and
23. Figure 22 shows the multistage separated heat pipes only,
while Figure 23 shows the whole system. The two-stage system
improved the heat-pipe performance as the outdoor air was used
in a counterflow configuration.
Distributed Heat-Pipe System
In Tian’s [37] analysis of data-center energy systems, the major portion of the energy consumption in the data-center cooling
occurred in the heat transfer and air-distributing processes in the
room. This was eliminated by employing the distributed heat
pipe system. In this system, which was different from the heatpipe air-conditioning system, heat pipes were built in the racks
and the condenser was cooled by cold water. Such distributed
vol. 36 no. 6 2015
heat-pipe systems avoid the disadvantages of an air-cooled data-center design.

Figure 22 Schematic of multistage separated heat pipes [37]. T1,out: temperature of the outside airflow out of the heat pipe. T1,in: temperature of the outside airflow into the heat pipe. T2,out: temperature of the inside airflow out of the heat pipe. T2,in: temperature of the inside airflow into the heat pipe. Tf1: temperature of the first-stage heat pipe. Tf2: temperature of the second-stage heat pipe. Redrawn, not to scale.

Figure 23 System of the multistage separated heat pipes.

Figure 24 shows a schematic of a distributed heat-pipe system. The LHP is a loop heat pipe heat exchanger. There are two fans, at the top and bottom of the rack. At the bottom, the return air is cooled in LHP1 and then flows through the servers. After removing heat from the servers, the air is cooled by LHP2 and flows out of the rack.

Figure 24 Configuration of the LHP rack [37]. Redrawn, not to scale.
Tian’s [37] theoretical analysis was based on energy considerations as described by Guo et al. [40]. The results showed that
using the distributed heat-pipe cooling system, the data center’s
annual average energy efficiency ratio (EER) increased from 2.6
to 5.7, and the data center’s annual average power utilization effectiveness (PUE) decreased from 1.6 to 1.35.
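The two metrics can be translated into power terms directly from their definitions (EER as cooling effect per unit of cooling-system power; PUE as total facility power over IT power). The sketch below does that arithmetic for the reported values; the 1 MW IT load is an assumed illustrative number.

```python
# Relating the reported EER and PUE improvements to power draw.
# EER: heat removed per unit of cooling-system power.
# PUE: total facility power / IT power.
P_it = 1000.0  # kW, assumed IT load (heat to be removed)

cases = (("baseline", 2.6, 1.6), ("distributed heat pipe", 5.7, 1.35))
for label, eer, pue in cases:
    P_cooling = P_it / eer   # cooling-system power draw, kW
    P_total = pue * P_it     # total facility power, kW
    print(f"{label}: cooling ~ {P_cooling:.0f} kW, total ~ {P_total:.0f} kW")
```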
Qian et al. [38] further analyzed the distributed heat-pipe system. Their analytical results showed that the thermal resistance of the computer room air handler (CRAH) was the largest contributor to the total resistance in the heat flow path. They proposed a distributed heat-pipe cooling system to replace the CRAH and the cooling air distribution system. A case study indicated that the distributed heat-pipe cooling system would yield a 26.8% energy saving.
Zheng et al. [39] conducted experimental studies on a distributed heat-pipe system. In their system, the heat pipe was built
in the rear door of the racks and was cooled by cold water. This
arrangement was able to address the localized hot-spot issues
through a proper air distribution network within the rack.
The photographs of the system employed by Zheng et al. [39]
are shown in Figure 25. The heat pipe was located within the
rack, while fans were on the rear door of the rack (Figure 25a).
The heat generated in the IT equipment was transferred to the
evaporator and then to the outside condenser through the vapor
pipe; the condenser was cooled by the chilled water (Figure
25b). Figure 26 shows a schematic of the Zheng et al. [39] system pictured in Figure 25.
The system shown in Figure 26 may be called a "cooled directly in rack" system. The heat was transferred to the heat pipe in the cabinet without any airflow leaving the rack. The cooling capacity provided by the distributed heat-pipe system was more than 60 kW, with a total EER (energy efficiency ratio) of 2.78. Energy consumption of the entire air-conditioning system was reduced by about 18% after retrofitting, under summer conditions. Further energy savings could be achieved by using the natural cold source of free outdoor air whenever possible.

Figure 25 Photographs of the data center and equipment [39].

Figure 26 Separate heat pipe system diagram [39].
COMPARISON OF DIFFERENT DATA-CENTER
COOLING STRATEGIES
Ohadi et al. [41] conducted a CFD analysis of data centers employing air, liquid, and two-phase cooling systems. According to their findings, direct liquid cooling eliminates the two least effective heat transfer processes in an air cooling system: from the heat sink to the air, and from the air to the chilled water. For the cooling fluid, water and electronics-friendly dielectric liquids are the two potential candidates. Although water has good thermal properties, it has the potential to cause catastrophic damage to electronic components if leaks occur. The dielectric fluids' thermal properties are not as good as water's, and they are also more expensive.
Ohadi et al.’s [41] results also indicated that two-phase cooling
technology produces a substantially smaller thermal resistance,
with minimal or no additional pressure drop. Table 1 shows their
comparison of the three systems for removing 85 W of heat from
the electronic equipment. The thermal resistance is between the
heat-generating processor and heat sink.
Rubenstein et al. [42] presented a comparison between a data center with a liquid cooling loop and one with a chilled air cooling system. Their liquid cooling system is shown in Figure 27. Heat from the chips was removed by the liquid and carried to the heat exchanger at the bottom of the rack; it was then transferred through the heat exchanger to the cooling tower. The air cooling system used in the comparison was a conventional system, as shown in Figure 28.

Figure 27 Data-center liquid cooling loop [42]. Redrawn, not to scale.

Figure 28 Data-center air cooling loop [42]. Redrawn, not to scale.
The analytical study by Rubenstein et al. [42] was based on a
5000-square-foot data center. The energy consumption of the IT
equipment was kept constant while comparing different cooling
systems. For air cooling, the chiller consumed the majority of
power used by the cooling system. The CRACs were the second largest energy consumers. Both of these components were
essential in the air cooling system, but were not needed in the
liquid cooling system. In the air-cooled system, 27% of the energy was used for cooling. For the hybrid cooling system (90% of the IT load removed by liquid), the power consumed by the cooling equipment dropped from 27% in the air-cooled data center to 12%.
Zhou et al. [43] compared a thermosyphon heat exchanger with an air cooling system for data-center application. They considered three cases: data centers without any cooling system, data centers with an air cooling system only, and data centers with a thermosyphon heat exchanger system using cold ambient air. Their results showed that with the thermosyphon heat exchanger, energy use was only 41% of that with air cooling under the given conditions. They projected that the annual energy consumption could be reduced by 35.4%. However, the data center can dispense with the air-conditioning system and rely on the thermosyphon heat exchanger alone only during cold winters, in places such as Beijing, China; Frankfurt, Germany; or Rochester, NY.
Table 1 A comparison between air, liquid, and two-phase cooling [41]

Coolant                   Air            Water          Dielectric fluid, FC-72   Two-phase flow, R245fa
Generated power           85 W           85 W           85 W                      85 W
Fluid inlet temperature   5◦C            62.4◦C         −4◦C                      76.5◦C
Thermal resistance        0.4–0.7 K/W    0.15–0.2 K/W   0.15–0.2 K/W              0.038–0.048 K/W
Pumping power             29 mW          57 mW          56 mW                     2.3 mW
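A simple worked example using only the Table 1 numbers: the temperature rise across the reported thermal resistance for the common 85 W load is dT = R × Q, which makes the gap between the cooling modes immediately visible.

```python
# Temperature rise across the reported thermal resistance for the
# 85 W load of Table 1 (dT = R * Q). Uses only the table's values;
# each range spans the low and high ends of the reported resistance.
Q = 85.0  # W, generated power in all four cases

cases = {
    "air":           (0.4, 0.7),
    "water":         (0.15, 0.2),
    "FC-72":         (0.15, 0.2),
    "R245fa (2-ph)": (0.038, 0.048),
}
for name, (r_lo, r_hi) in cases.items():
    print(f"{name}: dT = {r_lo*Q:.1f} to {r_hi*Q:.1f} K")
```

The two-phase case holds the device only a few kelvin above the coolant, which is why it can tolerate the high 76.5◦C inlet temperature listed in the table.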
INTEGRATED HEATING/COOLING SYSTEM FOR
BUILDING HEATING
Zimmermann et al. [44, 45] at IBM proposed a new kind of hot-water-cooled supercomputer prototype called Aquasar. Figure 29 shows a schematic of this system. There were three cycles in the system: the primary cooling cycle, the intermediate cooling cycle, and the Eidgenössische Technische Hochschule Zürich (ETH) heating grid. The primary heat exchanger transferred heat to the intermediate loop, which then transferred it through the second heat exchanger to the building heating grid. Thus, heat from the IT equipment was used for heating the building.

Inside the liquid-cooled electronic components, coolant water flowed centrally into the inlet manifold, emerged through the slot nozzle as a jet, and impinged on the microchannels below. As water has a high heat transfer coefficient and heat-carrying capacity, the inlet temperature of the water can be as high as 60◦C, with a temperature increase of about 15◦C, while the IT equipment still runs normally. This cycle was therefore called a hot water cooled system.
Figure 29 Schematic of the cooling loop [44]. ©Elsevier. Reproduced by
permission of Elsevier. Permission to reuse must be obtained from the rightsholder.
Experiments were conducted on Aquasar by Zimmermann et al. [44, 45]. Their results showed that the power usage effectiveness (PUE) and energy efficiency ratio (EER) of this hot-water-cooled data center were significantly better than those of air-cooled data centers. Further, the hot water cooling system not only eliminated the chillers but also provided hot water from Aquasar for building heating. Thus, waste heat was reused.
MINICHANNELS AND MICROCHANNELS IN
DATA-CENTER COOLING SYSTEMS
Minichannels and microchannels have been shown to significantly reduce the convective thermal resistance. They are especially suited for high heat flux removal applications, but their use in cold plates and on-chip applications is also very attractive. A description of such a system was presented by Kandlikar [46]: a single-phase or evaporating liquid is used in cold plates or directly on the chips, and the liquid distribution system is integrated at the rack level and interfaces with a secondary heat exchanger. The application of microchannels and minichannels in refrigeration equipment was also recommended [47].

A detailed description of the various cold plate types employed in electronics cooling applications was provided by Kandlikar and Hayner [48]. Manufacturing considerations were also included in the selection of specific cold plate geometries [49]. Cold plates are critical elements as data centers transition from air-cooled to liquid-cooled systems.
Ouchi et al. [4, 50] conducted experimental work on thermal management systems for data centers with liquid cooling technology, focusing mainly on direct chip cooling systems. There may, however, be a potential concern about leakage, as the majority of IT customers do not accept liquid intervention in the core part of the server racks, even when an electrically insulating (dielectric) liquid is used. Heat pipes may be more acceptable at the server level. Using a dielectric liquid such as Novec 7200 in a cold plate, as shown in Figure 30, may be more acceptable until further experiments demonstrate the safe operation of such systems in the large data-center environment.

Figure 30 Details of the meandering single-phase heat exchanger [4]. ©IEEE. Reproduced by permission of IEEE. Permission to reuse must be obtained from the rightsholder.
Figure 31 shows a schematic diagram of a system with narrow channels (minichannels) for a two-phase heat exchanger, as used by Ouchi et al. [50]. The two-phase heat exchanger consisted of a main heated channel with V-shaped grooves and auxiliary unheated channels to supply liquid to the main channel. FC-72 and Novec 7100 were used as the coolants.

In the single-phase heat exchanger, a cooling capacity of 200 W/CPU was realized when the coolant inlet temperature was lower than 20◦C at a flow rate of 0.5 L/min. With the two-phase heat exchanger, a cooling capacity of 300 W/CPU was achieved with Novec 7100 as the coolant at a flow rate of 1.0 L/min.
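A quick sanity check on the single-phase figure can be made from the sensible-heat balance Q = ρ·V̇·cp·ΔT; the FC-72-like property values below are assumptions for illustration, not data from Ouchi et al. [50].

```python
# Sanity check on the single-phase figure of 200 W/CPU at 0.5 L/min:
# estimate the coolant temperature rise from Q = rho * V_dot * cp * dT.
# FC-72-like property values are assumptions for illustration.
Q = 200.0                 # W per CPU
V_dot = 0.5 / 1000 / 60   # 0.5 L/min converted to m^3/s
rho = 1680.0              # kg/m^3, approx. FC-72 density
cp = 1100.0               # J/(kg K), approx. FC-72 specific heat

m_dot = rho * V_dot       # mass flow rate, kg/s
dT = Q / (m_dot * cp)     # coolant temperature rise, K
print(f"coolant temperature rise ~ {dT:.1f} K")
```

A rise on the order of 10–15 K is consistent with requiring a sub-20◦C inlet to keep the chip within its limits.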
Figure 31 Schematic diagram of narrow channels (minichannels) for a two-phase heat exchanger [51]. Redrawn, not to scale.

Leonard and Phillips [51] presented two ways of data-center cooling using new designs of cold plates: a heat pipe tower or a liquid coolant system. The heat pipe tower is shown in Figure 32; this tower interfaced directly with a CPU chip in a server chassis, so the heat sink was directly attached to the chip. The tower had fins located above the CPU, but variations of this concept can be used to transport heat to remote locations where the fin stacks can be fitted and ventilated.

Figure 32 Schematic of the heat pipe tower [51]. Redrawn, not to scale.
Pumped liquid systems can extract very high heat fluxes. The CPUs are placed on a cold plate, and the heat extracted from the CPU chip is delivered to the liquid flowing in the cold plate. A liquid-to-air heat exchanger located in the server finally rejects the heat to air, as shown in Figure 33.

Figure 33 Pumped liquid cooling system [51]. Redrawn, not to scale.

Brunschwiler et al. [52] presented a review with some recommendations on energy saving in data-center cooling. Their recommendations included reducing the thermal resistance and interconnect length for improved efficiency, and minimizing exergy losses due to mixing.
CONCLUDING REMARKS
Data-center cooling systems have been evolving in recent years with energy conservation as the main focus. The following major observations can be made from the literature.
a) Traditional data-center cooling systems utilizing an air cooling method are inefficient and waste a large amount of water and energy.
b) Liquid cooling (including single-phase and two-phase) is a highly recommended method suggested by a number of researchers.
c) Compared to air cooling, liquid cooling can save energy and achieve high efficiency by shortening the heat flow paths. At the same time, the problem of hot spots can be addressed relatively easily as compared to air-cooled systems.
d) Integration of heat pipes and thermosyphon systems provides significant energy savings through a drastic reduction in the thermal resistances during heat exchange between the room air and the chiller, and between the room air and the electronic components.
e) Cold plates provide a platform for transferring heat from the electronic components to the coolant. Incorporation of microchannels and minichannels leads to a significant reduction in the thermal resistances between the coolant and the electronic components. The design and integration of cold plates within the overall system are areas for further research. Further demonstration projects showing the safety and reliability of such data-center cooling systems will greatly facilitate wider acceptance of liquid-cooled cold plate systems.
f) "Free" cooling provided by the ambient environment can be used in liquid cooling systems to cool down the warm liquid; refrigeration systems may thus ultimately be abandoned. This represents a huge potential for future energy savings.
g) The waste heat from the data center can be effectively utilized for other applications such as building heating, absorption refrigeration, feedwater heating, and so on. These areas should be aggressively pursued, as they help conserve precious energy and water resources.
FUNDING
The first author gratefully acknowledges the financial support of the National Natural Science Foundation of China (51376097, 51138005) and the National Basic Research Program of China (Grant No. 2013CB228300). The second author acknowledges the support provided by the Thermal Analysis, Microfluidics and Fuel Cell Laboratory in the Mechanical Engineering Department at Rochester Institute of Technology.

REFERENCES
[1] Fakhim, B., Behnia, M., Armfield, S. W., and Srinarayana,
N., Cooling Solutions in an Operational Data Center: A
Case Study, Applied Thermal Engineering, vol. 31, pp.
2279–2291, 2011.
[2] Parida, P. R., David, M., Iyengar, M., Schultz, M., Gaynes,
M., Kamath, V., Kochuparambil, B., and Chainer, T., Experimental Investigation of Water Cooled Server Microprocessors and Memory Devices in an Energy Efficient
Chiller-Less Data Center, 28th IEEE SEMI-THERM Symposium, San Jose, CA, pp. 224–231, March 18–22, 2012.
[3] Zhou, R. L., Wang, Z. K., McReynolds, A., Bash, C. E.,
Christian, T. W., and Shih, R., Optimization and Control of
Cooling Microgrids for Data Centers, 13th IEEE ITHERM
Conference, San Diego, CA, pp. 338–343, May 30–June
1, 2012.
[4] Ouchi, M., Abe, Y., Fukagaya, M., Ohta, H., Shinmoto,
Y., Sato, M., and Iimura, K., Thermal Management Systems for Data Centers with Liquid Cooling Technique of
CPU, 13th IEEE ITHERM Conference, San Diego, CA,
pp. 790–799, May 30–June 1, 2012.
[5] Gu, L. J., Zhou, F. Q., and Meng, H., Research on Data
Center Energy Consumption and Energy Efficiency Level,
Energy of China, vol. 32, no. 11, pp. 42–45, 2010.
[6] Chiriac, V. A., and Chiriac, F., Novel Energy Recovery Systems for the Efficient Cooling of Data Centers Using Absorption Chillers and Renewable Energy Resources, 13th
IEEE ITHERM Conference, San Diego, CA, pp. 814–821,
May 30–June 1, 2012.
[7] Greenberg, S., Mills, E., Tschudi, B., Rumsey, P., and Myatt, B., Best Practices for Data Centers: Lessons Learned
from Benchmarking 22 Data Centers, 2006 ACEEE Summer Study on Energy Efficiency in Buildings, Pacific Grove,
CA, pp. 76–87, 2006.
[8] Srinarayana, N., Fakhim, B., Behnia, M., and Armfield, S.
W., Thermal Performance of an Air-Cooled Data Center
With Raised-Floor and Non-Raised-Floor Configurations,
Heat Transfer Engineering, vol. 35, no. 4, pp. 384–397,
2014. doi:10.1080/01457632.2013.828559.
[9] Breen, T. J., Walsh, E. J., Punch, J., Shah, A. J., and
Bash, C. E., From Chip to Cooling Tower Data Center Modeling: Influence of Server Inlet Temperature and
Temperature Rise Across Cabinet, 12th IEEE Intersociety
Conference, Las Vegas, NV, pp. 1–10, June 2–5, 2010.
doi:10.1109/ITHERM.2010.5501241.
[10] Sarood, O., Miller, P., Totoni, E., and Kale, L. V., Cool Load Balancing for High Performance Computing Data Centers, IEEE Transactions on Computers, vol. 61, no. 12, pp. 1752–1764, December 2012.
[11] Srinarayana, N., Fakhim, B., Behnia, M., and Armfield, S. W., A Comparative Study of Raised-Floor and Hard-Floor Configurations in an Air-Cooled Data Centre, 13th IEEE ITHERM Conference, San Diego, CA, pp. 43–50, May 30–June 1, 2012.
[12] Wilson, D., Cooling System Design for Data Centers Utilizing Containment Architecture, ASHRAE Transactions, vol. 118, no. 1, pp. 415–420, 2012.
[13] Almoli, A., Thompson, A., Kapur, N., Summers, J., Thompson, H., and Hannah, G., Computational Fluid Dynamic Investigation of Liquid Rack Cooling in Data Centres, Applied Energy, vol. 89, pp. 150–155, 2012.
[14] Don, B., and Tom, D., New Guideline for Data Center Cooling, ASHRAE Journal, vol. 45, no. 12, pp. 28–35, 2003.
[15] Maguire, L., Nakayama, W., Behnia, M., and Kondo, Y., A CFD Study on the Effect of Shrinking Box Size on Cooling Airflows in Compact Electronic Equipment—The Case of Portable Projection Display Equipment, Heat Transfer Engineering, vol. 29, no. 2, pp. 188–197, 2008. doi:10.1080/01457630701686735.
[16] Ouchi, M., Abe, Y., Fukagaya, M., Kitagawa, T., Ohta, H., Shinmoto, Y., Sato, M., and Iimura, K., New Thermal Management Systems for Data Centers, Journal of Thermal Science and Engineering Applications, vol. 4, no. 031005, 2012. doi:10.1115/1.4006478.
[17] Thome, J. R., Lamaison, N., and Marcinichen, J. B., Two-Phase Flow Control of Electronics Cooling With Pseudo-CPUs in Parallel Flow Circuits: Dynamic Modeling and Experimental Evaluation, Journal of Electronic Packaging, vol. 135, no. 030908, 2013. doi:10.1115/1.4024590.
[18] Meijer, G. I., Cooling Energy-Hungry Data Centers, Science, vol. 328, no. 5976, pp. 318–319, 2010.
[19] Iyengar, M., David, M., Parida, P., Kamath, V., Kochuparambil, B., Graybill, D., Schultz, M., Gaynes, M., Simons, R., Schmidt, R., and Chainer, T., Extreme Energy Efficiency Using Water Cooled Servers Inside a Chiller-Less Data Center, 13th IEEE ITHERM Conference, San Diego, CA, pp. 137–150, May 30–June 1, 2012.
[20] Iyengar, M., David, M., Parida, P., Kamath, V., Kochuparambil, B., Graybill, D., Schultz, M., Gaynes, M., Simons, R., Schmidt, R., and Chainer, T., Server Liquid Cooling With Chiller-Less Data Center Design to Enable Significant Energy Savings, 28th IEEE SEMI-THERM Symposium, San Jose, CA, pp. 212–224, March 18–22, 2012.
[21] David, M. P., Iyengar, M. K., Parida, P., Simons, R., Schultz, M., Gaynes, M., Schmidt, R., and Chainer, T., Experimental Characterization of an Energy Efficient Chiller-Less Data Center Test Facility with Warm Water Cooled Servers, 28th IEEE SEMI-THERM Symposium, San Jose, CA, pp. 232–238, March 18–22, 2012.
[22] David, M. P., Iyengar, M. K., Parida, P., Simons, R., Schultz, M., Gaynes, M., Schmidt, R., and Chainer, T., Impact of Operating Conditions on a Chiller-Less Data Center Test Facility with Liquid Cooled Servers, 13th IEEE ITHERM Conference, San Diego, CA, pp. 562–574, May 30–June 1, 2012.
[23] Cader, T., Sorel, V., Westra, L., and Marquez, A., Liquid
Cooling in Data Centers, ASHRAE Transactions, vol. 115,
pp. 231–241, 2009.
[24] Goth, G. F., Arvelo, A., Eagle, J., Ellsworth, M. J., Jr.,
Marston, K. C., Sinha, A. K., and Zitz, J. A., Thermal and
Mechanical Analysis and Design of the IBM Power 775
Water Cooled Supercomputing Central Electronics Complex, 13th IEEE ITHERM Conference, San Diego, CA, pp.
700–710, May 30–June 1, 2012.
[25] Beaty, D., and Schmidt, R., Back to the Future—Liquid
Cooling: Data Center Considerations, ASHRAE Journal,
vol. 46, no. 12, pp. 42–49, 2004.
[26] Choi, J., Ha, M., Lee, Y., Graham Jr., S., Kang, H., and
Co, Z. T., Thermal Management of High Density Power Servers Using a Compact Two-Phase Loop Cooling System, 29th IEEE SEMI-THERM Symposium, San Jose, CA,
pp. 29–33, March 17–21, 2013.
[27] Choi, J., Sano, W., Zhang, W., Yuan, Y., Lee, Y., Andra,
D., and Tasciuc, B., Experimental Investigation on Sintered Porous Wicks for Miniature Loop Heat Pipe Applications, Experimental Thermal and Fluid Science, vol. 51,
pp. 271–278, 2013.
[28] Choi, J., Sung, B., Kim, C., Andra, D., and Tasciuc, B.,
Interface Engineering to Enhance Thermal Contact Conductance of Evaporators in Miniature Loop Heat Pipe Systems, Applied Thermal Engineering, vol. 60, pp. 371–378,
2013.
[29] Wu, D., Marcinichen, J. B., and Thome, J. R., Experimental Evaluation of a Controlled Hybrid Two-Phase Multi-Microchannel Cooling and Heat Recovery System Driven by Liquid Pump and Vapor Compressor, International Journal of Refrigeration, vol. 36, pp. 375–389, 2013.
[30] Marcinichen, J. B., Olivier, J. A., Oliveira, V., and Thome,
J. R., A Review of On-Chip Micro-Evaporation: Experimental Evaluation of Liquid Pumping and Vapor Compression Driven Cooling Systems and Control, Applied Energy,
vol. 92, pp. 147–161, 2012.
[31] Marcinichen, J. B., Olivier, J. A., Oliveira, V., and Thome,
J. R., On-Chip Two-Phase Cooling of Data Centers: Cooling System and Energy Recovery Evaluation, Applied
Thermal Engineering, vol. 41, pp. 36–51, 2012.
[32] Marcinichen, J. B., Olivier, J. A., Lamaison, N., and
Thome, J. R., Advances in Electronics Cooling, Heat
Transfer Engineering, vol. 34, no. 5–6, pp. 434–446,
2013.
[33] Singh, R., Mochizuki, M., Mashiko, K., and Nguyen,
T., Heat Pipe Based Cold Energy Storage Systems for
Data Center Energy Conservation, Energy, vol. 36, pp.
2802–2811, 2011.
[34] Wu, X. P., Mochizuki, M., Mashiko, K., Nguyen, T., Wuttijumnong, V., Cabsao, G., Singh, R., and Akbarzadeh,
A., Energy Conservation Approach for Data Center Cooling Using Heat Pipe Based Cold Energy Storage System,
26th IEEE SEMI-THERM Symposium, Santa Clara, CA,
pp. 115–123, February 21–25, 2010.
[35] Qian, X. D., Li, Z., and Tian, H., Application of Heat Pipe
System in Data Center Cooling, 11th International Conference on Sustainable Energy Technologies, Vancouver,
BC, Canada, September 2–5, 2012.
[36] Qian, X. D., Li, Z., and Li, Z. X., Experimental Study on
Data Center Heat Pipe Air Conditioning System, Journal
of Engineering Thermophysics, vol. 33, pp. 1217–1220,
2012.
[37] Tian, H., Research on Cooling Technology for High Heat Density Data Center, Ph.D. thesis, Civil Engineering, Tsinghua University, Beijing, China, October 2012.
[38] Qian, X. D., Tian, H., Li, Z., and Li, Z. X., Entransy-Dissipation-Based Thermal Resistance Analysis and Energy Saving Design of Data Center Cooling System, Proceedings of the 3rd International Forum on Heat Transfer,
Nagasaki, Japan, no. IFHT 2012-027, November 13–15,
2012.
[39] Zheng, Y. W., Li, Z., Liu, X. H., Tong, Z., and Tu,
R., Retrofit of Air Conditioning System in Data Center Using Separate Heat Pipe System, Proceedings of
the 8th International Symposium on Heating, Ventilation
and Air Conditioning, Xi’an, China, pp. 685–694, 2013.
doi:10.1007/978-3-642-39581-9_67.
[40] Guo, Z. Y., Zhu, H. Y., and Liang, X. G., Entransy:
A Physical Quantity Describing Heat Transfer Ability,
International Journal of Heat and Mass Transfer, vol. 50, pp. 2545–2556,
2007.
[41] Ohadi, M. M., Dessiatoun, S. V., Choo, K., Pecht, M., and
Lawler, J. V., A Comparison Analysis of Air, Liquid, and Two-Phase Cooling of Data Centers, 28th IEEE SEMI-THERM Symposium, San Jose, CA, pp. 58–64, March
18–22, 2012.
[42] Rubenstein, B. A., Zeighami, R., Lankston, R., and
Peterson, E., Hybrid Cooled Data Center Using
Above Ambient Liquid Cooling, 12th IEEE Intersociety Conference, Las Vegas, NV, June 2–5, 2010.
doi:10.1109/ITHERM.2010.5501426.
[43] Zhou, F., Tian, X., and Ma, G. Y., Investigation into the Energy Consumption of a Data Center with a Thermosyphon
Heat Exchanger, Chinese Science Bulletin, vol. 56, no. 20,
pp. 2185–2190, 2011.
[44] Zimmermann, S., Meijer, I., Tiwari, M. K., Paredes, S.,
Michel, B., and Poulikakos, D., Aquasar: A Hot Water
Cooled Data Center with Direct Energy Reuse, Energy,
vol. 43, pp. 237–245, 2012.
[45] Zimmermann, S., Tiwari, M. K., Meijer, I., Paredes, S.,
Michel, B., and Poulikakos, D., Hot Water Cooled Electronics: Exergy Analysis and Waste Heat Reuse Feasibility,
International Journal of Heat and Mass Transfer, vol. 55,
pp. 6391–6399, 2012.
[46] Kandlikar, S. G., High Flux Heat Removal with
Microchannels—A Roadmap of Challenges and Opportunities, Heat Transfer Engineering, vol. 26, no. 8, pp.
5–14, 2005.
[47] Kandlikar, S. G., A Roadmap for Implementing
Minichannels in Refrigeration and Air-Conditioning
Systems—Current Status and Future Directions, Heat
Transfer Engineering, vol. 28, no. 12, pp. 973–985, 2007.
[48] Kandlikar, S. G., and Hayner, C. N., Liquid Cooled
Cold Plates for Industrial High-Power Electronic Devices—Thermal Design and Manufacturing Considerations, Heat Transfer Engineering, vol. 30, no. 12, pp. 918–930, 2009.
[49] Hayner, C. N., Steinke, M. E., and Kandlikar, S. G., Liquid
Coldplate Design—Contemporary Perspectives on Design
and Manufacturing Liquid Cooled Heat Sinks for Electronics Cooling, Begell House, Danbury, CT, 2014.
[50] Ouchi, M., Abe, Y., Fukagaya, M., Ohta, H., Shinmoto,
Y., Sato, M., and Iimura, K. I., Liquid Cooling Network
Systems for Energy Conservation in Data Centers, ASME
2011 Pacific Rim Technical Conference, Portland, OR, pp.
1–7, July 6–8, 2011.
[51] Leonard, P. L., and Phillips, A. L., The Thermal Bus Opportunity—A Quantum Leap in Data Center Cooling Potential, ASHRAE Transactions, vol. 111, no. 2, pp. 732–745, 2005.
[52] Brunschwiler, T., Meijer, G. I., and Paredes, S., Direct
Waste Heat Utilization from Liquid-Cooled Supercomputers, 14th International Heat Transfer Conference, Washington, DC, pp. 1–12, August 8–13, 2010.
Zhen Li is an associate professor in the Department of Engineering Mechanics, Tsinghua University. He received his bachelor's degree from Tsinghua University in 1997 and his Ph.D. degree in 2005. He has worked in the areas of heat transfer, desiccant cooling systems, liquid desiccants, heat pipes, and high-performance cooling technologies for data centers. He has published more than 100 journal and conference papers. He was selected for the New Century Talent Supporting Project of the Ministry of Education of China. He is currently working on a project sponsored by the NSFC of China on data-center cooling systems using separated heat pipes, and a project sponsored by the Ministry of Science and Technology of China on steel plant waste heat dehumidification technology.
Satish G. Kandlikar is the Gleason Professor of Mechanical Engineering at Rochester Institute of Technology (RIT). He received his Ph.D. degree from the Indian Institute of Technology Bombay in 1975 and was a faculty member there before coming to RIT in 1980. He has worked extensively in the areas of flow boiling heat transfer and critical heat flux (CHF) phenomena at the microscale, single-phase flow in microchannels, high-heat-flux chip cooling, and water management in PEM fuel cells. He has published more than 200 journal and conference papers. He is a fellow of ASME and a former associate editor of the ASME Journal of Heat Transfer. He received RIT's Eisenhart Outstanding Teaching Award in 1997, the Trustees Outstanding Scholarship Award in 2006, the 2008 Rochester Engineer of the Year award from the Rochester Engineering Society, and the 2012 ASME Heat Transfer Memorial Award. He is currently working on Department of Energy (DOE)- and GM-sponsored projects on fuel cell water management under freezing conditions, and on National Science Foundation (NSF)-sponsored projects on developing nanostructures for enhanced pool and flow boiling.