This publication was prepared in cooperation with TC 9.9, Mission Critical Facilities,
Technology Spaces, and Electronic Equipment.
ASHRAE has compiled this publication with care, but ASHRAE has not investigated, and
ASHRAE expressly disclaims any duty to investigate, any product, service, process, proce-
dure, design, or the like that may be described herein. The appearance of any technical data
or editorial material in this publication does not constitute endorsement, warranty, or guar-
anty by ASHRAE of any product, service, process, procedure, design, or the like. ASHRAE
does not warrant that the information in the publication is free of errors, and ASHRAE does
not necessarily agree with any statement or opinion in this publication. The entire risk of the
use of any information in this publication is assumed by the user.
No part of this book may be reproduced without permission in writing from ASHRAE,
except by a reviewer who may quote brief passages or reproduce illustrations in a review with
appropriate credit; nor may any part of this book be reproduced, stored in a retrieval system,
or transmitted in any way or by any means—electronic, photocopying, recording, or
other—without permission in writing from ASHRAE. Requests for permission should be
submitted at www.ashrae.org/permissions.
Contents
Acknowledgments
Foreword
Chapter 1—Introduction
Chapter 2—Background
Chapter 3—Load Trends and Their Application
Chapter 4—Air Cooling of Computer Equipment
Chapter 5—Liquid Cooling of Computer Equipment
References/Bibliography
Introduction to Appendices
Appendix A—Collection of Terms from Facilities and IT Industries
Appendix B—Additional Trend Chart Information/Data
Appendix C—Electronics, Semiconductors, Microprocessors, ITRS
Appendix D—Micro-Macro Overview of Datacom Equipment Packaging
Index
Acknowledgments
Representatives from the following companies participated in producing this
publication:
Dan Baer, Ken Baker, Dina Birrell, Mike Bishop, Alan Claassen, Howard Cooper,
Tom Currie, Tom Davidson, Brian Durham, Bill French, Dennis Hellmer, Magnus
Herrlin, Mark Hydeman, Charlie Johnson, Christopher Kurkjian, H.S. Liang Lands-
berg, Andy Morrison, David Moss, Greg Paustch, Dick Pressley, Terry Rodgers, Jeff
Rutt, Melik Sahraoui, Grant Smith, Vali Sorell, Fred Stack, Ben Steinberg, Robin
Steinbrecher, Jeff Trower, William Tschudi, and David Wang.
Foreword
Datacom (data processing and telecommunications) equipment technology is
advancing at a rapid pace, resulting in relatively short product cycles and an
increased frequency of datacom equipment upgrades. Since datacom facilities that
house this equipment, along with their associated HVAC infrastructure, are
composed of components that are typically built to have longer life cycles, any
modern datacom facility design needs the ability to seamlessly accommodate the
multiple datacom equipment deployments it will experience during its lifetime.
Based on the latest information from all the leading datacom equipment manu-
facturers, Datacom Power Trends and Cooling Applications, authored by ASHRAE
TC9.9 (Mission Critical Facilities, Technology Spaces, and Electronic Equipment),
provides new and expanded datacom equipment power trend charts to allow the data-
com facility designer to more accurately predict the datacom equipment loads that
the facility can expect to have to accommodate in the future as well as providing
ways of applying the trend information to datacom facility designs today.
Also included in this book is an overview of various air and liquid cooling
system options that may be considered to handle the future loads and an invaluable
appendix containing a collection of terms and definitions used by the datacom equip-
ment manufacturers, the facilities operation industry, and the cooling design and
construction industry.
Chapter 1—Introduction
PURPOSE / OBJECTIVE
The purpose of this book is to discuss datacom (data processing and telecommu-
nications) power trends at the equipment level as well as to describe how to use those
trends in making critical decisions on infrastructure (e.g., cooling system) require-
ments and the overall facility.
It is important to consider the fundamental definition of “trend,” which for this
book will be defined as the general direction in which something tends to move. The
trends referenced or presented in this book should not be taken literally but rather
considered as a general indication of both the direction and magnitude of the subject
matter. This book is intended for those in both the IT and facilities communities
who plan, design, and operate datacom facilities.
OVERVIEW OF CHAPTERS
The following is an overview of the chapters of this document:
Chapter 1—Introduction. The introduction states the purpose/objective of
the document and gives a brief overview of the upcoming chapters.
Chapter 2—Background. The five key aspects of planning a datacom facility
are discussed. In addition, a simple example is provided to show how one might use
this process in the planning stage. Finally, the use of the power density metric is
discussed.
Chapter 3—Load Trends and Their Application. This chapter contains updated
and extended datacom equipment power trend charts including the historical trends
for power dissipation of various classes of equipment. An overview is provided of
the trend evolution of the various groupings of datacom equipment from the previous
trend chart to the trend chart published in this book. There is also a discussion of
applying the load trend charts when planning the capacity of a new datacom facility
and an introduction on how to provision for that capacity.
Chapter 4—Air Cooling of Computer Equipment. Various configurations of
air cooling of computer equipment are presented. These configurations include cool-
ing equipment outside the room, cooling equipment inside the room but outside the
rack, and cooling equipment physically mounted on the rack.
Chapter 5—Liquid Cooling of Computer Equipment. This chapter provides
an introduction to the reasons behind the re-emergence of liquid cooling as a
consideration and potential solution for higher density loads, along with details on
the types of liquid used for enhanced heat transfer.
Appendices. The appendices are a collection of information included to supple-
ment the chapters of this book. Further, the appendices provide information that is
useful for those involved with datacom cooling but is not readily available or
centrally collected. For example, the appendices include cooling-related terms used
in the building design/construction and IT industries; collecting them in a single,
centralized source also emphasizes integration and collaboration between the
industries.
Chapter 2—Background
DATACOM FACILITY PLANNING
Architects and engineers will generally provide the environmental infrastruc-
ture according to existing conventions, building codes, and local conditions.
However, they are not trained to be information technology futurists, and given the
volatility of technology, an IT staff would have far more credible insight into IT
requirements for their particular organization, at least for tactical planning cycles.
Nonetheless, the IT staff can provide some insight as to what could happen in
the future, thus providing some guidance in the strategic planning of a datacom facil-
ity in terms of the amount of space required, as well as the environmental impacts
governed by systems of the future.
As the current trends indicate increasing power density loads, there is a concern
over the impact that the increase will have on how to characterize or plan for these
loads, as well as the selection of the cooling system best suited to meet the load. The
most challenging question to answer is “Who really plans the datacom facility?”
• Is it the architect/engineer?
• Is it planned by the IT department based on forecast of future datacom appli-
cations growth?
• Is it planned by the facilities department once they are given the amount and
type of equipment from the IT department?
• Is it the owner/developer of the facility based on financial metrics?
• Is it a joint decision amongst all of the parties listed above?
Unfortunately, for many companies the planning process for the growth of data-
com facilities or the building of new datacom facilities is not a well-documented
process. The purpose of this book is to focus on the power trends of datacom equip-
ment and also briefly outline a process for arriving at the floor space required and,
hopefully, take some of the confusion out of the process.
Each datacom facility is unique and each company utilizes different applica-
tions, thereby resulting in a different set of hardware; thus the personalities of data-
com facilities vary quite dramatically. The space occupied by the hardware of one
specific datacom facility is shown below:
[Figure: floor space breakdown by component for one datacom facility; e.g., columns occupy 1.0% of the space.]
The key point of this breakdown is that many components make up the space
required for a datacom facility. The focus is often on the servers, but a holistic
view must be maintained in developing the space required, and all the elements
must be included.
The hardware that makes up the datacom facility should not be the initial focus
for planning a datacom facility. Although the hardware physically occupies the
space on the datacom facility floor, the software does all the work. Therefore,
planning should begin with an understanding of the applications the business
needs to run, both now and in the future. Application
capacity drives hardware acquisition, which, in turn, drives floor space and energy
requirements.
The plan for space in the current datacom facility or in planning a new datacom
facility should consider the following five aspects:
1. Existing applications floor space
2. Performance growth of technology based on footprint
3. Processing capability compared to storage capability
5. Asset turnover
Each IT organization has its own roadmap and a rate of hardware renewal.
Slower turnover means that more floor space will be required to support the growth
in applications that might be required. Faster turnover would allow more computing
power to exist in the current space taken up by older, lower performing equipment.
Of course, the newer equipment in general generates more heat and requires
more power for the same footprint, and this is the issue being addressed in this
book. The increase in transactions per watt of energy used (i.e., greater processing
efficiency) is not offsetting the increase in technology compaction (i.e., more
processing capacity for a given packaging), and the result is more power dissipated
per equipment footprint.
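To make that reasoning concrete, the illustrative sketch below uses assumed per-generation factors (2x more processing capacity per footprint, 1.5x more transactions per watt; both numbers are hypothetical, not taken from the trend charts) to show why power per footprint still rises:

```python
# Illustrative only: assumed generational factors, not measured trend data.
compaction_per_gen = 2.0   # assumed: 2x processing capacity per footprint per generation
efficiency_per_gen = 1.5   # assumed: 1.5x transactions per watt per generation

# Power per footprint scales with capacity growth divided by efficiency growth.
power_growth = compaction_per_gen / efficiency_per_gen
print(f"Power per equipment footprint grows ~{power_growth:.2f}x per generation")
```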
In addition to the datacom equipment itself, the room also houses power distribution units (PDUs) and
chilled water CRAC units and has some ancillary space (cross aisles, spare parts stor-
age, etc.). For the purposes of this example, we shall consider two baseline
scenarios, which are summarized in Table 2.1 along with their cooling load impact.
Note: This breakdown is not intended to encompass every datacom facility since
each facility is unique.
Total current cooling load based on Table 2.1 would be around 125,000 watts
(35 tons) for Scenario 1, which equates to an average of around 25 watts per square
foot (considered over the 5,000 ft2 gross floor area of the datacom equipment room).
For Scenario 2, it would be approximately 175,000 watts (50 tons), which equates
to around 35 watts per square foot.
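The arithmetic behind these figures is straightforward; a minimal sketch, using the standard conversion of 3,517 watts per ton of refrigeration, reproduces the Scenario 1 and 2 values:

```python
def cooling_metrics(load_watts, gross_area_ft2):
    """Return (tons of refrigeration, average watts per square foot)."""
    tons = load_watts / 3517.0          # 1 ton of refrigeration = 3,517 W
    w_per_ft2 = load_watts / gross_area_ft2
    return tons, w_per_ft2

for name, load in [("Scenario 1", 125_000), ("Scenario 2", 175_000)]:
    tons, wpsf = cooling_metrics(load, 5_000)
    print(f"{name}: ~{tons:.1f} tons, ~{wpsf:.0f} W/ft2")
# Scenario 1: ~35.5 tons, 25 W/ft2; Scenario 2: ~49.8 tons, 35 W/ft2
```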
Now consider potential projections of future equipment deployments for each scenario.
Figure 2-2 provides a graphical summary for the two scenarios. It shows that,
although the overall datacom facility sees a relatively small increase in average
power density (15 watts per square foot for Scenario 1 and 25 watts per square
foot for Scenario 2), the maximum power density for a localized area with the
new servers is considerably higher in both scenarios (200 watts per square foot)
compared to the older server equipment.
This increased maximum density for the new servers results in the need for care-
ful consideration with regard to the cooling and power distribution to these areas.
Here we have emphasized that planning the floor space required for a datacom
facility involves many aspects, and a holistic view needs to be taken. We have
attempted to address the factors that are relevant in planning the amount of floor
space required for the datacom facility. Once these allocations are made for the vari-
ous pieces of equipment, then the other aspects of the infrastructure need to be
assessed, including power distribution capabilities and cooling capabilities.
As will be seen in the next chapter, the power density of some equipment types
is significant and increasing rapidly. These factors may cause the design team to
examine other cooling options, such as expansion of the facility area to decrease
the heat density (which has to be weighed against the cost of the expansion).
As a result, some are pushing for the more precise kW per rack metric. This
metric is based on approximating the load per rack and then estimating the
population of racks within the facility to obtain an overall load. Although there is
some logic to kW per rack, since it more accurately defines a specific heat load over
a given footprint (although this footprint has been known to vary in size as well),
there remain obstacles to overcome in establishing the actual value(s).
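As a sketch of how the kW-per-rack method is applied in practice (the rack counts and per-rack loads below are hypothetical planning inputs, not recommendations):

```python
# Hypothetical rack population for illustrating the kW-per-rack method.
rack_plan = {
    "compute servers": (8.0, 40),   # (estimated kW per rack, rack count)
    "storage":         (4.0, 15),
    "network":         (3.0, 5),
}

# Overall load = sum over rack types of (load per rack x number of racks).
total_kw = sum(kw * count for kw, count in rack_plan.values())
print(f"Estimated overall equipment load: {total_kw:.0f} kW")  # 395 kW
```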
The first challenge to overcome is the inherent sequence of events. Often at proj-
ect inception (especially if it is a new site, new room, or major renovation) the data-
com computer equipment has not been finalized and certainly the rack configuration
remains an unknown. Therefore, the rack configuration (i.e., the equipment type and
quantity within a given rack) is estimated in order to establish a load.
Second, equipment nameplate data are often the only information provided by
the manufacturers to establish the cooling load, and using this method essentially
equates datacom equipment power load with the heat dissipation of that particular
piece of datacom equipment. However, this is not as accurate as first perceived,
since the datacom equipment manufacturers’ nameplate data are published with a
focus on regulatory safety and not heat dissipation. To overcome this discrepancy,
a standard thermal report format was introduced in ASHRAE’s Thermal Guidelines
for Data Processing Environments (ASHRAE TC 9.9 2004), and, in conformance
with the guidelines set forth in that publication, datacom equipment manufacturers
are just beginning to publish meaningful heat release data for their equipment that
allow for a more accurate load assessment.
Both the watts per square foot and the kW per rack metrics are used to calculate
a load at a point in time, but only when the values are used in conjunction with the
datacom equipment power trend charts can we begin to understand and predict how
that load could change for future datacom equipment deployments across the life
cycle of the facility.
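As a sketch of that combination (a point-in-time load plus a growth multiplier read from a trend chart; both numbers below are hypothetical):

```python
# Hypothetical: project a point-in-time load forward using a trend-chart multiplier.
today_kw_per_rack = 6.0      # assumed current measured load per rack
trend_multiplier_10yr = 2.0  # assumed ten-year growth factor read from a trend chart

future_kw_per_rack = today_kw_per_rack * trend_multiplier_10yr
print(f"Plan for ~{future_kw_per_rack:.0f} kW per rack over the facility life cycle")
```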
• Step 1 – The IT department determines the need to procure and deploy blade
servers, which represent a technology they have never used before. They
interact with the datacom equipment manufacturers and select a manufacturer
and product.
• Step 2 – The IT department obtains preliminary pricing from the manufac-
turer and submits for funding. Little or no consideration is given at this time
for additional deployment costs to augment the support or infrastructure ser-
vices (i.e., cooling). Management approves the pricing for the IT equipment
after going through the cost-benefit metrics as a part of their approval process.
• Step 3 – The datacom equipment is procured and the facilities department is
notified that new equipment is coming and the datacom equipment room must
be modified to accommodate the new deployment.
• Step 4 – The facilities department discovers the datacom equipment loads are
far beyond what they have ever cooled before. Because their past experience is
that projected loads are rarely realized, their first reaction is skepticism, and the
published loads are dismissed as grossly overstated.
• Step 5 – The facilities department asks their counterparts in other firms and
discovers that others believe these incredible loads could be real.
• Step 6 – The facilities department hires a mechanical consulting engineer and
assigns them the task to “figure out how to cool this.” No budget for this
scope was assigned previously, and management is blindsided by an additional
cost that was not considered in their previous metrics. Compounding the
difficulty of accomplishing the actual cooling is the fact that only minimal
financial resources are available to accomplish it.
IT INDUSTRY BACKGROUND
The IT industry continues to respond to client demand with its focus on more
speed, more data storage, more bandwidth, higher density, smaller footprint and
volume, greater portability, more openness, and lower cost.
The typical life cycle for a facility’s infrastructure (e.g., air handlers, pumps,
and chillers) can be 10 to 25 years, while the datacom equipment it serves is an order
of magnitude less. Further, the building itself (e.g., steel and concrete, bricks and
mortar) can have a life cycle well beyond 25 years.
A critical challenge is to initially plan and design both new construction and
renovation projects so that the investment in the building and its infrastructure is
fully realized and they do not become prematurely obsolete.
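A quick calculation, under assumed (hypothetical) life cycles within the ranges stated above, shows why this mismatch matters: the facility must absorb several complete equipment turnovers.

```python
facility_life_yrs = 20.0   # assumed building/infrastructure life (10-25 year range)
equipment_life_yrs = 2.5   # assumed datacom equipment life (~an order of magnitude less)

deployments = int(facility_life_yrs // equipment_life_yrs)
print(f"~{deployments} datacom equipment deployments over the facility's life")  # ~8
```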
Datacom equipment power trends over the past 10 years have been on a path of
rapid increase. There has also been a trend toward equipment compaction,
compounding the increases in load density (watts per square foot or watts per rack).
While power consumption is increasing, the focus on technology compaction is
causing the power per equipment footprint to increase at a more rapid rate.
Chapter 3—Load Trends and Their Application
INTRODUCTION
When appropriately applied, the “Datacom Equipment Power Trend Chart” can
be a powerful tool in considering what the future loads might be in a facility or space.
Future load is a critical component to planning, designing, constructing, and oper-
ating facilities to avoid ineffective expenditures, premature obsolescence, stranded
cost or assets, etc.
As stated in Chapter 1, it is important to consider the fundamental definition of
“trend,” which is the general direction in which something tends to move. The trends
referenced or presented in this book should not be taken literally but rather consid-
ered as a general indication of both the direction and magnitude of the subject matter.
Further, predicting future needs/loads is difficult and inherently speculative.
Although not precise, using trends to predict future needs/loads is typically far
more effective than the shortsighted approach of considering only the current
needs/loads.
The next section provides an overview of the original trend chart created by the
Thermal Management Consortium and published by the Uptime Institute (2000).
The new “Datacom Equipment Power Trend Chart” in this book is the result of direct
input from essentially the same representative Thermal Management Consortium
companies (often the very same individuals) that were used to produce the previous
trend chart and also based on recent information obtained since the publication of
the previous trend chart in 2000.
Over 20 datacom manufacturers were included in formulating this new trend
chart. Extensive interactions and iterations occurred for more than six months in
order to gain reasonable understanding and consensus among the representatives of
datacom manufacturers. Some of the features of the trend chart that were reevaluated
are as follows:
• The original trend chart was compared to the actual equipment that had been
shipped since the original publication of the chart. This review indicated that
there were servers shipped that exceeded the values predicted in the chart (an
indication that the published trends did not overstate the server loads).
• The individual trend lines or bands from the original trend chart were
reviewed for current relevance. The original trend lines were:
• Communication equipment (frames)
• Servers and disk storage systems
• Workstations (stand-alone)
• Tape storage systems
• The trend line assessment initially determined that the individual trends of
servers and disk storage systems were different and should be separated. Ulti-
mately this evolved further into changing the “Servers and Disk Storage Sys-
tems” to the following three categories:
• Storage servers
• Compute servers – 1U, blades, custom
• Compute servers – 2U and greater
• The intent of the trends is to characterize the actual heat rejection of fully
configured equipment, so by default they can all be called high density. Similar
to the servers described above, not all of the high-density communication
equipment fits within the single trend originally called “communication
equipment (frames),” and so it needed to be split. However, unlike the server
groupings, the communication equipment trends cannot easily be identified
by type of equipment. As a result, the two communication equipment trends
that are included are generically called:
• Communication equipment – extreme density
• Communication equipment – high density
• The International Technology Roadmap for Semiconductors (ITRS) publishes
trends at the semiconductor level, and those values, trends, and projections
were considered during the “Datacom Equipment Power Trend Chart” assess-
ment.
The evolution of the power trend chart contained many steps. This chapter
graphically takes the reader through those steps and arrives at a new “Datacom
Equipment Power Trend Chart.” It will also provide some description of the issues
behind the application of the new trend chart.
Appendix B provides additional formats for the trend information, such as
spreadsheet form and metric units. Versions are also provided in which each trend
is shown as a line rather than a band and in which a linear y-axis scale is
substituted for the logarithmic scale.
• The data shown in the product trend chart provide a general overview of the
actual power consumed and the actual heat dissipated by data processing and
telecommunications equipment. These trends reflect data collected from hard-
ware manufacturers for many products.
• The data represent the most probable level of power consumption, assuming a
fully configured system in the year the product was first shipped. It was also
intended that the trend lines capture those equipment categories that dissipate
the most power, but in general most of the equipment in a specific class
should fall within the bands shown.
• Finally, the intent of the trends is that they are to be used as a forecasting and
planning tool in providing guidance for the future implementation of the dif-
ferent types of hardware.
Not all products will fall within the trend lines on the chart at every point in
time. It is not the intent to show or compare individual products with the trend lines.
However, it is the intent that most equipment will fall within the parameters given
and therefore this book provides valuable planning guidance for the design and oper-
ation of future data processing and telecommunication spaces.
Figure 3-4 New power trend chart: workstations and tape storage projections.
COMMUNICATION EQUIPMENT
As mentioned earlier, the previous trend chart accounted for the measured load
at maximum configuration. For the communication equipment trend line, this repre-
sented the largest density values for all groups. However, current studies reveal that
the communication equipment actually has two distinct groupings.
The extreme density communication equipment does indeed closely follow the
trend line that was shown in the previous chart, but the trend is representative of the
most powerful communications equipment available.
Figure 3-5 New power trend chart: compute and storage servers split.
Figure 3-6 New power trend chart: compute servers’ second split.
Figure 3-7 New power trend chart: compute and storage server projection.
More recently, communication equipment technology has produced a distinct
grouping (labeled in the new trend chart as “Communication—High Density”),
introduced around 2000, that has significantly lower trend values; this new
grouping is indicated in Figure 3-8.
As before, the groupings are all projected to 2014, and for the communication
equipment trend lines, Figure 3-9 shows that particular extended projection.
Figure 3-9 New power trend chart: communication extreme and high-density
equipment projection.
Figure 3-10 New ASHRAE updated and expanded power trend chart.
Additional information related to the new trend chart can be found in Appendix
B, including SI versions of the trend chart, tabular data extracted from the trend
chart, and nonlogarithmic y-axis versions.
This combination of resources (the consortium companies and the datacom
equipment manufacturers) has unique insight into how the power consumption
of computer equipment will evolve, and the new trend chart is the culmination
of quantifying that insight into a tangible tool that can be used by the
industry. It was that insight that allowed for the evolution of the previous trend chart
to expand the groupings of the various types of equipment to result in seven trend
lines instead of the previously considered four.
By having an understanding of the group of equipment (or multiple groups of
equipment) that is to be housed within a given facility, the chart can be used to
quickly ascertain the increase in power density for that grouping over a given
product cycle timeframe.
For example, if we consider the extreme density communication equipment
group, the trend chart indicates a density of around 5,000 watts per equipment square
foot at the present time (2004). In three years’ time, the projected value increases to
7,000 watts per equipment square foot (40% higher), in five years’ time, the
predicted value is around 8,000 (60% higher), and in ten years’ time the value is
predicted to be 10,500 (110% higher).
For a datacom facility predominantly made up of extreme density communica-
tions equipment, this would mean that a holistic design would need to incorporate
provisions that would be able to accommodate the load increasing by more than a
factor of 2 over ten years and being able to accommodate phased upgrades that
would represent an increase in load of around 50% every three to five years.
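These percentages follow directly from the chart values quoted above; a small sketch reproduces them (the chart readings are approximate):

```python
# Approximate trend chart readings for extreme density communication equipment,
# in watts per square foot of equipment footprint (from the discussion above).
readings = {2004: 5_000, 2007: 7_000, 2009: 8_000, 2014: 10_500}

base = readings[2004]
for year, density in readings.items():
    print(f"{year}: {density} W/ft2 ({(density / base - 1) * 100:.0f}% above 2004)")
# 2007: 40% higher; 2009: 60% higher; 2014: 110% higher
```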
The Day 1 capacity of the datacom facility does not necessarily need to include
all of the power and cooling equipment required for the ultimate capacity, although
good design practice should make provisions for future augmentation. Those provi-
sions should include:
1. System growth components (e.g., isolation valves, additional taps, power taps).
2. Allocating the necessary spatial provisions required to accommodate the future
equipment and providing some consideration for how that future equipment would
be brought into the facility.
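One way to reason about such phased augmentation is sketched below; the Day 1 load, growth factor, and module size are all hypothetical planning inputs, not recommendations.

```python
import math

day1_load_kw = 500.0     # hypothetical Day 1 equipment load
growth_per_phase = 1.5   # ~50% load growth per three-to-five-year upgrade phase
module_kw = 250.0        # hypothetical capacity of one future cooling module

load = day1_load_kw
for phase in range(1, 4):
    load *= growth_per_phase
    modules_needed = math.ceil((load - day1_load_kw) / module_kw)
    print(f"Phase {phase}: load ~{load:.0f} kW, cumulative added modules: {modules_needed}")
```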
Chapter 4—Air Cooling of Computer Equipment
INTRODUCTION
As load densities increase, it becomes increasingly difficult to cool equipment,
especially when using air. The trends and predesign phase load calculation methods
described in the earlier chapters provide insight to establish the design criteria for
the load the facility will most likely be required to support today and in the future.
This information, combined with space geometry and other attributes,
determines the economics of using air cooling, liquid cooling, or a combination
of the two. The initial and final load densities will directly impact the economic
choice between air- and liquid-cooled solutions as well as the determination of
the optimum choice between the two. The following describes the basic types of
air cooling systems.
The cooling systems presented are limited to the systems within the datacom
room; it is not the intent to present options for central plant equipment (i.e., chillers,
drycoolers, etc.). The descriptions are not intended to be comprehensive but to
provide a sense of some of the choices.
Knowledge of these choices allows us to understand the provisions required for
a particular cooling system. These provisions are sometimes overlooked at the early
stages when considering high density load deployment but can have a significant
impact on the allocation of resources (financial, spatial, etc.).
The cooling systems presented in this chapter and in chapter 5 are categorized
into air-cooled and liquid-cooled systems. For the purposes of this book, the defi-
nitions of these categories are:
• Air Cooling – Conditioned air is supplied to the inlets of the rack / cabinet for
convection cooling of the heat rejected by the components of the electronic
equipment within the rack. It is understood that within the rack, the transport
of heat from the actual source component (e.g., CPU) within the rack itself
can be either liquid or air based, but the heat rejection medium from the rack to
the terminal cooling device outside of the rack is air.
• Liquid Cooling – Conditioned liquid (e.g., water, etc., and usually above dew
point) is channeled to the actual heat-producing electronic equipment compo-
nents and used to transport heat from that component where it is rejected via a
heat exchanger (air to liquid or liquid to liquid) or extended to the cooling ter-
minal device outside of the rack.
UNDERFLOOR DISTRIBUTION
In an underfloor distribution system, chilled air is distributed via a raised floor
plenum and is introduced into the room through perforated floor tiles (Figure 4-2)
and other openings in the raised floor (e.g., cable cutouts).
The underfloor distribution system provides flexibility in the configuration of
the computer equipment above the raised floor. In theory, if the floor fluid dynamics
are set up properly, chilled air can be delivered to any location within the room
simply by replacing a solid floor tile with a perforated tile.
In practice, pressure variations in the raised floor plenum can create a non-
uniform distribution of airflow through the perforated floor tiles, in turn causing
facility hot spots. The various factors that influence the airflow distribution
(e.g., raised floor height, open area of floor grilles) are well documented in a
paper by Patankar and Karki (2004).
The perforated tiles are located within the cold aisles, allowing chilled air to be
drawn through the front of the racks (via the electronic equipment) and discharged
at the rear of the racks in the hot aisles.
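The airflow a rack needs scales directly with its heat load; a minimal sketch using the standard sensible-heat relation for air, q(Btu/h) = 1.08 x cfm x dT(F), with an assumed 20 F rise across the equipment:

```python
def rack_airflow_cfm(load_watts, delta_t_f=20.0):
    """Airflow needed to remove a sensible load at a given air temperature rise."""
    btu_per_hr = load_watts * 3.412          # 1 W = 3.412 Btu/h
    return btu_per_hr / (1.08 * delta_t_f)   # q = 1.08 * cfm * dT for standard air

# Assumed example: a 10 kW rack with a 20 F air temperature rise front to back.
print(f"~{rack_airflow_cfm(10_000):.0f} cfm")  # roughly 1,580 cfm
```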
The warm air in the hot aisles is typically left unchanneled and is returned to the
top inlet of the computer room air-conditioning (CRAC) unit via airflow through the
room. Constricted airflow paths (e.g., low ceiling heights that increase the
impact of overhead infrastructure) can negatively affect the effectiveness of the
cooling system.
The source of the chilled air is typically from CRAC units that are located within
the datacom room (Figure 4-2); this is currently the most common data center cool-
ing method. Figures 4-3 and 4-4 show a variation of the raised floor environment
where the chilled air is provided from air-conditioning units that are located outside
of the room.
Figure 4-2 Raised floor implementation most commonly found in data centers
today using CRAC units.
Figure 4-3 Raised floor implementation using building air from a central plant.
OVERHEAD DISTRIBUTION
In an overhead distribution system, chilled air is distributed via ductwork and
is introduced into the room through diffusers supplying chilled air. The air is directed
into the cold aisles from above, vertically downward (Figure 4-5). The source of the
chilled air is cooling equipment that can be located either within or outside the data-
com room.
In general, overhead distribution systems operate at a higher static pressure than
underfloor systems and therefore inherently offer an increased ability to balance
airflows and provide uniform air distribution.
The warm air in the hot aisles is typically left unducted and is returned to the
cooling units via the room, and the potential of short-circuiting the supply air based
on the airflow patterns present in a shallow ceiling application remains a concern.
Figure 4-5 illustrates one method of overhead distribution, a technique that is
commonly found in telecom central office environments. In this example, the over-
head cold supply air is ducted to the cold aisles with the source of the cold air coming
from a centralized cooling plant located outside of the raised floor area. Alternative
schemes could supply the air using localized upflow CRAC units.
Although the arrangement in Figure 4-5 does not require a raised floor for cooling,
a raised floor may still be used for some power and/or data/fiber distribution to
avoid competing with the ductwork for ceiling space.
Figure 4-6 Raised floor implementation using a dropped ceiling as a hot air
return plenum.
Certain techniques aim to physically separate the hot and cold air in the datacom
facility to minimize mixing, as shown in Figures 4-6 and 4-7. Figure 4-6 uses a
dropped ceiling as the hot exhaust air plenum that mirrors the raised floor and is used
to channel the air back to the CRAC units.
Figure 4-7 shows baffles placed over the cold aisles. This method attempts to
ensure that the cold supply air is forced through the inlets of the datacom
equipment, and it also prevents hot exhaust air from short-circuiting into the cold
aisle at the uppermost equipment in the rack.
Of the two techniques, the dropped ceiling approach is more common. The
concern for both of these techniques is that as airflow requirements increase in the
datacom facility, some of the servers may become “starved” or “choked.” The
baffle configuration method and its associated variations raise additional
concerns because they are difficult to implement from a practical standpoint due
to fire and safety code implications.
Figures 4-8 and 4-9 show a variation of the raised floor environment that may
actually have either distribution plenums or ducts on the inlet and/or outlet of the
servers. There are already products on the market that utilize a configuration similar
to this by enclosing and extending the rack depth and having built-in fans to assist
in the movement of air through the enclosed racks.
Figure 4-8 Raised floor implementation using inlet and outlet plenums/ducts
integral to the rack.
These techniques have demonstrated some promise, but there are concerns
about racks with multiple servers, especially from different vendors. The concern for
these techniques is that as airflow requirements increase in the datacom facility,
some of the servers may become “starved” or “choked.” It is expected that computer
manufacturers will have to assess the impact of these techniques on their servers and
qualify certain and specific configurations for use in this type of application. In addi-
tion, equipment access and fire and safety implications must be assessed.
LOCAL DISTRIBUTION
Local distribution systems aim to introduce chilled air as close to the cold aisle
as possible. The source of the chilled air is localized cooling equipment that is
mounted on, above, or adjacent to the electronic equipment racks.
Typically, local distribution systems are not intended to be installed as stand-
alone equipment cooling systems but rather as supplemental cooling systems for just
the high density load racks. Because of the proximity of the local cooling unit, the
problems associated with poor airflow distribution and mixing (both supply/chilled
airstreams and return/warm airstreams) are eliminated.
Local distribution systems require that liquid (either water or refrigerant) be
piped to the cooling equipment located near the racks, and this may be of concern
to certain end users. Cooling equipment redundancy measures should also be care-
fully evaluated.
Techniques that use air cooling at or near the rack have also started to emerge
(Stahl and Belady 2001). The fundamental idea is that the closer the evaporator
or chilled heat exchanger is to the source of the heat, the more effective the
cooling of the datacom facility and the greater the capacity that may be achieved.
While this is yet to be determined, there are some interesting possibilities.
Figures 4-10 through 4-13 offer such possibilities.
Figure 4-10 shows the schematic with the evaporator or chilled heat exchanger
above the rack (mounted to the ceiling), and Figure 4-11 shows it mounted to the
rack itself; it could also be placed at the side of the rack. Figures 4-12 and 4-13
show the evaporator or heat exchanger on the exhaust and inlet side, respectively.
The preferred technique is to have the exchanger on the exhaust side to limit
condensation exposure, which is the issue with the inlet-side arrangement in
Figure 4-13. In addition, the hot aisle in Figure 4-13 is not nearly as cool as that
in Figure 4-12.
Note that these techniques offer some options for localizing the cooling, but
some flexibility may be lost in moving or swapping equipment. Also, note that for
all of the localized techniques, the use of the CRAC units and raised floor cooling
may still be required to provide general or ambient cooling of the overall room.
These are left to the user to determine.
Yet another variation of this technique is to have a heat exchanger built into the
base of a cabinet. Products utilizing this technique are being introduced in the
market. For some configurations using this technique, the airflow is completely
internal to the enclosure.
Figure 4-10 Local cooling distribution using overhead cooling units mounted to
the ceiling.
Figure 4-11 Local cooling distribution using overhead cooling units mounted to
the rack.
Figure 4-12 Local cooling via integral rack cooling units on the exhaust side of the
rack.
Figure 4-13 Local cooling via integral rack cooling units on the inlet side of the
rack.
RELIABILITY
More often than not, the reliability associated with air systems has involved
utilizing a redundancy strategy such as N+1, N+2, etc., resulting in additional CRAC
units being located in the electronic equipment room. However, reliability or
availability is more than providing redundant CRAC units, components, etc. It is
about delivering a total solution, including the verification of the performance of the
system in meeting the loads.
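At its simplest, the CRAC redundancy arithmetic looks like the sketch below (the unit capacity and room load are hypothetical); the point of this section is that such a count is necessary but not sufficient for availability.

```python
import math

def crac_units_required(load_kw, unit_capacity_kw, redundancy=1):
    """N+redundancy count: enough units to meet the load, plus spares."""
    n = math.ceil(load_kw / unit_capacity_kw)
    return n + redundancy

# Assumed example: 400 kW room load, 90 kW units, N+1 strategy.
print(crac_units_required(400.0, 90.0, redundancy=1))  # 6 units (5 + 1)
```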
ASHRAE’s Thermal Guidelines for Data Processing Environments (ASHRAE TC
9.9 2004) provided direction on measurement and monitoring points that can be used
to obtain the data required in order to verify whether the performance of the system
is as designed and can also be carried out during the testing phases to determine the
impact of “what if” scenarios such as a CRAC unit failure.
However, during the design phase, the effort to accurately predict the performance
of the system is not as simple and can require significant computational fluid
dynamics (CFD) modeling to discover the weak points and determine how the air
system will perform under various failure scenarios.
Sections in the next chapter expand upon the reliability issue as it relates to
chilled water and other liquid cooling systems.
Chapter 5—Liquid Cooling of Computer Equipment
INTRODUCTION
As discussed in the previous chapter, the cooling systems presented are cate-
gorized into air-cooled and liquid-cooled systems. As a recap, for the purposes of
this book, the definitions of these categories are:
• Air Cooling – Conditioned air is supplied to the inlets of the rack/cabinet for
convection cooling of the heat rejected by the components of the electronic
equipment within the rack. It is understood that within the rack, the transport
of heat from the actual source component (e.g., CPU) within the rack itself
can be either liquid or air based, but the heat rejection media from the rack to
the terminal cooling device outside of the rack is air.
• Liquid Cooling – Conditioned liquid (e.g., water, etc., usually above dew
point) is channeled to the actual heat-producing electronic equipment compo-
nents and used to transport heat from that component where it is rejected via a
heat exchanger (air to liquid or liquid to liquid) or extended to the cooling ter-
minal device outside the rack.
The scope of this chapter is limited to the heat rejection associated with rack/
cabinet cooling and does not include the intricacies of component- or board-level
cooling. There are various liquid cooling methods (e.g., heat pipes, thermosyphons,
etc.) used to transport heat from the source component (e.g., CPU) to a location
elsewhere, either within the packaging of the electronic equipment or another
location within the rack/cabinet itself.
For the purposes of this chapter, the liquid used to transport heat from the
electronic equipment component to another location within the packaging or the
rack is called the “transport liquid.” The liquid cooling methods considered all
require a means of rejecting heat from the transport liquid to the larger building
cooling system, and the methods for rejecting that heat are covered by the three
basic strategies discussed in this chapter:
• Heat rejection by AIR cooling the heat transport liquid from the electronic
equipment
• Heat rejection by LIQUID cooling the heat transport liquid from the elec-
tronic equipment
• Heat rejection by extending the heat transport liquid from the electronic
equipment to a location remote from the rack/cabinet
Although liquid cooling was prevalent as a means of cooling mainframe
computer systems, it later fell out of favor because newer semiconductor technologies
did not initially require it. Those technologies are now approaching limits that may
again require some form of liquid cooling.
Figures 5-2 and 5-3 depict the two possible liquid-cooled systems. Figure 5-1
shows a liquid loop internal to the rack where the exchange of heat with the room
occurs through a liquid-to-air heat exchanger. In this case the rack appears to the
client as an air-cooled rack and is classified as an air-cooled system; it is included
here to show the evolution to liquid-cooled systems. Figure 5-2 depicts a similar
liquid loop internal to the rack used to cool the electronics within the rack, but in
this case the heat exchange occurs through a liquid-to-chilled-water heat exchanger.
Typically, the liquid circulating within the rack is maintained above dew point to
eliminate any condensation concerns. Figure 5-3 depicts a design very similar to
Figure 5-2, but some of the primary liquid loop components are housed outside the
rack to permit more space within the rack for electronic components.
Observe that the name for each type of cooling method actually refers to the
primary coolant that is used to cool the computer equipment. Each option requires
Figure 5-1 Internal liquid cooling loop restricted within rack extent.
Figure 5-2 Internal liquid cooling loop with rack extents and liquid cooling loop
external to racks.
a path (pipes or hoses) for the coolant to flow and work (pump or compressor) to
force the coolant through the system. Each option includes some combination of
valves, sensors, heat exchangers, and control logic within the cooling circuit.
Several factors must be weighed when choosing the cooling methodology. Once the
priorities of the system design have been established, the “best” cooling option is
selected. Some of the relative merits/trade-offs for the three primary methodologies
follow.
FLUORINERT™
Fluorinerts™ exhibit properties that make them an attractive heat transfer
medium for data processing applications. Foremost is an ability to contact the
electronics directly (eliminating some of the intermediary heat exchange steps), as
well as the ability to transfer high heat loads (via an evaporative cooling method-
ology). This technology has containment concerns, metallurgical compatibility
exposures, and tight operating tolerances. Fluorinert™ liquids are not to be confused
with chlorofluorocarbons (CFCs), which are subject to environmental concerns.
WATER
Water is generally circulated throughout the electronic system between 15°C
and 25°C. The new ASHRAE recommendations (ASHRAE TC 9.9 2004) state that
the maximum dew point for a class 1 environment is 18°C. With this requirement the
logical design point would be to provide water to the electronics above 18°C to elim-
inate any condensation concerns.
The heat absorbed by this water is rejected either through a water-to-air heat
exchanger (Figure 5-1) or through a water-to-water heat exchanger (Figures 5-2 and
5-3), where the central plant supplies the chilled water that removes the heat. For
high-density heat loads, liquid transfer is the optimum design point for both product
design and client requirements. There are several reasons for choosing a water
cooling strategy:
• Fewer conversion losses (fewer steps between the heat load and the ultimate
heat sink): the heat transfer path is from the electronic circuit to the compo-
nent interface, to water, to central plant chilled water.
• Heat transfer capacity of water compared to air (on a volumetric basis, water
carries roughly 3,500 times as much heat as air per degree of temperature
rise; see the sketch after this list)
• Minimal acoustical concerns
• Lower operating costs
  • Cost of installation: heat rejection to air and heat rejection to water are
    similar in installed cost
  • Cost of operation: based on electrical cost, water cooling is less costly
    than air cooling
• More compact
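
To make the comparison concrete, the short sketch below (illustrative only; the
10 kW rack load and 10 K coolant temperature rise are assumed values, not
requirements from this book) computes the volumetric flow of air and of water
needed to carry the same heat load using q = rho x V x cp x delta-T:

    # Illustrative comparison of air vs. water as a heat transport medium.
    # Assumed: a 10 kW rack heat load and a 10 K coolant temperature rise.
    RHO_AIR, CP_AIR = 1.2, 1005.0        # kg/m^3 and J/(kg*K) near 20 C
    RHO_WATER, CP_WATER = 998.0, 4186.0  # kg/m^3 and J/(kg*K) near 20 C

    def volumetric_flow(q_watts, rho, cp, delta_t):
        """Flow (m^3/s) required to remove q_watts at a delta_t coolant rise."""
        return q_watts / (rho * cp * delta_t)

    q, dt = 10_000.0, 10.0
    air = volumetric_flow(q, RHO_AIR, CP_AIR, dt)        # ~0.83 m^3/s (~1,760 cfm)
    water = volumetric_flow(q, RHO_WATER, CP_WATER, dt)  # ~0.00024 m^3/s (~3.8 gpm)
    print(f"air: {air:.3f} m^3/s, water: {water:.6f} m^3/s, ratio: {air/water:.0f}x")

The roughly 3,500:1 flow ratio is the practical meaning of the heat transfer
capacity bullet above.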
REFRIGERANT
Refrigerants can be used either in a pumped loop technique or in a vapor
compression cycle. The advantages of using refrigerants are similar to those of
Fluorinerts™ in that they can contact the electronics directly without shorting them
out. This technology has containment concerns, metallurgical compatibility
exposures, and tight operating tolerances.
In most cases the refrigerant requires the liquid lines to use copper piping
instead of hose to limit the loss of refrigerant over time. In the pumped loop
methodology, the refrigerant is at a low pressure such that, when passing through an
evaporator, the liquid evaporates or passes into two-phase flow and then passes on
to the condenser, where the cycle begins again. If lower-than-ambient temperatures
are desired, a vapor compression cycle may be employed. Similar concerns exist
with this system as with the pumped loop; again, to limit refrigerant leakage, no
hoses are employed.
Clients view refrigerant as a “dry” liquid: any leak that does occur neither
damages the electronics nor causes them to fail while operating. Some clients view
this as a must in their data center and prefer refrigerant over other liquid cooling
technologies such as water.
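
As a rough illustration of why a pumped two-phase loop needs so little flow, the
sketch below compares the mass flow required for the same load when heat is
absorbed as latent heat of vaporization versus as a sensible temperature rise in
water. The 10 kW load, the 10 K water rise, and the latent heat value (about
200 kJ/kg, in the range typical of a refrigerant such as R-134a) are all assumptions
for the example:

    # Mass flow to remove 10 kW: two-phase refrigerant vs. single-phase water.
    Q = 10_000.0                 # W, assumed rack heat load
    H_FG = 200_000.0             # J/kg, assumed latent heat of vaporization
    CP_WATER, DT = 4186.0, 10.0  # J/(kg*K) and assumed sensible rise

    m_refrigerant = Q / H_FG       # ~0.05 kg/s of evaporating refrigerant
    m_water = Q / (CP_WATER * DT)  # ~0.24 kg/s of water
    print(f"refrigerant: {m_refrigerant:.3f} kg/s, water: {m_water:.3f} kg/s")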
Figure 5-4 Typical example of chilled water loop and valve architecture.
Chilled water piping must be pressure tested, fully insulated, and protected with
an effective vapor retarder. The test pressure should be applied in increments to all
sections of pipe in the computer area. Drip pans piped to an effective drain should
be placed below any valves or other components in the computer room that cannot
be satisfactorily insulated. A good-quality strainer should be installed in the inlet to
local cooling equipment to prevent control valve and heat exchanger passages from
clogging.
If cross-connections with other systems are made, possible effects on the
computer room system of the introduction of dirt, scale, or other impurities must be
addressed.
System reliability is so vital that the potential cost of system failure may justify
redundant systems, capacity, and/or components. The designer should identify
potential points of failure that could cause the system to interrupt critical data
processing applications and should provide redundant or backup systems.
It may be desirable to cross-connect chilled water or refrigeration equipment for
backup, as suggested for air-handling equipment. Redundant refrigeration may be
required, the extent of the redundancy depending on the importance of the computer
installation. In many cases, standby power for the computer room air-conditioning
system is justified.
RELIABILITY
As discussed in the previous section, a strategy for configuring the piping
system and components must be planned to achieve the desired level of reliability
or availability. This applies not only to chilled water systems but to any liquid
cooling system.
Cooling systems are as critical as electrical systems and therefore must be
planned to perform continuously during a power outage. Especially in high-density
situations, equipment temperatures can exceed their operational limits very quickly
during the time that the generators are being started, the power is being transferred,
and the cooling system is being restarted.
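
A crude energy balance shows how little ride-through time the room air alone
provides. The load and room volume below are assumed values, and the calculation
deliberately ignores the thermal mass of the equipment and the building, so it is a
worst-case sketch:

    # Worst-case air temperature rise rate if cooling stops entirely.
    # Assumed: 500 kW of IT load in a room holding 2000 m^3 of air;
    # the thermal mass of equipment and structure is ignored.
    RHO_AIR, CP_AIR = 1.2, 1005.0  # kg/m^3 and J/(kg*K)
    q_watts, room_m3 = 500_000.0, 2000.0

    air_mass = RHO_AIR * room_m3                  # ~2400 kg of air
    rise_k_per_s = q_watts / (air_mass * CP_AIR)  # ~0.21 K/s
    print(f"~{rise_k_per_s * 60:.0f} K per minute with no cooling")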
Achieving the desired continuous operation during an outage can require
certain cooling equipment to be supplied from an uninterruptible power source
(UPS). Another measure may involve standby liquid storage. In the case of chilled
water, this can be achieved through thermal storage tanks, which can provide
sufficient cooling until the cooling system is restored to full operation.
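
For chilled water, the tank volume needed for a given ride-through follows directly
from the sensible heat equation, V = Q x t / (rho x cp x delta-T). The load, restart
time, and usable temperature difference in the sketch below are assumptions chosen
only to illustrate the scale:

    # Chilled-water thermal storage sizing sketch: V = Q*t / (rho*cp*dT).
    RHO, CP = 1000.0, 4186.0  # kg/m^3 and J/(kg*K) for water
    q_watts = 1_000_000.0     # assumed 1 MW cooling load
    ride_through_s = 15 * 60  # assumed 15 minutes to restore chillers
    usable_dt = 8.0           # assumed usable K between supply and return

    energy_j = q_watts * ride_through_s
    volume_m3 = energy_j / (RHO * CP * usable_dt)  # ~27 m^3 (~7,100 gal)
    print(f"storage volume: {volume_m3:.0f} m^3")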
Where cooling towers or other configurations that require makeup water are
used, sufficient water storage on the premises should be considered. This provision
is to protect against a loss of water service to the site. Typical storage strategies for
makeup water are similar to generator fuel storage (e.g., 24, 48, 72 hours of reserve
or more) and can result in the need for very large multiple storage tanks, depending
on the scale of the installation; the impact on the site is therefore significant and
may be problematic if not planned for.
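
Makeup storage can be sized the way fuel storage is: consumption rate times reserve
hours. The sketch below derives tower evaporation from the latent heat of water
(about 1,000 Btu/lb) and then applies an allowance for drift and blowdown; the 30%
allowance, plant size, and reserve period are assumptions for illustration:

    # Cooling tower makeup water storage sizing (rule-of-thumb sketch).
    # 1 ton of cooling = 12,000 Btu/h; latent heat ~1,000 Btu/lb, so roughly
    # 12 lb/h (~1.4 gal/h) evaporates per ton served.
    EVAP_GAL_PER_TON_H = 12.0 / 8.34  # ~1.44 gal/h per ton
    BLOWDOWN_DRIFT_FACTOR = 1.3       # assumed 30% allowance over evaporation

    plant_tons = 1000.0   # assumed plant size
    reserve_hours = 48.0  # assumed reserve period (24-72 h is typical)

    makeup_gph = plant_tons * EVAP_GAL_PER_TON_H * BLOWDOWN_DRIFT_FACTOR
    storage_gal = makeup_gph * reserve_hours  # ~90,000 gal in this example
    print(f"makeup: {makeup_gph:.0f} gal/h -> storage: {storage_gal:,.0f} gal")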
There is also often a concern over the presence of liquid near electronic equip-
ment. Liquid cooling has been used effectively for many years, for example, in the
mainframe environment. As with any other design condition or parameter, it
requires effective planning, but it can be accomplished and the desired level of
reliability achieved.
References/Bibliography
REFERENCES
1. Mitchell-Jackson, J.D. 2001. Energy needs in an internet economy: A closer
look at data centers. Thesis, University of California, Berkeley, July 10.
2. ASHRAE TC 9.9. 2004. Thermal Guidelines for Data Processing Environ-
ments. Atlanta: American Society of Heating, Refrigerating and Air-Condi-
tioning Engineers, Inc.
3. The Uptime Institute. 2000. Heat Density Trends in Data Processing, Computer
Systems, and Telecommunications Equipment. White Paper.
4. Patankar, S.V., and K.C. Karki. 2004. Distribution of cooling airflow in a raised-
floor data center. ASHRAE Transactions 110 (2): 629-634.
5. Stahl, L., and C. Belady. 2001. Designing an alternative to conventional room
cooling. International Telecommunications and Energy Conference
(INTELEC), October 2001.
6. Beaty, D.L. 2004. Liquid cooling—Friend or foe? ASHRAE Transactions 110
(2): 643-652.
BIBLIOGRAPHY
1. Azar, K. 2002. Advanced cooling concepts and their challenges. Therminic—
2002. Advanced Thermal Solutions, Inc., Norwood, MA. <www.qats.com>.
2. Belady, C. 2001. Cooling and power considerations for semiconductors into the
next century (invited paper). Proceedings of the International Symposium on
Low Power Electronics and Design, August 2001.
3. Chu, R.C. 2003. The challenges of electronic cooling: Past, current and future.
Proceedings of IMECE: International Mechanical Engineering Exposition
and Congress, November 15-21, 2003, Washington D.C.
4. Garner, S.D. 1996. Heat pipes for electronics cooling applications. Electronics
Cooling Magazine.
18. Schmidt, R., and E. Cruz. 2002. Raised floor computer data center: Effect on
rack inlet temperatures of chilled air exiting both the hot and cold aisles. ITherm
Conference, June 2002.
19. Schmidt, R., and E. Cruz. 2003. Raised floor computer data center: Effect on
rack inlet temperatures when rack flow rates are reduced. Interpack Confer-
ence, July 2003, to be published.
20. Schmidt, R., and E. Cruz. 2003. Raised floor computer data center: Effect on
rack inlet temperatures when adjacent racks are removed. Interpack Confer-
ence, July 2003, to be published.
21. Schmidt, R., and E. Cruz. 2002. Raised floor computer data center: Effect on
rack inlet temperatures when high powered racks are situated amongst lower
powered racks. IMECE Conference, November 2002.
22. Schmidt, R., and E. Cruz. 2003. Clusters of high powered racks within a raised
floor computer data center: Effect of perforated tile flow distribution on rack
inlet air temperatures. IMECE Conference, November 2003, to be published.
23. Schmidt, R., K.C. Karki, K.M. Kelkar, A. Radmehr, and S.V. Patankar. 2001.
Measurements and predictions of the flow distribution through perforated
tiles in raised-floor data centers. Paper No. IPACK2001-15728, InterPack’01,
July 2001.
24. Schmidt, R., and B. Notohardjono. 2002. High-end server low-temperature
cooling. IBM J. Res. Develop. 46(6), November.
25. Ståhl, L. 2004. Cooling of high density rooms: Today and in the future.
ASHRAE Transactions 110 (1): 574-579.
26. Telcordia. 2001. GR-3028-CORE, Thermal Management in Telecommunica-
tions Central Offices. Telcordia Technologies Generic Requirements.
27. Vukovic, A. 2004. Communication network power efficiency–Assessment,
limitations and directions. Electronics Cooling Magazine, August.
28. Yamamoto, M., and T. Abe. 1994. The new energy-saving way achieved by
changing computer culture (Saving energy by changing the computer room
environment), IEEE Transactions on Power Systems, vol. 9, August.
Introduction to Appendices
One of the primary reasons behind the creation of the ASHRAE TC 9.9 technical
committee that produced this book was to provide better alignment between equip-
ment manufacturers and facility operations personnel, ensuring proper and fault-
tolerant operation within mission-critical environments in response to the steady
increase in the power density of electronic equipment.
The content of the appendices is aimed at an audience that could come from
either industry or could be other stakeholders (e.g., facility owners, developers, end
users/clients) with varying levels of technical knowledge about the two primary
industries.
These appendices fall into two primary categories:
1. Some are included to supplement the content of the chapters by providing addi-
tional related material, much like a traditional appendix.
2. Some are included to provide a central location for high-level information that
spans both the facility cooling and IT industries. This information is normally diffi-
cult to obtain without referencing multiple sources dedicated to a particular industry
or a facet of that industry, and, even then, the source may be too detailed or require
a greater knowledge level.
Therefore, some of the content of the appendices may be only indirectly related
to the content of the chapters of the book but may be appreciated by the audience for
general background information. An overview of the individual appendices is
provided below.
Appendix A—Collection of Terms. This contains a standardized list of industry-
related terms complete with high-level definitions. It is not necessarily a collection
of terms for the book, although a number of the terms used are defined, but is
intended as an easy reference.
Appendix B—Additional Trend Chart Information / Data. This contains the
trend charts in SI units as well as versions of the trend charts without logarithmic
scales for power density to provide a clearer picture as to the magnitude of the esca-
lation of the loads. Also included is a tabular version of the trend values themselves
and, finally, graphs indicating the trends using the kW per rack and W/ft2 metrics.
Appendix C—Electronics, Semiconductors, Microprocessors, ITRS. This
provides some background information on the history of the semiconductor industry
and also about the ITRS roadmap information specifically for semiconductors.
Appendix D—A Micro-Macro Overview of Datacom Equipment Packaging.
This provides a high-level graphical overview of the terminology of the packaging
and components associated with the datacom industry, with the approach beginning
at the small component level and building up to an entire facility. It is intended to
provide a simple definition in order to overcome the multiple meanings given to the
same term by the data processing and telecommunications industries.
Appendix A
Collection of Terms from
Facilities and IT Industries
Acoustics: Generally, a measure of the noise level in an environment or from a sound
source. For a point in an environment, the quantity is sound pressure level in decibels
(dB). For a sound source, the quantity is sound power level in either decibels (dB)
or bels (B). Either of these quantities may be stated in terms of individual frequency
bands or as an overall A-weighted value. Sound output typically is quantified by
sound pressure (dBA) or sound power (dB). Densely populated data and communi-
cations equipment centers may cause annoyance, affect performance, interfere with
communications, or even run the risk of exceeding OSHA noise limits (and thus
potentially causing hearing damage), and reference should be made to the appro-
priate OSHA regulations and guidelines (OSHA 1996). European occupational
noise limits are more stringent than OSHA’s and are mandated in EC Directive 2003/
10/EC (European Council 2003).
Air:
• Conditioned Air*: Air treated to control its temperature, relative humidity,
purity, pressure, and movement.
• Supply Air*: Air entering a space from an air-conditioning, heating, or
ventilating apparatus.
• Return Air: Air leaving a space and going to an air-conditioning, heating, or
ventilating apparatus.
Air-Cooled Data Center: Facility cooled by forced air transmitted by raised floor,
overhead ducting, or some other method.
Air-Cooled System: Conditioned air is supplied to the inlets of the rack/cabinet for
convective cooling of the heat rejected by the components of the electronic equip-
ment within the rack. It is understood that within the rack, the transport of heat from
the actual source component (e.g., CPU) within the rack itself can be either liquid
or air based, but the heat rejection media from the rack to the terminal cooling device
outside of the rack is air.
Air Inlet Temperature: The temperature measured at the inlet at which air is drawn
into a piece of equipment for the purpose of conditioning its components.
Air Outlet Temperature: The temperature measured at the outlet at which air is
discharged from a piece of equipment.
Backplane: A printed circuit board with connectors into which other cards are
plugged. In contrast to a system board, a backplane does not usually have many
active components on it.
Bay:
• A frame containing electronic equipment.
• A space in a rack into which a piece of electronic equipment of a certain size
can be physically mounted and connected to power and other input/output
devices.
BIOS: Basic Input / Output System. The BIOS gives the computer a built-in set of
software instructions to run additional system software during computer bootup.
Blade Server: A modular electronic circuit board, containing one, two, or more
microprocessors and memory, that is intended for a single, dedicated application
(such as serving Web pages) and that can be easily inserted into a space-saving rack
with many similar servers. One product offering, for example, makes it possible to
install up to 280 blade server modules vertically in a single floor-standing cabinet.
Blade servers, which share a common high-speed bus, are designed to create less
heat and thus save energy costs as well as space.
Btu: The abbreviation for British thermal unit, the amount of heat required to raise
the temperature of one pound of water by one degree Fahrenheit; a common measure
of the quantity of heat.
Cabinet: Frame for housing electronic equipment that is enclosed by doors and may
include vents for inlet and exhaust airflows and, in some cases, exhaust fans. Cabi-
nets generally house electronic equipment requiring additional security.
CFM: The abbreviation for cubic feet per minute, commonly used to measure the
rate of air flow in systems that move air.
Chassis: The physical framework of the computer system that houses all electronic
components, their interconnections, internal cooling hardware, and power supplies.
Client: A server system that can operate independently but has some interdepen-
dence with another server system.
Cluster: Two or more interconnected servers that can access a common storage
pool. Clustering prevents the failure of a single file server from denying access to
data and adds computing power to the network for large numbers of users.
Cold Plate: Typically an aluminum or copper metal plate mounted to electronic
components. Cold plates can have various liquids circulating within their channels.
Compute Server: Servers dedicated for computation or processing that are typi-
cally required to have greater processing power (and, hence, dissipate more heat)
than servers dedicated solely for storage (also see Server).
Condenser: Heat exchanger in which vapor is liquefied (state change) by the rejec-
tion of heat as a part of the refrigeration cycle.
Conditioned Air*: Air treated to control its temperature, relative humidity, purity,
pressure, and movement.
CPU: Central Processing Unit, also called a processor. In a computer the CPU is the
processor on an IC chip that serves as the heart of the computer, containing a control
unit, the arithmetic and logic unit (ALU), and some form of memory. It interprets and
carries out instructions, performs numeric computations, and controls the external
memory and peripherals connected to it.
Data Center Availability: Probability that a data center will be operable at a future
time (takes into account the effects of failure and repair/maintenance of the data
center).
Data Center Reliability: Probability that a data center system will be operable
throughout its mission duration (only takes into account the effects of failure of the
data center).
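
Under the common steady-state model (constant failure and repair rates, both
simplifying assumptions), the two quantities reduce to familiar formulas, sketched
below with assumed example values:

    import math

    def availability(mtbf_h, mttr_h):
        """Steady-state availability: fraction of time the system is operable."""
        return mtbf_h / (mtbf_h + mttr_h)

    def reliability(mtbf_h, mission_h):
        """Probability of zero failures over the mission (exponential model)."""
        return math.exp(-mission_h / mtbf_h)

    # Assumed: 50,000 h MTBF, 4 h MTTR, one-year (8,760 h) mission.
    print(availability(50_000, 4))     # ~0.99992
    print(reliability(50_000, 8_760))  # ~0.84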
Daughter Card: Also called daughter board, a printed circuit board that plugs into
another circuit board to provide extended feature(s). A daughter card accesses its
“parent” card’s circuitry directly through the interconnection between the boards.
• A mezzanine card is a kind of daughter card that is installed such that it lies in
the same plane but on a second level above its “parent.”
DIMM: Dual In-line Memory Module, a small circuit board that holds memory
chips. A single in-line memory module (SIMM) has a 32-bit path to the memory
chips, whereas a DIMM has a 64-bit path.
Diversity: Two definitions for diversity exist, diverse routing and diversity from
maximum.
• Systems that employ an alternate path for distribution are said to have diverse
routing. In terms of an HVAC system, it might be used in reference to an alter-
nate chilled water piping system. To be truly diverse (and of maximum bene-
fit) both the normal and alternate paths must each be able to support the entire
normal load.
• Diversity can also be defined as a ratio of maximum to actual for metrics such
as power loads. For example, the nominal power loading for a rack may be
based on the maximum configuration of components, all operating at their
maximum intensities. Diversity would take into account variations from the
maximum in terms of rack occupancy, equipment configuration, operational
intensity, etc., to provide a number that could be deemed to be more realistic.
Down Time: A period of time during which a system is not operational, due to a
malfunction or maintenance.
DRAM: Dynamic Random Access Memory is the most commonly used type of
memory in computers. A bank of DRAM memory usually forms the computer's
main memory. It is called dynamic because it needs to be refreshed periodically to
retain the data stored within.
Efficiency: The ratio of the output to the input of any system. Typically used in
relation to energy; less wasted energy denotes higher efficiency.
Equipment: Refers to, but not limited to, servers, storage products, workstations,
personal computers, and transportable computers. May also be referred to as elec-
tronic equipment or IT equipment.
Equipment Room: Data center or telecom central office room that houses computer
and/or telecom equipment. For rooms housing mostly telecom equipment, see
Telcordia GR-3028-CORE.
ESD: Electrostatic Discharge, the sudden flow of electricity between two objects
at different electrical potentials. ESD is a primary cause of integrated circuit damage
or failure.
Evaporative Condenser: Condenser in which the removal of heat from the refrig-
erant is achieved by the evaporation of water from the exterior of the condensing
surface, induced by the forced circulation of air and sensible cooling by the air.
Fan*: Device for moving air by two or more blades or vanes attached to a rotating
shaft.
• Airfoil fan: shaped blade in a fan assembly to optimize flow with less turbu-
lence.
• Axial fan: fan that moves air in the general direction of the axis about which
it rotates.
• Centrifugal fan: fan in which the air enters the impeller axially and leaves it
substantially in a radial direction.
• Propeller fan: fan in which the air enters and leaves the impeller in a direc-
tion substantially parallel to its axis.
Fan Sink: A heat sink with a fan directly and permanently attached.
Fault Tolerance: The ability of a system to respond gracefully and meet the system
performance specifications to an unexpected hardware or software failure. There are
many levels of fault tolerance, the lowest being the ability to continue operation in
the event of a power failure. Many fault-tolerant computer systems mirror all oper-
ations—that is, every operation is performed on two or more duplicate systems, so
if one fails, the other can take over.
Firmware: Software that has been encoded onto read-only memory (ROM). Firm-
ware is a combination of software and hardware. ROMs, PROMs, and EPROMs that
have data or programs recorded on them are firmware.
Flux: Amount of some quantity flowing across a given area (often a unit area perpen-
dicular to the flow) per unit time. Note: The quantity may be, for example, mass or
volume of a fluid, electromagnetic energy, or number of particles.
Heat:
• Total Heat (Enthalpy): A thermodynamic quantity equal to the sum of the
internal energy of a system plus the product of the pressure-volume work
done on the system.
h = E + pv
where h = enthalpy or total heat content, E = internal energy of the system,
p = pressure, and v = volume. For the purposes of this book, h = sensible heat +
latent heat.
• Sensible Heat: Heat that causes a change in temperature.
• Latent Heat: Change of enthalpy during a change of state.
Heat Exchanger*: Device to transfer heat between two physically separated fluids.
Heat Pipe: Also defined as a type of heat exchanger. Tubular closed chamber
containing a fluid in which heating one end of the pipe causes the liquid to vaporize
and transfer to the other end where it condenses and dissipates its heat. The liquid
that forms flows back toward the hot end by gravity or by means of a capillary wick.
Humidity Ratio: The ratio of the mass of water vapor to the mass of dry air in a
moist air sample. It is usually expressed as grams of water per kilogram of dry air
(gw/kgda) or as pounds of water per pound of dry air (lbw/lbda).
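
As a sketch of how the quantity is computed, the snippet below uses the ideal-gas
relation W = 0.622 x pw/(p - pw) together with one common Magnus-type approxi-
mation for saturation pressure; the 25°C, 50% RH state point is an assumed example:

    import math

    def saturation_pressure_pa(t_celsius):
        """Saturation vapor pressure via a Magnus-type approximation (Pa)."""
        return 611.2 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

    def humidity_ratio_g_per_kg(t_celsius, rh_percent, p_total_pa=101_325.0):
        """Grams of water vapor per kilogram of dry air."""
        p_w = (rh_percent / 100.0) * saturation_pressure_pa(t_celsius)
        return 1000.0 * 0.622 * p_w / (p_total_pa - p_w)

    print(humidity_ratio_g_per_kg(25.0, 50.0))  # ~9.9 g/kg dry air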
LAN: Local Area Network. A computer network that spans a relatively small area.
Most LANs are confined to a single building or group of buildings. However, one
LAN can be connected to other LANs over any distance via telephone lines and/or
radio waves. A system of LANs connected in this way is called a wide area network
(WAN).
Leakage Current: Refers to the small amount of current that flows (or “leaks”)
from an output device in the off state caused by semiconductor characteristics.
Liquid-Cooled System: Conditioned liquid (e.g., water, etc., usually above dew
point) is channeled to the actual heat-producing electronic equipment components
and used to transport heat from that component where it is rejected via a heat
exchanger (air to liquid or liquid to liquid) or extended to the cooling terminal device
outside of the rack.
Memory: Internal storage areas in the computer. The term memory identifies data
storage that comes in the form of silicon, and the word storage is used for memory
that exists on tapes or disks. The term memory is usually used as a shorthand for
physical memory, which refers to the actual chips capable of holding data. Some
computers also use virtual memory, which expands physical memory onto a hard
disk.
Microprocessor: A chip that contains a CPU. The terms microprocessor and CPU
are quite often used interchangeably.
Midplane: Provides a fault-tolerant connection from the blade server to the server
chassis and other components. The midplane replaces an average of nine cables typi-
cally required in rack and pedestal server configurations, eliminating excessive
cables.
Motherboard: The main circuit board of a computer. The motherboard contains the
CPU, BIOS, memory, serial and parallel ports, expansion slots, connectors for
attaching additional boards and peripherals, and the controllers required to control
those devices.
Nameplate Rating: Term used for rating according to nameplate (IEC 60950, under
clause 1.7.1): “Equipment shall be provided with a power rating marking, the
purpose of which is to specify a supply of correct voltage and frequency, and of
adequate current-carrying capacity.”
Non-Raised Floor: Facilities without a raised floor utilize overhead ducted supply
air to cool equipment. Ducted overhead supply systems are typically limited to a
cooling capacity of 100 W/ft2 (Telcordia 2001).
PCB (Printed Circuit Board): Board that contains layers of circuitry used for inter-
connecting the other components.
Point of Presence (PoP): A PoP is a place where communication services are avail-
able to subscribers. Internet service providers have one or more PoPs within their
service area that local users dial into. This may be co-located at a carrier's central
office.
Rack Power: Used to denote the total amount of electrical power being delivered
to electronic equipment within a given rack. Often expressed in kilowatts (kW), it
is often incorrectly equated with the heat dissipation from the electrical components
of the rack.
Raised Floor: Also known as access floor. Raised floors are a building system that
utilizes pedestals and floor panels to create a cavity between the building floor slab
and the finished floor where equipment and furnishings are located. The cavity can
be used as an air distribution plenum to provide conditioned air throughout the raised
floor area. The cavity can also be used for routing of power/data cabling infrastruc-
ture.
RAM: Random Access Memory, a configuration of memory cells that hold data for
processing by a computer's processor. The term random derives from the fact that
the processor can retrieve data from any individual location, or address, within
RAM.
Rated Voltage Range: The supply voltage range as declared by the manufacturer.
Rated Current: The rated current is the absolute maximum current that is required
by the unit from an electrical branch circuit.
Rated Frequency Range: The supply frequency range as declared by the manu-
facturer, expressed by its lower and upper rated frequencies.
Redundancy: “N” represents the number of pieces to satisfy the normal conditions.
Redundancy is often expressed compared to the baseline of “N”; some examples are
“N+1,” “N+2,” “2N,” and 2(N+1). A critical decision is whether “N” should repre-
sent just normal conditions or whether “N” includes full capacity during off-line
routine maintenance. Facility redundancy can apply to an entire site (backup site),
systems, or components. IT redundancy can apply to hardware and software.
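
One way to quantify the benefit: with N + R identical units of which N are required
to carry the load, system availability is the probability that at least N units are up.
The sketch below assumes independent failures and an assumed per-unit availability
of 0.99:

    from math import comb

    def system_availability(n_required, n_installed, unit_avail):
        """P(at least n_required of n_installed independent units are up)."""
        return sum(
            comb(n_installed, k)
            * unit_avail**k
            * (1 - unit_avail) ** (n_installed - k)
            for k in range(n_required, n_installed + 1)
        )

    # Assumed: 4 CRAC units carry the load, each 0.99 available.
    print(system_availability(4, 4, 0.99))  # N:   ~0.961
    print(system_availability(4, 5, 0.99))  # N+1: ~0.999
    print(system_availability(4, 6, 0.99))  # N+2: ~0.99998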
Room Load Capacity: The point at which the equipment heat load in the room no
longer allows the equipment to run within the specified temperature requirements of
the equipment. The load capacity is influenced by many factors, the primary one
being the room’s theoretical capacity. Other factors, such as the layout of the room
and load distribution, also influence the room load capacity.
Room Theoretical Capacity: The capacity of the room based on the mechanical
room equipment capacity. This is the sensible tonnage of the mechanical room for
supporting the computer or telecom room heat loads.
Router: A device that connects any number of LANs. Routers use headers and a
forwarding table to determine where packets (pieces of data divided up for transit)
go, and they use Internet Control Message Protocol (ICMP) to communicate with
each other and configure the best route between any two hosts. Very little filtering
of data is done through routers. Routers do not care about the type of data they
handle.
Server: A computer that provides some service for other computers connected to it
via a network. The most common example is a file server, which has a local disk and
services requests from remote clients to read and write files on that disk.
SRAM: Static RAM is random access memory (RAM) that retains data bits in its
memory as long as power is being supplied. SRAM provides faster access to data
and is typically used for a computer's cache memory.
STC: Sound Transmission Class. This is an acoustical rating for the reduction in
sound of an assembly. It is typically used to denote the sound attenuation properties
of building elements such as walls, floors, and ceilings. The higher the STC, the
better the sound-reducing performance of the element.
Temperature:
• Dew Point: The temperature at which water vapor has reached the saturation
point (100% relative humidity).
• Dry Bulb: The temperature of air indicated by a thermometer.
• Wet Bulb: The temperature indicated by a psychrometer when the bulb of
one thermometer is covered with a water-saturated wick over which air is
caused to flow at approximately 4.5 m/s (900 ft/min) to reach an equilibrium
temperature of water evaporating into air, where the heat of vaporization is
supplied by the sensible heat of the air.
Tonnage: The unit of measure used in air conditioning to describe the heating or
cooling capacity of a system. One ton of cooling represents the amount of heat
needed to melt one ton (2000 lb) of ice in 24 hours; 12,000 Btu/h equals one ton.
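
A quick conversion between the two usual units of datacom heat load (the 30 kW
rack is an assumed example):

    BTU_PER_H_PER_KW = 3412.14    # 1 kW = 3,412.14 Btu/h
    BTU_PER_H_PER_TON = 12_000.0  # definition of one ton

    def kw_to_tons(kw):
        """Convert a heat load in kW to tons of refrigeration."""
        return kw * BTU_PER_H_PER_KW / BTU_PER_H_PER_TON

    print(kw_to_tons(30.0))  # a 30 kW rack is ~8.5 tons of cooling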
Upflow: A type of air-conditioning system that discharges air upward, into an over-
head duct system.
Uptime:
• Uptime is a computer industry term for the time during which a computer is
operational. Downtime is the time when it isn’t operational.
• Uptime is sometimes measured as a percentage. For example, one standard
for uptime that is sometimes discussed is a goal called five 9s—that is, a
computer that is operational 99.999 percent of the time.
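
The percentage translates directly into permissible downtime, as the short sketch
below shows for a few common targets (illustrative only):

    HOURS_PER_YEAR = 8766.0  # 365.25 days

    def downtime_minutes_per_year(availability_percent):
        """Permissible downtime per year for a given availability target."""
        return (1.0 - availability_percent / 100.0) * HOURS_PER_YEAR * 60.0

    for target in (99.9, 99.99, 99.999):
        print(f"{target}% -> {downtime_minutes_per_year(target):.1f} min/year")
    # five 9s works out to roughly 5.3 minutes of downtime per year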
Valve*: A device to stop or regulate the flow of fluid in a pipe or a duct by throttling.
Velocity:
• Vector quantity: Denotes the simultaneous time rate of distance moved and
the direction of a linear motion.
• Face velocity: Velocity obtained by dividing the volumetric flow rate by the
component face area.
Virtual: Common alternative to logical, often used to refer to the artificial objects
(such as addressable virtual memory larger than physical memory) created by a
computer system to help the system control access to shared resources.
Virtual Server:
• A configuration of a World Wide Web server that appears to clients as an
independent server but is actually running on a computer that is shared by any
number of other virtual servers. Each virtual server can be configured as an
independent Web site, with its own hostname, content, and security settings.
• Virtual servers allow Internet service providers to share one computer
between multiple Web sites while allowing the owner of each Web site to use
and administer the server as though they had complete control.
Wafer: Any thin but rigid plate of solid material, especially of discoidal shape; a
term used commonly to refer to the thin slices of silicon used as starting material for
the manufacture of integrated circuits.
References
• http://www.computer-dictionary-online.org/
• http://whatis.techtarget.com/
• http://www.linktionary.com/linktionary.html
• ASHRAE Terminology of Heating, Ventilation, Air Conditioning, and Refrig-
eration
• ASHRAE Thermal Guidelines for Data Processing Environments
• OSHA. 1996. 29 CFR 1910.95: Occupational Noise Exposure.
• European Council. 2003. Directive 2003/10/EC of the European Parliament
and of the Council of 6 February 2003 on the minimum health and safety
requirements regarding the exposure of workers to the risks arising from
physical agents (noise).
• Telcordia. 2001. GR-3028-CORE. Thermal Management in Telecommuni-
cations Central Offices.
Appendix B
Additional Trend
Chart Information/Data
ASHRAE UPDATED AND EXPANDED
POWER TREND CHART—ADDITIONAL DATA
Additional versions and information related to the chart are provided for refer-
ence:
• Figure B-1 provides the complete updated and expanded trend chart in SI
units.
• Table B-1 provides the trend data in tabular form.
• Figure B-2 provides the complete updated and expanded trend chart without a
logarithmic y-axis scale to better understand the rate of change of the trends.
For this chart, the lines represent the median values of the updated and
expanded power trend chart bands shown in Chapter 3.
• Figure B-3 provides the complete updated and expanded trend chart in SI
units and without a logarithmic y-axis scale. For this chart, the lines represent
the median values of the updated and expanded power trend chart bands
shown in Figure B-1.
• Figure B-4 provides the trend chart expressed in kW per rack.
• Figure B-5 provides the trend chart expressed in watts per square foot.
Figure B-2 New ASHRAE updated and expanded power trend chart (non-log
scale, I-P units).
Figure B-3 New ASHRAE updated and expanded power trend chart (non-log
scale, SI units).
Figure B-4 New ASHRAE updated and expanded power trend chart (non-log
scale, kW per rack).
Figure B-5 New ASHRAE updated and expanded power trend chart (non-log
scale, watts per square foot).
Appendix C
Electronics, Semiconductors,
Microprocessors, ITRS
INTERNATIONAL TECHNOLOGY ROADMAP
FOR SEMICONDUCTORS (ITRS)
The International Technology Roadmap for Semiconductors (ITRS) is an
assessment of the semiconductor technology requirements. The objective of the
ITRS is to ensure advancements in the performance of integrated circuits. This
assessment, called roadmapping, is a cooperative effort of the global industry manu-
facturers and suppliers, government organizations, consortia, and universities.
The ITRS identifies the technological challenges and needs facing the semi-
conductor industry over the next 15 years. It is sponsored by the European Electronic
Component Association (EECA), the Japan Electronics and Information Technol-
ogy Industries Association (JEITA), the Korean Semiconductor Industry Associa-
tion (KSIA), the Semiconductor Industry Association (SIA), and the Taiwan
Semiconductor Industry Association (TSIA).
International SEMATECH is the global communication center for this activity.
The ITRS team at International SEMATECH also coordinates the USA region
events.
SEMICONDUCTORS
When considering the whole facility or even just a datacom room within the
facility, semiconductors and chips seem like a tiny element and of little importance
or relevance. As a result, it is common for facilities-centric people to gloss over or
ignore trends and information about semiconductors and chips. However, semicon-
ductors and chips have a major impact on the load of a datacom facility and are a crit-
ical source for predicting the loads, especially future loads.
A source of information regarding trends at the chip level is Moore’s Law
(Figure C-1). Since the chips are the primary component used in the datacom equip-
ment, the chip trends can be considered an early indicator to future trends in that
equipment.
Year / Power in Watts    2004   2005   2006   2007   2010   2013   2016
High Performance          160    170    180    190    218    251    288
Cost Performance           85     92     98    104    120    138    158
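
From the tabulated values, the implied compound annual growth rate of chip power
can be computed as a quick check on the trend (a derived statistic, not part of the
ITRS data itself):

    def cagr(first, last, years):
        """Compound annual growth rate between two values `years` apart."""
        return (last / first) ** (1.0 / years) - 1.0

    # High Performance chips: 160 W (2004) -> 288 W (2016), per the table above.
    print(f"{cagr(160.0, 288.0, 12) * 100:.1f}% per year")  # ~5.0%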
Appendix D
Micro-Macro Overview of
Datacom Equipment Packaging
INTRODUCTION
Many different definitions and terms are used to describe the various elec-
tronic components associated with a datacom facility. This section is provided as a
reference to define the terms and also to define the hierarchy and groupings associ-
ated with those terms. In the context of this book, the components are described
with a bias toward their cooling considerations.
PROCESSOR
The processor is the primary source of heat generation within a piece of elec-
tronic equipment, with surface temperatures rising to greater than 100 degrees
Celsius (212 degrees Fahrenheit). The processor typically has some means of inte-
gral cooling to transport the heat away from the chip surface. This cooling is some-
times liquid based (such as a heat pipe, which is common in laptops) but is more
often a heat sink, typically with fan assistance.
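
The value of that integral cooling can be illustrated with a simple steady-state estimate, T_case = T_air + P x R, where R is the package-to-air thermal resistance. All of the numbers in the sketch below are illustrative assumptions.

    # Steady-state chip temperature: T_case = T_air + P * R.
    # Power, air temperature, and both resistances are assumptions.
    power_w = 90.0   # assumed processor heat dissipation (W)
    t_air_c = 35.0   # assumed air temperature around the package (C)

    for label, r_c_per_w in (("bare package, ~1.0 C/W assumed", 1.0),
                             ("heat sink + fan, ~0.3 C/W assumed", 0.3)):
        t_case = t_air_c + power_w * r_c_per_w
        print(f"{label}: {t_case:.0f} C")

With these assumed values, the bare package would sit near 125 degrees Celsius, while the fan-assisted heat sink holds it near 62 degrees Celsius.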
Another term used for this component is CPU (central processing unit).
Figure D-1 shows a typical processor, and Figure D-2 shows a processor with the
heat sink/fan assembly mounted to it.
MEMORY
The memory can be thought of as the working interface between the data storage
device and the processor. These memory chips are typically installed on cards with
multiple chips per card (Figure D-3). These cards have edge connectors that allow
them to be installed in sockets mounted on the board.
BOARD
The board (or motherboard) provides interconnections between the various
components (e.g., processors, memory, input/output, etc.). Typically, the boards
themselves are fairly thin and have printed circuitry, small components, and sockets.
A typical board layout is shown in Figure D-4.
Figure D-1 Processor or CPU. Figure D-2 Processor with heat sink/fan
assembly.
SERVERS
Server Definition
In addition to the board described above, the other major components that make
up the packaging include the main storage device (hard drive), supplementary stor-
age devices for portable media (e.g., CD-ROM drives, floppy disk drives, etc.),
input/output connectors (e.g., video graphics cards, sound cards, etc.), and the power
supply. These components are all collocated with the board in a single housing, and
that housing is colloquially referred to as the server.
[Figure source: www.techtutorials.com]
Also part of the server packaging is the cooling system. For the majority of
servers, cooling involves air movement: one or more fans mounted inside the pack-
aging draw air in from the surrounding environment and channel it through and
around the server components. The air picks up, by convection, the heat generated
by those components before the server fans exhaust the warmer air back out to the
surrounding environment.
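
The airflow the server fans must move is tied directly to the heat load and the air temperature rise through the box by the sensible heat relation q = rho x V x cp x dT. The sketch below applies that relation; the 500 W load and 12 degree (Celsius) rise are illustrative assumptions.

    # Required server airflow from the sensible heat equation:
    #   V = q / (rho * cp * dT)
    # The load and temperature rise are illustrative assumptions.
    q_w = 500.0    # assumed server heat load (W)
    dt_c = 12.0    # assumed air temperature rise through the server (C)
    rho = 1.2      # density of air at standard conditions (kg/m^3)
    cp = 1006.0    # specific heat of air (J/(kg*K))

    v_m3s = q_w / (rho * cp * dt_c)
    print(f"{v_m3s:.4f} m^3/s (~{v_m3s * 2118.9:.0f} cfm)")  # ~0.035 m^3/s, ~73 cfm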
Rack-Mounted Servers
Compute Servers
Servers used for computing are available in rack-mount and custom configura-
tions. As mentioned above, compute servers housed within standard racks are typi-
cally described in terms of unit heights (1U, 2U, 3U, etc.). Typical dimensions
and sizes for standard rack-mount compute servers are shown in Figure D-6, and a
typical custom compute server is shown in Figure D-7.
Storage Servers
Storage servers vary in configuration and size by manufacturer. As with
compute servers, depending on the scale of deployment required, the configuration
may be a standard rack-mounted box of varying unit height (as depicted in
Figure D-6) or a custom stand-alone piece of equipment. Figure D-8 shows some
typical custom storage server sizes.
Blade Servers
Even the packaging density of 1U and 2U rack-mounted compute servers has
not proven to meet the ever-increasing demands of the marketplace. A new type of
compute server packaging, the blade server, entered the market in May 2001 and
threatens to spark a period of rapid change in the equipment installed in datacom
facilities. Each blade consists of a board (complete with processor, memory, and
input/output, etc.). The only other server component included in the packaging is
the hard drive, and all of these components are contained within a minimal amount
of packaging, resulting in a blade-like form factor (Figure D-9).
Server components that had previously been packaged inside tower/pedestal
and rack-mount boxes, such as fans, power supplies, etc., are still required, but these
components are now located within a chassis designed to house multiple blade
servers in a vertical, side-by-side configuration (Figure D-10). These chassis are
typically 3U to 7U tall and can house up to 24 blades.
Blade servers initially used low-power processors and compensated for low
individual performance with greatly increased density. High-performance blades are
now available, some with multiple (first two, now four) processors per blade.
Blade server equipment is the result of technology compaction, which allows
greater processing density in the same equipment volume. Greater processing
density also brings greater power density and greater heat density. This increase in
heat density has prompted the industry to address the cooling of high-density heat
loads.
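
The compounding effect of this compaction on rack heat load can be seen with simple arithmetic. In the sketch below, the per-blade power, blades per chassis, and chassis height are all illustrative assumptions.

    # Rack heat load from blade packaging (all inputs are assumptions).
    watts_per_blade = 250.0   # assumed per-blade power draw
    blades_per_chassis = 14   # assumed; some chassis hold up to 24
    chassis_height_u = 7      # assumed 7U chassis
    rack_height_u = 42        # common full-height rack

    chassis_per_rack = rack_height_u // chassis_height_u     # 6
    blades_per_rack = chassis_per_rack * blades_per_chassis  # 84
    rack_kw = blades_per_rack * watts_per_blade / 1000.0
    print(f"{blades_per_rack} blades -> {rack_kw:.1f} kW per rack")  # 21.0 kW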
Server Airflow
ASHRAE's Thermal Guidelines for Data Processing Environments (ASHRAE
2004) introduced standardized nomenclature for defining the cooling airflow paths
for server equipment. This information was to be referenced in equipment
manufacturers' literature via a standard thermal report (also introduced in that publi-
cation) in an attempt to provide meaningful data bridging the gap between the
equipment manufacturer's data and the requirements of the facility cooling system.
A diagrammatic overview of that nomenclature is given in Figure D-11.
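
As a rough illustration of the kind of data such a thermal report conveys, the sketch below defines a simple record. The field names are hypothetical stand-ins chosen for this example; the actual report format is the one defined in the ASHRAE publication.

    from dataclasses import dataclass

    @dataclass
    class ThermalReport:
        """Illustrative server thermal data record; the field names are
        hypothetical, not the format defined by ASHRAE (2004)."""
        model: str
        heat_release_w: float  # nominal heat release to the room (W)
        airflow_cfm: float     # nominal cooling airflow (cfm)
        airflow_path: str      # e.g., "front-to-rear"

    report = ThermalReport("example-1U", 450.0, 60.0, "front-to-rear")
    print(report)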
RACK
A rack can be thought of as the standard framework within which servers are
located. Racks are defined differently by the telecommunications and data process-
ing industries, but both definitions essentially refer to this framework.
In the data processing industry, racks owned by multiple companies are often
collocated in a single data center, and there is therefore an increased need for secu-
rity. In such environments, the racks may be cabinets or enclosures with lockable
doors on the front and rear to prevent unauthorized access to the equipment within
(Figure D-12).
ROWS
As mentioned earlier, the rack-level configuration is becoming increasingly
standardized around front-to-back airflow. This airflow pattern allows multiple
racks to be placed side by side, with little or no clearance, to form a row of racks
(Figure D-15).
Row length is limited chiefly by connectivity constraints between equipment.
Rows are often placed with their fronts facing each other, enabling the hot-aisle/
cold-aisle cooling method that is typically deployed (see chapter 4 for more
information).
RAISED FLOOR
The height of the raised floor varies based on its purpose as well as the physical
constraints of the building. Raised floors range from as little as 6 to 12 inches up
to 48 inches for specific applications. The raised floor also often houses some of the
local power and cooling support equipment downstream of the central plant area.
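
One reason floor height matters is that, for a given supply airflow, a shallower plenum forces higher underfloor air velocities and therefore higher pressure losses. In the rough sketch below, the total airflow and the plenum width are illustrative assumptions.

    # Mean underfloor velocity for a given airflow and plenum cross section.
    # Total airflow and plenum width are illustrative assumptions.
    airflow_m3s = 5.0     # assumed total underfloor supply airflow (m^3/s)
    plenum_width_m = 3.0  # assumed width of the supply path (m)

    for height_in in (12, 24, 48):
        height_m = height_in * 0.0254                  # inches to meters
        v = airflow_m3s / (plenum_width_m * height_m)  # mean velocity
        print(f"{height_in} in. floor -> {v:.1f} m/s") # 5.5, 2.7, 1.4 m/s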
DATACOM FACILITY
The datacom facility is composed not only of the building itself but also of the
power and cooling support components that must be located somewhere on the site.
As covered in chapter 4, the location of support equipment may vary based on a
particular preference for placing more of the power and cooling support equipment
inside versus outside the building.
The facility may also have areas allocated for administration. Figure D-16
shows a typical facility setup.
Index
BIOS 59, 67
bipolar semiconductor technology 59, 60
blade server 13, 21, 59, 67, 94, 95
blower 59
board 41, 58, 59, 62, 67, 68, 87-89, 94
broad 11
Btu 59, 71
budget 13
C
cabinet 25, 29, 36, 41, 42, 52, 57-59, 65, 68, 69
calculate 12
capability 4, 6, 42
capacity 2, 4, 6, 7, 26, 27, 36, 42, 46, 48, 67, 69-71
central office 33, 59, 63, 68
centrifugal fan 64
centrifugal pump 68
CFD 39, 59
CFM 59
chassis 59, 67, 94, 95
chilled water 7, 38, 39, 43, 45-48, 59, 62, 99
chilled water system 46, 47, 59
chillers 11, 13, 29
circuit 45, 58-60, 62, 63, 67-69, 89
classes 2
client 13, 43, 45, 60
cluster 60
CMOS 59, 60, 85, 86
CMOS electronic technology 60
cold plate 60
cold-aisle 30, 34, 98
collaboration 2, 12
communication 12, 16, 21, 23, 24, 26, 53, 60, 61, 63, 68, 83, 96
communication equipment 16, 21, 23, 26, 60, 61, 63
compaction 6, 13, 14, 25, 85, 94
company 4, 6, 7, 12, 67
compatibility 45, 46
component 15, 29, 30, 41, 45, 56, 58, 65, 66, 72, 83, 85, 87, 94
compounding 13, 14
compressors 39
compute 4, 5, 7, 16, 21-23, 52, 60, 90-92, 94
compute server 7, 60, 90-92, 94
compute-intensive 60
computer 2, 12, 25, 26, 31, 36, 42, 43, 46-48, 51-53, 59-64, 66-73
computer system availability 60
computer system reliability 60
computing industry 68
condensation 36, 43, 45, 46
condenser 46, 61, 63
conditioned air 29, 41, 57, 58, 61, 69
configuration 12, 17, 21, 30-32, 35, 45, 62, 69, 72, 89, 90, 94, 96-98
consensus 15
consortium 15, 17, 19
constraints 11, 98, 99
construction 1, 2, 12, 14, 26, 52
consumption 14, 17, 19, 26
containment 45, 46
continuous 6, 48
contractors 1
convection 29, 41, 89
conversion 45
cooling 1-4, 7, 10-13, 25-27, 29-31, 33, 36-39, 41-46, 48, 49, 51-53, 55, 57-59, 61,
63, 65-68, 71, 85, 87, 89, 94-96, 98, 99
cooling tower 61
core network or equipment 60, 61
cost 10, 13, 15, 46, 48, 85
cost benefit 13
counterflow heat exchanger 65
CPU 29, 41, 58, 61, 67, 68, 87, 88
CRAC 7, 31-34, 36, 38, 39, 61, 98, 99
cross-flow heat exchanger 65
D
data 1, 11-13, 17, 19, 23, 25, 31, 33, 39, 45, 46, 48, 51-53, 55-64, 67, 69, 70, 73, 87,
94-96, 99
data center 1, 31, 46, 51-53, 58, 61, 63, 95
data center availability 61
data center reliability 61
data processing 12, 17, 19, 39, 45, 48, 51, 52, 56, 61, 73, 94-96
datacom 1-7, 10-17, 25-27, 29-31, 33-36, 39, 42, 46, 56, 61, 65, 83, 87, 94, 98-100
daughter card 62
day 1 25-27
DDR memory 62
dehumidification 62
delivery 11, 26
density 1-3, 10, 11, 13, 14, 16, 17, 21, 23, 24, 26, 29, 36, 42, 45, 48, 51-53, 55, 66,
85, 89, 90, 94, 96
design 1, 2, 10-12, 14, 19, 25-27, 29, 39, 43, 45, 46, 49, 51, 52, 67, 72
developer 3, 11
development 6, 19
dew point 30, 41, 43, 45, 66, 71
diaphragm pump 68
DIMM 62
discrepancy 12
disk storage 16, 21
dissipation 2, 12, 17, 69, 85
distribution 4, 7, 10, 11, 25, 26, 30, 31, 33, 35-38, 47, 51, 53, 62, 69, 70, 98, 99
diversity 62, 70
domain 62
down time 63
downflow 62
DRAM 63
dry bulb 71
dry-bulb temperature 62, 66
drycoolers 29
ducts 35
E
edge equipment 61, 63
edge equipment or devices 61
efficiency 6, 11, 53, 63
electronic 13, 29, 30, 31, 36, 39, 41-43, 45, 49, 51, 52, 55, 58-60, 63, 65, 66, 68, 69,
83, 87
elements 4-6, 70
energy 1, 4, 6, 51-53, 59, 63, 64, 68, 85
engineers 1, 3, 25, 51
environment 5, 6, 11, 25, 31, 35, 42, 45, 49, 53, 57, 72, 89
equipment 1-7, 10-19, 21, 23-27, 29-31, 33, 34, 36, 38, 39, 41-43, 46-49, 51, 52, 55-
63, 65-70, 83, 87, 90, 94-96, 98, 99
equipment recommended operation range vs. manufacturer’s specifications 63
equipment room 6, 7, 11, 13, 39, 61, 63
ESD 63
ethernet 63
evaporative condenser 63
evaporator 36, 46
exhaust 34, 36, 38, 59, 89
expansion 10, 38, 39, 47, 67
experience 11, 13
extreme density 16, 21, 23, 26
F
face velocity 72
facility 1-7, 10-13, 15, 25-27, 29, 31, 33-36, 45, 46, 55, 56, 58, 69, 83, 85, 87, 95,
98-100
failure 39, 48, 60, 61, 63, 64
fan 59, 63, 64, 87, 88
fan sink 64
fault tolerance 64
firmware 64
flexibility 31, 36, 47
floor 67, 98
floor 3-7, 10, 11, 26, 31-36, 51-53, 58, 59, 62, 67, 69, 98, 99
fluid dynamics 31, 39, 52, 59
FluorinertTM 42, 45, 64
I
IEC 66, 67
impact 1, 3, 5-7, 11, 25, 26, 29, 31, 33, 36, 39, 49, 83, 86
implementation 19, 31, 32, 34, 35
inception 12
increase 3, 6, 7, 10, 12, 14, 25, 26, 29, 34, 36, 55, 94
independent 12, 72
industry 1, 2, 6, 11-13, 17, 25, 26, 30, 55, 56, 68, 69, 71, 83, 85, 94-96
information 1-3, 11-13, 15-17, 19, 25, 29, 55, 56, 60, 64, 66, 71, 75, 83, 94, 98
intakes 30
integration 2
IS 1
IT 1-4, 6, 11-13, 25, 55, 63, 67, 69
ITE 66
ITRS 16, 52, 56, 66, 83
K
KVM 66
L
LAN 66
latent heat 64
leakage current 66
liquid cooled system 66
liquid cooling 2, 29, 30, 41-44, 46, 48, 49, 51, 57
liquid-cooled blade 58
liquid-cooled board 58
liquid-cooled chip 58
liquid-cooled equipment 58
liquid-cooled rack or cabinet 58
liquid-cooled server 58
local distribution 36
loop 42-44, 46, 47
M
magnitude 1, 14, 15, 42, 46, 55
mainframe 42, 49
managers 1
manufacturers 1, 12, 13, 15, 17, 19, 36, 42, 55, 83, 96
maximum 10, 17, 19, 21, 45, 62, 69
measured 11, 17, 19, 21, 58, 65, 67, 71
measured power 67
memory 19, 45, 59, 61-64, 67, 69, 70, 72, 87, 88, 94
method 12, 31, 33-35, 43, 58, 61, 98
metrics 3, 11-13, 25, 56, 62
microprocessor 42, 67, 85
midplane 67
minimum 6, 17, 73
motherboard 67, 87, 89
N
N+1 39, 69
nameplate 11, 12, 17, 67
nameplate rating 67
net 11, 85
non-raised floor 67
O
obsolete 14
obstacles 12
ODM 67
OEM 67
operating system 67, 72
operation 1, 19, 30, 39, 46, 48, 55, 63, 64
ordinances 11
organization 3, 6, 12, 66
outline 3
overhead 30, 31, 33, 37, 52, 58, 67, 71, 99
owner 3, 26, 72
P
packaging 6, 12, 17, 41, 45, 52, 56, 85, 88-90, 94
parallel-flow heat exchanger 65
parameters 19, 45, 49
PCB 68
PDU 99
velocity 71, 72
ventilation 72
virtual 67, 72
virtual machine 72
virtual private network 72
virtual server 72
volume 13, 64, 66, 68, 94
W
wafer 72
water 7, 11, 30, 36, 38, 39, 41-43, 45-48, 52, 57, 59, 61-63, 66, 68, 71, 99
watts 7, 10-12, 14, 17, 19, 23, 25, 26, 67, 75, 81, 85
wet bulb 71
white-space 5
workload 6, 7
workstations 16, 19, 20, 23, 63, 65