|Publication number||US7647787 B2|
|Application number||US 10/830,503|
|Publication date||Jan 19, 2010|
|Filing date||Apr 22, 2004|
|Priority date||Apr 22, 2004|
|Also published as||US20050235671|
|Inventors||Christian L. Belady, Eric C. Peterson|
|Original Assignee||Hewlett-Packard Development Company, L.P.|
The present invention relates generally to the field of computer data centers, and more particularly to the field of cooling computer data centers.
Densification in data centers is becoming so extreme that the power density of the systems in the center is growing at a rate unmatched by technology developments in data center heating, ventilation, and air-conditioning (HVAC) designs. Current servers and disk storage systems generate 10,000 to 20,000 watts per square meter of footprint. Telecommunication equipment may generate two to three times the heat of the servers and disk storage systems. Liquid-cooled computers could solve this heat transfer problem; however, both end users and computer manufacturers are reluctant to transition from air-cooled computers to liquid-cooled computers. Also, there is currently no easy way to transition from an air-cooled data center to a liquid-cooled data center without a major overhaul of the data center and substantial downtime to retrofit it.
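To put these figures in perspective, the short sketch below converts the quoted power densities into total heat loads. The 100 square meter footprint is a hypothetical example for illustration, not a figure from this document.

```python
# Illustrative arithmetic using the densities quoted above; the footprint
# value is a hypothetical example, not a figure from this document.
SERVER_DENSITY_W_PER_M2 = (10_000, 20_000)  # servers and disk storage
TELECOM_MULTIPLIER = (2, 3)                 # telecom gear: 2-3x server heat

footprint_m2 = 100  # hypothetical equipment footprint

lo, hi = SERVER_DENSITY_W_PER_M2
print(f"Server heat load: {lo * footprint_m2 / 1000:.0f}-"
      f"{hi * footprint_m2 / 1000:.0f} kW")

m_lo, m_hi = TELECOM_MULTIPLIER
print(f"Telecom heat load: {lo * m_lo * footprint_m2 / 1000:.0f}-"
      f"{hi * m_hi * footprint_m2 / 1000:.0f} kW")
```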
Computer designers are continuing to invent methods that extend the air-cooling limits of individual racks of computers (or other heat-generating electronic devices). However, these high-heat-capacity racks require extraordinary amounts of air to remove the heat they dissipate, requiring large and expensive air-handling equipment.
Many modern data centers use a raised floor configured as a supply-air plenum. Large HVAC units take air from near the ceiling of the data center, chill it, and blow the cold air into the plenum under the raised floor. Vents in the floor near the servers allow cold air to be pulled up from the plenum and through the rack; the now-warm air is blown out the back of the rack, rises to the ceiling, and is eventually pulled into the HVAC units to begin the cycle anew. However, this type of system can only handle about 1600 to 2100 watts per square meter, significantly under the heat generated by many current electronic systems. Thus, the data center must contain significant amounts of empty space in order to be capable of cooling the equipment. Use of the under-floor plenum is also problematic in that airflow is often impeded by cabling and other obstructions residing in the plenum. Further, perforated tiles limit airflow from the plenum into the data center to approximately 6 cubic meters per minute per tile, well below the 60 cubic meters per minute required by some server racks; a single such rack would need the output of roughly ten tiles. Even the use of blowers to actively pull cold air from the plenum and direct it to the front of the rack is insufficient to cool many modern servers. Balancing the airflow throughout the data center is difficult and often requires substantial trial-and-error experimentation. Finally, the airflow is somewhat inefficient in that there is substantial mixing of hot and cold air in the spaces above the servers and in the aisles, resulting in a loss of efficiency and capacity.
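The mismatch follows directly from the figures quoted above; the sketch below works through the arithmetic. The mid-range rack density of 15,000 watts per square meter is an illustrative assumption.

```python
# Working through the raised-floor figures quoted above; the mid-range
# rack density (15,000 W/m2) is an illustrative assumption.
PLENUM_CAPACITY_W_PER_M2 = (1600, 2100)  # typical raised-floor handling
TILE_AIRFLOW_M3_MIN = 6                  # per perforated tile
RACK_AIRFLOW_M3_MIN = 60                 # required by some server racks

print(f"Tiles needed per dense rack: "
      f"{RACK_AIRFLOW_M3_MIN / TILE_AIRFLOW_M3_MIN:.0f}")

rack_density = 15_000  # W/m2, mid-range of the server figures above
lo, hi = PLENUM_CAPACITY_W_PER_M2
print(f"Plenum handles only {lo / rack_density:.0%}-{hi / rack_density:.0%} "
      "of that rack density, hence the need for empty floor space.")
```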
In an attempt to increase the efficiency of raised-floor plenum designs, some designers incorporate a large number of sensors throughout the data center and provision cooling either statically or dynamically, based on environmental parameters, using active dampers and other environmental controls. Others use a high-pressure cooling system in an attempt to increase the cooling capacity of the raised-floor plenum design. However, this technique still has all of the inefficiencies of any raised-floor plenum design and only increases the power-handling capacity of the data center to about 3200 watts per square meter, still below the requirements of densely packed servers or telecommunication devices.
In a desperate attempt to increase the cooling capabilities of a data center, some designers use an entire second floor to house their computer-room air conditioners (CRACs). While this allows the use of large numbers of CRACs without consuming expensive data center floor space, the second floor effectively acts as a large under-floor plenum and is subject to the same inefficiencies and limitations as the under-floor plenum design.
Other designers include air coolers within the server racks. For example, a liquid-to-air heat exchanger may be included on the back of a server rack to cool the air exiting the rack to normal room temperature. However, the airflow of the heat exchanger fans must match the airflow of the server precisely to avoid reliability and operational issues within the server. Also, by mounting the heat exchanger on the rack, serviceability of the rack is reduced, and the fluid lines attached to the rack must be disconnected before the rack may be moved. This results in less flexibility due to the presence of the liquid line and may require plumbing changes at the rack's destination. Further, this technique does not directly cool the heat-generating integrated circuits; it is simply a heat exchanger, which is not as efficient as direct liquid cooling of the integrated circuits.
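As a rough illustration of the airflow-matching constraint described above, a hypothetical commissioning check might compare the two flows against a small tolerance. The function name and the 2% tolerance are assumptions for illustration only, not anything specified in this document.

```python
# Hypothetical check of the airflow-matching constraint described above:
# the rack-mounted heat exchanger's fans must move the same air volume as
# the server's own fans. The 2% tolerance is an illustrative assumption.
def airflow_matched(server_m3_min: float, exchanger_m3_min: float,
                    tolerance: float = 0.02) -> bool:
    """Return True if exchanger airflow is within `tolerance` of server airflow."""
    return abs(exchanger_m3_min - server_m3_min) <= tolerance * server_m3_min

print(airflow_matched(60.0, 60.5))  # True: within 2% of the server's flow
print(airflow_matched(60.0, 55.0))  # False: mismatch risks operational issues
```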
Another possibility is the use of overhead cooling, which may offer cooling densities on the order of 8600 watts per square meter. However, such overhead devices require a high ceiling that also must be strong enough to support the coolers. Also, in such a design there is no easy migration route from air-cooled to liquid-cooled servers, and some users are concerned that leaks from the overhead coolers could drip onto, and possibly damage, their servers.
A data center is configured using alternating rows of racks containing servers or other heat-generating electronic devices and rows of air conditioners. Fluid, such as water or a refrigerant, for the air conditioners is supplied through plumbing below a raised floor, such as those commonly found in current data centers. Attached to this plumbing are standard fluid couplings configured to couple to either air conditioners or liquid conditioning units. These air conditioners and liquid conditioning units use the same fluid, so they may share the plumbing. As the data center migrates to liquid-cooled racks, a fraction of the air conditioners are replaced with liquid conditioning units in such a way that the data center contains both air-cooled and liquid-cooled racks without substantial reduction in the efficiency of the air-cooling system. Since the air conditioners and liquid conditioning units use the same couplings and the same fluid, no infrastructure change is required.
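The arrangement described above can be sketched as a small data model. All class and field names below are hypothetical illustrations of the described architecture, not terms from the patent; the point is only that swapping the cooling unit leaves the shared plumbing and coupling untouched.

```python
# Minimal sketch of the layout described above: rows of racks and cooling
# units sharing one chilled-fluid loop and one standard coupling type.
# All class and field names are hypothetical, not terms from the patent.
from dataclasses import dataclass

@dataclass
class CoolingUnit:
    kind: str      # "air_conditioner" or "liquid_conditioning_unit"
    coupling: str  # the same standard fluid coupling serves both kinds

@dataclass
class Row:
    rack_cooling: str  # "air" or "liquid"
    unit: CoolingUnit

def migrate_row(row: Row) -> Row:
    """Swap an air conditioner for a liquid conditioning unit.

    Both unit kinds use the same fluid and the same coupling, so only
    the unit changes; the under-floor plumbing is left untouched.
    """
    return Row(rack_cooling="liquid",
               unit=CoolingUnit("liquid_conditioning_unit", row.unit.coupling))

# Start fully air-cooled, then migrate one row; the remaining air-cooled
# rows keep their existing airflow pattern.
rows = [Row("air", CoolingUnit("air_conditioner", "standard")) for _ in range(3)]
rows[0] = migrate_row(rows[0])
print([row.rack_cooling for row in rows])  # ['liquid', 'air', 'air']
```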
Other aspects and advantages of the present invention will become apparent from the following detailed description, taken in conjunction with the accompanying drawings, illustrating by way of example the principles of the invention.
Underneath the raised floor 104 may be found the plumbing required by the air conditioning units. In this example embodiment of the present invention, a building chilled fluid supply is provided through chilled fluid supply pipes 122, and returned to the main chiller through chilled fluid return pipes 124 contained within trenches 132 in the foundation 102. The trench is optional, but it provides a place for fluids to drain in the event of any leakage; placing the chilled fluid supply pipes 122 and chilled fluid return pipes 124 in the trench also leaves more room for cabling, with less congestion. Each air conditioning unit is connected to these chilled fluid pipes through air conditioner pipes 126, each of which includes a fluid coupling 128. Note that the configuration of these pipes and couplings may vary widely according to the needs of each individual data center. In many cases water will be used as the chilled fluid; however, other fluids, such as a liquid refrigerant (which may undergo a phase change during the coolant cycle), may be used in its place within the scope of the present invention.
Also note that while three rack and air conditioner pairs are shown in this figure, any number of rack and air conditioner pairs may be used in a similar configuration within the scope of the present invention. Also, as mentioned above, each of the racks and air conditioners shown in
Those of skill in the art will recognize that there is a very wide variety of ways to configure data centers to take advantage of the present invention. There are many different ways to configure air-cooled racks with air conditioners such that the air-cooled racks may be replaced with liquid-cooled racks, and the air conditioners with liquid conditioning units, without disrupting the airflow of any remaining air-cooled racks and air conditioners within the scope of the present invention. The figures shown in this disclosure are simply a variety of example embodiments of the present invention, not a complete set of the various ways of implementing the present invention. For example, a two-story data center may be built such that air flows left to right on the first floor, is ducted up to the second floor where it flows right to left, and is then ducted back down to the first floor, completing the cycle.
Underneath the raised floor 104 may be found the plumbing required by the liquid conditioning units. In this example embodiment of the present invention, a building chilled fluid supply is provided through chilled fluid supply pipes 122, and returned to the main chiller through chilled fluid return pipes 124 contained within trenches 132 in the foundation 102. Each liquid conditioning unit is connected to these chilled fluid pipes through liquid conditioner pipes 126, each of which includes a fluid coupling 128. Note that the configuration of these pipes and couplings may vary widely according to the needs of each individual data center. For example, some data centers may be configured with the fluid supply pipes overhead instead of under a raised floor. However, the fluid couplings 128 must be configured to couple to both air conditioners and liquid conditioning units, so that an air conditioner may be replaced by a liquid conditioning unit simply by disconnecting the fluid couplings 128 from the air conditioner and connecting the same fluid couplings 128 to the liquid conditioning unit. In many cases water will be used as the chilled fluid; however, other fluids, such as a liquid refrigerant (which may undergo a phase change during the coolant cycle), may be used in its place within the scope of the present invention.
Also note that while three rack and liquid conditioner pairs are shown in this figure, any number of rack and liquid conditioner pairs may be used in a similar configuration within the scope of the present invention. Also, as mentioned above, each of the racks and liquid conditioners shown in
Those of skill in the art will recognize that this configuration of the present invention also allows easy transition from air-cooled racks to liquid-cooled racks by replacing a row at a time from the outside of the data center working in to the center of the room, or by replacing a row at a time from the inside of the data center working out to the edges of the room.
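The two row-replacement orders described above are straightforward to express as a hypothetical helper; the function names and zero-based row indices are illustrative assumptions, not terms from the patent.

```python
# Sketch of the two migration orders described above for n rows;
# row indices run 0..n-1 across the room (a hypothetical convention).
def outside_in(n: int) -> list[int]:
    """Alternate edge rows, working in toward the center of the room."""
    order, lo, hi = [], 0, n - 1
    while lo <= hi:
        order.append(lo)
        if lo != hi:
            order.append(hi)
        lo, hi = lo + 1, hi - 1
    return order

def inside_out(n: int) -> list[int]:
    """Start at the center rows and work out toward the edges."""
    return outside_in(n)[::-1]

print(outside_in(5))  # [0, 4, 1, 3, 2]
print(inside_out(5))  # [2, 3, 1, 4, 0]
```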
The foregoing description of the present invention has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form disclosed, and other modifications and variations may be possible in light of the above teachings. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application to thereby enable others skilled in the art to best utilize the invention in various embodiments and various modifications as are suited to the particular use contemplated. It is intended that the appended claims be construed to include other alternative embodiments of the invention except insofar as limited by the prior art.
|Cited Patent||Filing date||Publication date||Applicant||Title|
|US5963425 *||Nov 21, 1997||Oct 5, 1999||International Business Machines Corporation||Combined air and refrigeration cooling for computer systems|
|US20010042616 *||Mar 21, 2001||Nov 22, 2001||Baer Daniel B.||Method and apparatus for cooling electronic enclosures|
|US20010052235 *||Feb 9, 2001||Dec 20, 2001||Durham James W.||Multiplex system for maintaining of product temperature in a vehicular distribution process|
|US20020104323 *||Feb 2, 2001||Aug 8, 2002||Logis-Tech, Inc.||Environmental stabilization system and method for maintenance and inventory|
|US20050122684 *||Dec 3, 2003||Jun 9, 2005||International Business Machines Corporation||Cooling system and method employing at least two modular cooling units for ensuring cooling of multiple electronics subsystems|
|US20050126747 *||Dec 16, 2003||Jun 16, 2005||International Business Machines Corporation||Method, system and program product for automatically checking coolant loops of a cooling system for a computing environment|
|US20050225936 *||Mar 28, 2003||Oct 13, 2005||Tony Day||Cooling of a data centre|
|Citing Patent||Filing date||Publication date||Applicant||Title|
|US8004831 *|| ||Aug 23, 2011||Exaflop Llc||Orthogonally system arrangements for data center facility|
|US8037644 *|| ||Oct 18, 2011||International Business Machines Corporation||Fire-code-compatible, collapsible partitions to prevent unwanted airflow between computer-room cold aisles and hot aisles|
|US8094452||Jun 23, 2008||Jan 10, 2012||Exaflop Llc||Cooling and power grids for data center|
|US8248801|| ||Aug 21, 2012||International Business Machines Corporation||Thermoelectric-enhanced, liquid-cooling apparatus and method for facilitating dissipation of heat|
|US8274790|| ||Sep 25, 2012||International Business Machines Corporation||Automatically reconfigurable liquid-cooling apparatus for an electronics rack|
|US8276397||Jun 25, 2008||Oct 2, 2012||Exaflop Llc||Cooling and power paths for data center|
|US8320125|| ||Nov 27, 2012||Exaflop Llc||Modular data center cooling|
|US8472182||Jul 28, 2010||Jun 25, 2013||International Business Machines Corporation||Apparatus and method for facilitating dissipation of heat from a liquid-cooled electronics rack|
|US8514575||Nov 16, 2010||Aug 20, 2013||International Business Machines Corporation||Multimodal cooling apparatus for an electronic system|
|US8760863||Oct 31, 2011||Jun 24, 2014||International Business Machines Corporation||Multi-rack assembly with shared cooling apparatus|
|US8797740||Nov 26, 2012||Aug 5, 2014||International Business Machines Corporation||Multi-rack assembly method with shared cooling unit|
|US8817465||Nov 26, 2012||Aug 26, 2014||International Business Machines Corporation||Multi-rack assembly with shared cooling apparatus|
|US8817474||Oct 31, 2011||Aug 26, 2014||International Business Machines Corporation||Multi-rack assembly with shared cooling unit|
|US8925333||Sep 13, 2012||Jan 6, 2015||International Business Machines Corporation||Thermoelectric-enhanced air and liquid cooling of an electronic system|
|US8953316 *||Dec 1, 2011||Feb 10, 2015||Lenovo Enterprise Solutions (Singapore) Pte. Ltd.||Container-based data center having greater rack density|
|US8988879||Nov 26, 2012||Mar 24, 2015||Google Inc.||Modular data center cooling|
|US9095889||Mar 7, 2013||Aug 4, 2015||International Business Machines Corporation||Thermoelectric-enhanced air and liquid cooling of an electronic system|
|US20090173017 *||Jan 7, 2008||Jul 9, 2009||International Business Machines Corporation||Fire-code-compatible, collapsible partitions to prevent unwanted airflow between computer-room cold aisles and hot aisles|
|US20110303406 *|| ||Dec 15, 2011||Fujitsu Limited||Air-conditioning system and control device thereof|
|US20120073783 *||Sep 27, 2011||Mar 29, 2012||Degree Controls, Inc.||Heat exchanger for data center|
|US20120075806 *|| ||Mar 29, 2012||Wormsbecher Paul A||Container-based data center having greater rack density|
|US20130130609 *||Dec 27, 2011||May 23, 2013||Inventec Corporation||Fan control system and method thereof|
|U.S. Classification||62/259.2, 62/407|
|International Classification||F25D3/02, F25D23/12, F24F3/06, H05K7/20|
|Cooperative Classification||H05K7/20745, F24F3/06, H05K7/2079|
|European Classification||H05K7/20S10D, H05K7/20S20D, F24F3/06|
|Aug 17, 2004||AS||Assignment|
Owner name: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P., TEXAS
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BELADY, CHRISTIAN L.;PETERSON, ERIC C.;REEL/FRAME:015066/0132
Effective date: 20040812
|Apr 27, 2010||CC||Certificate of correction|
|Mar 11, 2013||FPAY||Fee payment|
Year of fee payment: 4
|Nov 9, 2015||AS||Assignment|
Owner name: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP, TEXAS
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.;REEL/FRAME:037079/0001
Effective date: 20151027