Getting comfortable with elevated data center temperatures

Running data centers at elevated air temperatures cuts cooling energy costs, but only recently has the industry grown comfortable with the practice of running data centers a bit warmer than in the past.

Much of the credit for this comfort level goes to the fairly recent effort by a committee of the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) to update its thermal guidelines for data centers. ASHRAE’s Technical Committee (TC) 9.9 laid out the new guidelines in a white paper—“2011 Thermal Guidelines for Data Processing Environments—Expanded Data Center Classes and Usage Guidelines”—and followed that up with a third edition of its thermal guidelines book. The committee’s website has further information, including where to purchase the book.

The guidelines include a breakdown of data centers into four classes and two tiers for temperature management, and plenty of detailed discussion and analysis, so they are well worth some study. Rather than trying to summarize all of that information, here are a few select points to keep in mind:


  • The new guidance (updating previous guidelines from 2004 and 2008) generally allows a broader recommended thermal operating envelope (temperature and humidity) than past guidelines. Running data centers a few degrees warmer than in the past reduces the energy consumed by the cooling infrastructure. The server fans may need to work a bit harder, but within the recommended ranges the cooling savings more than make up for this.
  • The performance risk to the information technology (IT) assets is primarily from rapid temperature changes, not from a marginally higher set point. As long as the temperature is within the recommended range and stays consistent, performance should not be an issue.
  • Newer server hardware is capable of running reliably and efficiently at higher temperatures, as long as the temperature stays consistent. Server fans, for example, are much more power-efficient today than they were 10 or 15 years ago.
  • Within the recommended ranges, there is a “sweet spot” to hit in finding the optimal balance between reduced energy consumption for cooling and potentially making the IT equipment work harder. This sweet spot tends to vary depending on the configuration and assets in a particular data center. For a good analysis of this issue, refer to Schneider Electric white paper 138, “Energy Impact of Increased Server Inlet Temperature.”
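The trade-off described in the last bullet can be roughed out numerically. The sketch below is only illustrative: the chiller savings per degree and the fan-speed response are assumed values, not figures from the ASHRAE guidelines or the Schneider Electric paper. It does, however, show why a sweet spot appears: chiller savings grow linearly with the set point, while fan power grows with the cube of fan speed (the fan affinity law).

```python
# Rough sketch of the cooling-vs-fan-power trade-off when raising the
# supply-air set point. All numbers are illustrative assumptions, not
# figures from the ASHRAE guidelines or Schneider Electric white paper 138.

def net_savings_kw(delta_t_c, cooling_kw=100.0, fan_kw=20.0,
                   chiller_gain_per_c=0.04, fan_speed_gain_per_c=0.05):
    """Net power saved (kW) when the set point rises by delta_t_c deg C.

    chiller_gain_per_c: assumed fraction of chiller energy saved per deg C.
    fan_speed_gain_per_c: assumed fractional server-fan speed increase per
    deg C; fan power scales with the cube of speed (fan affinity law).
    """
    cooling_saved = cooling_kw * chiller_gain_per_c * delta_t_c
    fan_extra = fan_kw * ((1 + fan_speed_gain_per_c * delta_t_c) ** 3 - 1)
    return cooling_saved - fan_extra

# Scan a few set-point increases: savings rise, peak, then turn negative
# as cubic fan power overtakes the linear chiller savings.
for dt in range(0, 9):
    print(f"+{dt} C: net savings {net_savings_kw(dt):6.2f} kW")
```

With these assumed coefficients the net savings peak a few degrees above the baseline and go negative around +8 °C, which is the qualitative shape the white paper analyzes; any real facility would need measured coefficients for its own equipment.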


Overall, perhaps the biggest impact of the updated TC 9.9 thermal guidelines is that data center managers now have an authoritative set of recommendations to back up a decision to raise temperatures marginally. This raises the comfort level of data center managers who know that slightly higher temperatures make economic sense given today’s server hardware, but were hesitant to make the decision without solid guidance.

For data centers considering elevated inlet air temperatures in accordance with the guidelines, it’s important to think about how to keep the new set point consistent. Several factors typically need to be assessed: proper airflow management, the hot-aisle/cold-aisle layout (including placement of cooling units), possible upgrades to containment to reduce air mixing, and the placement of temperature sensors within the data center.

Consistency is generally a very good thing when it comes to data centers, and it’s a vitally important goal when data center operators start thinking about raising the target temperature. An audit could help a facility prepare for such a change so that the new set point stays consistent and the facility is not left struggling with air mixing and temperature shifts that could threaten IT equipment performance or erode the expected energy savings.

5 Responses to “Getting comfortable with elevated data center temperatures”

  1. Joe Capes

    Luca, not to mention that with virtualization, most servers are running 8 or 9 applications versus just one like in the ‘old days’. So the combination of more efficient servers, fewer servers, more compute power per square foot and the capability to consolidate IT gear into fewer racks requires less room space. Operating at higher power densities is very compelling. Some work my teams conducted back in 2010 showed a sweet spot for consolidation of power at 8-10 kW per rack, with a diminishing return on investment after that. I would encourage owners and operators to take advantage of increasing set points along with the other efficiencies that virtualization, consolidation and operating at higher densities can provide.

    Reply
  2. Patphong

    In Thailand, some IT businesses use comfort air cooling units branded “Daikin” that offer inverter technology. Customers claim these units are more efficient than a CRAC. Do you have any idea how we can show the evidence effectively?

    Patphong

    Reply
    • Luca Melluso

      Hi Patphong,

      Solutions like Daikin’s are not designed for IT applications. In a server room, the cooling load is typically made up (almost entirely) of sensible heat coming from IT equipment, lights and so forth. There is very little latent load since there are few people and limited outside air. The required SHR of an air conditioner to match this heat load profile is very high, 0.95-0.99.

      Proper CRAC units (or precision air conditioning systems) are specifically designed to meet these very high sensible heat ratios.

      Comfort cooling solutions, on the other hand, are sized based on the sensible and latent cooling they have to provide, which for a typical office environment could translate into an SHR of 0.65-0.70.
      In a server room, such a solution provides too little sensible cooling and too much latent cooling. The excess latent cooling means that too much moisture is continually being removed from the air. In order to maintain the desirable 35-50% relative humidity band, continuous humidification would be necessary, which consumes large quantities of energy.

      Luca

      Reply
  3. Patphong

    Thank you Luca.

    By the way, if the customer says they have also compared the sensible cooling capacity of the CRAC and the comfort air inverter unit, and found that the inverter still consumes less energy than ours, what explanation should we give them?

    Patphong P.

    Reply
    • Luca Melluso

      Hi Patphong,
      The key is that they will need to maintain both temperature and relative humidity levels. If they use a split system, they will necessarily need a humidifier running, so they should consider the power consumption of the comfort cooling unit PLUS the humidifier. They will be surprised to see how much current those devices draw.
      Luca

      Reply
