Data Center

Why data centers with a PUE greater than 2 could be obsolete in the next 5 years

I caught up with Paul-Francois Cattier, Global VP Data Center for Schneider Electric at Data Centres Europe 2012 in Nice and asked him about his presentation – the title of this post. It seems to be a provocative statement, so why make it?

In response, Paul-Francois told me that with cloud computing and virtualization, new applications will be very dynamic and could much more easily be moved from one data center supplier to another. Colocation contracts of 6 years or more could very rapidly become a thing of the past.

He said that a CFO looking at the cost of running business applications will have many choices. There's going to be a big energy cost difference between operating an IT load in a data center with a PUE of, say, 1.1 or 1.2, compared with one with a PUE of 2. And during the next 5 years, as energy costs increase, we're going to see that cost differential increase too.
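To make that differential concrete, here is a minimal back-of-the-envelope sketch. The IT load size and electricity price are illustrative assumptions of mine, not figures from the interview; the only relationship taken from the article is that total facility draw equals the IT load multiplied by PUE.

```python
# Hypothetical annual energy cost for the same IT load hosted at two
# different PUE levels. The 1 MW load and $0.10/kWh price are assumed
# for illustration only.

def annual_energy_cost(it_load_kw, pue, price_per_kwh):
    """Total facility energy cost per year: IT load times PUE,
    running 24x7 at a flat electricity price."""
    hours_per_year = 24 * 365
    return it_load_kw * pue * hours_per_year * price_per_kwh

it_load_kw = 1000   # assumed 1 MW of IT load
price = 0.10        # assumed flat $0.10 per kWh

cost_efficient = annual_energy_cost(it_load_kw, 1.2, price)
cost_legacy = annual_energy_cost(it_load_kw, 2.0, price)

print(f"PUE 1.2: ${cost_efficient:,.0f}/year")
print(f"PUE 2.0: ${cost_legacy:,.0f}/year")
print(f"Difference: ${cost_legacy - cost_efficient:,.0f}/year")
```

Under these assumptions the PUE 2.0 facility costs roughly $700,000 more per year to run the identical IT load, which is the kind of gap a CFO comparing hosting options would notice.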

With the inertia which has tended to keep IT loads fixed in their racks now overcome, it’s a no-brainer to choose to put your IT into a more efficient data center. It simply makes more financial sense once you understand how the cost of energy impacts data center operating expense. The prediction is that a lot of data centers will be empty if they cannot compete in the efficiency stakes as more IT will be hosted in more modern and more efficient facilities.

However, today there is no shortage of ways in which service providers and owner/operators can go about making their data centers more efficient. Modular data center infrastructure, for example, represents one of the most flexible ways of adapting to the new data center constraints: energy pricing, efficiency, and the need to meet uncertain requirements.

Intelligent planning is another response to changing constraints. When planning a data center 10 years ago, a lot of assumptions were made about power density, cooling requirements, energy price and even the IT load itself. But it was quite easy to use rules of thumb, because the IT load tended to be captive – the facility was therefore built for a known requirement – and experience told you that this load was stable and unlikely to change for another 10 or 15 years.

But today, with cloud and virtualization, we face more dynamic loads, and making design assumptions about energy consumption and cooling requirements has become very difficult – we live in a much more uncertain world. We don't fully understand, for example, how the use of mobile data and apps will impact the data center. There are a lot of concerns about energy pricing and how it will change the business model for a lot of existing data center stock. So you need to accept uncertainty and build your data center accordingly – with an architecture that accommodates change.

Even 10 years ago, people got things wrong: when a facility built for a 20-year return on investment ran out of capacity within 5 years, questions were definitely asked! Today there are ways to plan intelligently; to provide the flexibility and agility to adapt to an evolving requirement with new technologies and different patterns of reliability or efficiency.

Schneider Electric are helping customers to embrace these changes by supplying tools such as reference designs, online calculators for design, cost and efficiency, plus, of course, modular data center architecture. All of these are aimed at helping managers extract the optimum return on investment from data centers and critical facilities, and at making it easier to justify capital investment decisions.


One Response
  1. Christopher Carter

    The uncertainty on mobile data, apps & energy consumption is huge. A phone nowadays with capabilities as much as a home unit or typical laptop poses new questions.

    I look forward to seeing more information on this subject.

    Thank you for sharing.

