
Driving Towards Simpler Data Centers

The Open Compute Project (OCP) has helped open up some much-needed dialogue within the data center industry. It is driving data center owners and operators toward simpler, more standardized, more efficient, more cost-effective data center designs. But the conversations and documentation have focused primarily on systems at the rack level (servers, power supplies, etc.), leaving some uncertainty about what OCP means for the upstream infrastructure.

We recently published White Paper 228, Analysis of Data Center Architectures Supporting Open Compute Project (OCP), to help answer questions about the implications of OCP for the power infrastructure. In the paper, Kevin Brown and I step through different proposed architectures to support OCP IT loads (available as reference designs on our website) and provide a cost analysis. We analyzed costs from the switchgear down to, and including, the IT racks and the server power supplies, so that we would have a complete picture. In a nutshell, here's what we found:

When we first analyzed the simplest, most cost-reduced OCP design – 1N switchgear, no upstream UPS, and a 1N path to the IT rack and power supplies – we found a 45% capex savings over a traditional 2N design. When we looked at where that savings came from, however, the biggest driver was the reduction in redundancy and complexity.

Many data center operators today would argue that some degree of redundancy is still needed to avoid downtime during maintenance activities or unforeseen failures. So we then compared the traditional 2N design to a redundant (2N) OCP design and found a savings of about 25%. That is still substantial, and notably, more than half of it (14 of the 25 percentage points) came from architecture simplification alone.
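To make the arithmetic behind those percentages concrete, here is a minimal sketch that normalizes the traditional 2N design to a capex of 1.00 and applies the savings figures cited above. The numbers are taken directly from the text; the variable names are my own.

```python
# Normalized capex comparison using the percentages from White Paper 228.
# Baseline: traditional 2N design = 1.00 (all figures illustrative).

traditional_2n = 1.00
ocp_1n = traditional_2n * (1 - 0.45)   # simplest OCP design: 45% savings
ocp_2n = traditional_2n * (1 - 0.25)   # redundant (2N) OCP: 25% savings

# Of the 25-point savings in the 2N OCP case, 14 points come from
# architecture simplification, the remaining 11 from the OCP gear itself.
simplification_share = 14 / 25

print(f"{ocp_1n:.2f}, {ocp_2n:.2f}, {simplification_share:.0%}")
# → 0.55, 0.75, 56%
```

The point of the breakdown is that even when redundancy is kept, simplification contributes the majority of the savings.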

With architecture simplification in mind, we also analyzed a design capable of handling mixed loads.  We believe this is the most likely design to be considered since companies are more likely to ‘test the waters’ with OCP equipment as their existing equipment reaches end-of-life before fully committing to a 100% OCP data center.  This design represented a 3% cost premium over the simplified 2N OCP-only design, which we think is minimal given the flexibility it offers.

The moral of the story is this: whether you are considering an OCP design, a mixed-load design, or sticking with 100% traditional IT loads, there is an opportunity for smarter, simpler, more cost-effective design when we, as an industry, shift from the mindset of "I have to provide power to redundant server power supplies at all times" to "I can rely on the inherent redundancy of my server power supplies during maintenance and failures." Think about a design where you:

  • eliminate unnecessary components such as cross-ties, their related breakers, and load banks
  • keep redundant paths, but eliminate one of the UPS/battery systems
  • avoid the capital expense of oversizing switchgear and cabling to simultaneously support the critical load and UPS load-bank testing

These design strategies save on the order of $0.40 per watt. On a 10MW data center, that's $4,000,000 in capex (material cost) saved. You can dive deeper into the numbers in the white paper, and also try the online TradeOff Tool we developed (Traditional vs. Open Compute Capital Cost Calculator) to vary the data center assumptions and compare designs.
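The $4 million figure is just the per-watt savings scaled to the facility size. A back-of-the-envelope sketch, using only the numbers quoted above:

```python
# Capex savings from the simplified-design strategies described in the post.
# Both inputs are illustrative figures taken directly from the text.

savings_per_watt = 0.40       # $/W saved by the design simplifications
capacity_watts = 10_000_000   # 10 MW data center

capex_saved = savings_per_watt * capacity_watts
print(f"${capex_saved:,.0f}")  # → $4,000,000
```

Varying `savings_per_watt` or `capacity_watts` is essentially what the TradeOff Tool lets you do interactively.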
