
Data Center Power Consumption: The Hungry Beast

Michael Pratt

 

Data centers are power-hungry animals, with over half of that consumption going to the servers in the racks, and it’s growing every year with the expansion of virtualization and high-density computing.

It’s probably no surprise that data centers take a lot of heat (pun intended) for the amount of electricity they consume 24 hours a day, 365 days a year. In short, the power-hungry beast never sleeps.

How hungry is the beast? According to the United States Department of Energy, data centers account for about 2% of total US electricity consumption.

While roughly a third of a data center’s power consumption goes to environmental cooling, over half goes to the power-hungry servers in the racks, and that share is growing every year with the expansion of virtualization and high-density computing.

In their quest to ensure that the beast never lacks for food, IT and data center managers frequently resort to overprovisioning in an effort to maintain uptime at the expense of efficiency.

At the rack level, overprovisioning typically takes the form of pairing servers with excessively large power supplies, to alleviate concerns that continually growing loads or peak loads will overtask the available power and bring the server down.

But power isn’t cheap, and excess power just creates heat, further exacerbating the situation. This is where efficiency and “right-sizing” come into play.

Narrow the gap between the workload on a server and the power supplied to it

One way to accomplish this is to load the server with more tasks to bring it closer to the power supply’s efficient utilization rate. While that’s somewhat easier said than done, the growing use of virtualization and composable infrastructure is giving IT managers more tools to find the right balance of efficient power use and server load.

However, for many server configurations, it’s typically easier to choose the right size and configuration of a server’s power supply to get a good balance of power and workload, and it all starts with the efficiency of the power supply.

Power supply efficiency is defined as the amount of power actually delivered to the IT equipment compared to the amount of power drawn from the data center power feed. For example, a 50% efficient power supply tasked with providing 50W of power to a server has to draw 100W from the grid; the extra 50W is lost as heat. That’s great if your intention is to use your server as a space heater, but bad if you have to use more electricity to cool the data center, not to mention the 50W of power you’ve paid for going down the proverbial drain.
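To make that arithmetic concrete, here’s a minimal sketch of the relationship between efficiency, power drawn, and heat, using the 50% example from above (the numbers are purely illustrative):

```python
# A minimal sketch of the efficiency arithmetic above (illustrative numbers only).

def input_power(output_watts: float, efficiency: float) -> float:
    """Power drawn from the facility feed to deliver output_watts to the server."""
    return output_watts / efficiency

# The 50%-efficient example from the text: 50 W delivered means 100 W drawn.
drawn = input_power(50, 0.50)        # 100.0 W
wasted_as_heat = drawn - 50          # 50.0 W dissipated as heat

print(f"Drawn from grid: {drawn:.0f} W, lost as heat: {wasted_as_heat:.0f} W")
```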

Enter the world of the highly efficient power supply

Many of today’s top-rated server power supplies are 90% efficient or better, with some highly efficient units, such as HPE’s 80PLUS Platinum and Titanium rated power supplies, reaching 94% to 96% efficiency. Scaled out over a reasonably sized data center, the savings in power cost begin to add up.

Additionally, the cost of the power isn’t the only factor. Highly efficient power supplies also give off less heat, further reducing your data center cooling cost.
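As a rough back-of-the-envelope sketch of how those few percentage points add up at scale, the fleet size, per-server load, and electricity rate below are assumptions for illustration, not measured or vendor-published figures:

```python
# Back-of-the-envelope savings from moving to a more efficient power supply.
# All inputs are illustrative assumptions, not measured or vendor-published figures.

HOURS_PER_YEAR = 24 * 365

def annual_energy_kwh(load_watts: float, efficiency: float) -> float:
    """Energy drawn from the feed over a year for a constant IT load."""
    return (load_watts / efficiency) * HOURS_PER_YEAR / 1000

servers = 500      # assumed fleet size
load_w = 400       # assumed average load per server (W)
rate = 0.10        # assumed electricity price ($ per kWh)

baseline = annual_energy_kwh(load_w, 0.90) * servers   # 90% efficient units
upgraded = annual_energy_kwh(load_w, 0.96) * servers   # 96% efficient units

print(f"Annual savings: {(baseline - upgraded) * rate:,.0f} USD "
      f"({baseline - upgraded:,.0f} kWh), before any cooling savings")
```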

By utilizing a range of available power supply capacities, IT managers can match the right size power supply to the actual load of a server configuration. Beyond increased efficiency, right-sizing the power supply also offers two other immediate benefits: reduced hardware cost and avoidance of trapped power capacity.
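One way to picture right-sizing is as picking the smallest available capacity that still covers a configuration’s estimated peak draw plus some headroom. The capacity list and 20% margin in this sketch are assumptions, not a specific vendor catalog or sizing rule:

```python
# A sketch of "right-sizing": choose the smallest available power supply that
# covers the configuration's estimated peak draw plus a safety margin.
# The capacity list and margin are assumptions for illustration.

AVAILABLE_CAPACITIES_W = [500, 800, 1000, 1600]   # hypothetical catalog options

def right_size(peak_load_watts: float, headroom: float = 0.2) -> int:
    """Return the smallest capacity that still leaves the requested headroom."""
    required = peak_load_watts * (1 + headroom)
    for capacity in sorted(AVAILABLE_CAPACITIES_W):
        if capacity >= required:
            return capacity
    raise ValueError("No single power supply covers this load; revisit the configuration.")

print(right_size(600))   # 800 W, rather than defaulting to the largest unit
```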

Technology marches on—and your power supplies need to keep pace

Let’s also not forget the ever-progressing march of technology, which lets us cram more watts into the same form factors, driving increased power density and space efficiency for the growing world of dense computing environments.

Another common configuration is the use of additional power supplies in each server. While one goal of additional power supplies is redundancy against power supply failures, multiple power supplies can also provide flexibility and efficiency through the way power is supplied to the server.

This multiple power supply strategy increases the options available to IT managers for how power is managed and used, and two key configurations are Load-Balanced Mode and High-Efficiency Mode.

For redundant power supplies operating in load-balanced mode, the load is shared equally between the power supplies. In general, load-balanced mode offers better efficiency for loads requiring more than 60% of the primary power supply capacity.

High-efficiency mode allows the primary power supply to operate more efficiently while the redundant power supply remains idle in standby. When the redundant power supply is idle, it provides no output power and consumes very little energy.
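A simple way to see the trade-off is to pick the mode from the utilization of the primary supply. The 60% threshold comes from the text above; the function itself is only an illustration, not a vendor-defined policy:

```python
# A sketch of the mode choice described above: share the load when it is high,
# idle the redundant supply when it is low. The 60% threshold comes from the
# text; the function itself is only an illustration.

def choose_redundancy_mode(load_watts: float, primary_capacity_watts: float) -> str:
    utilization = load_watts / primary_capacity_watts
    if utilization > 0.60:
        return "load-balanced"     # both supplies share the load equally
    return "high-efficiency"       # primary carries the load, redundant stays in standby

print(choose_redundancy_mode(300, 800))   # "high-efficiency" at ~38% utilization
print(choose_redundancy_mode(550, 800))   # "load-balanced" at ~69% utilization
```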

By combining these power supply strategies with intelligent tools such as HPE Power Advisor and real-time power consumption data from HPE Metered Power Distribution Units, IT managers can accurately estimate the power consumption of server and storage products, select the appropriate power supplies, and configure and plan power usage at the system, rack, and multi-rack level.

Ultimately, IT managers can project power requirements, and provide an automated energy-aware network between their IT systems and facilities.
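The rack- and multi-rack-level planning described above amounts to rolling per-server estimates up into larger totals. The sketch below is a hypothetical illustration of that roll-up; it is not the HPE Power Advisor API, and the rack names and wattages are invented:

```python
# A hypothetical sketch of rolling per-server power estimates up to rack and
# row totals, the kind of planning the text describes. This is not the HPE
# Power Advisor API; names and numbers are invented for illustration.

from collections import defaultdict

# (rack_id, estimated_watts) pairs, as might come from a planning tool or metered PDUs
estimates = [("rack-01", 420), ("rack-01", 380), ("rack-02", 510), ("rack-02", 460)]

rack_totals = defaultdict(float)
for rack, watts in estimates:
    rack_totals[rack] += watts

row_total = sum(rack_totals.values())
for rack, watts in sorted(rack_totals.items()):
    print(f"{rack}: {watts:.0f} W estimated")
print(f"Row total: {row_total:.0f} W estimated")
```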

HVDC – The future of data center power?

We live in an AC/DC world of electricity. For the most part, the power that comes from the grid and is supplied to homes, office buildings, server rooms, and data centers is AC power. Computers, however, typically live in a DC power world.

Converting that AC power to DC power for the server to use is the main job of a server’s power supply, and as we noted earlier, the process wastes energy and produces heat that must then be cooled.

Bringing DC power into the data center eliminates much of that conversion process, letting power be distributed to servers more easily and more efficiently.

The use of high-voltage DC, or HVDC, in the form of the 380V DC standard is growing, and it has been deployed in many data centers around the world. Proponents of HVDC note that significant cost savings can be realized, with overall energy usage reduced by up to 20 percent.
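The intuition behind those savings is that chained conversions multiply their losses, so removing stages raises end-to-end efficiency. The stage efficiencies in this sketch are assumptions chosen to illustrate the idea, not measured figures for any particular facility or the source of the 20 percent claim:

```python
# Why fewer conversion stages help: chained conversions multiply their losses.
# Stage efficiencies below are assumptions chosen to illustrate the idea, not
# measured figures for any particular facility.

def chain_efficiency(stages):
    eff = 1.0
    for stage in stages:
        eff *= stage
    return eff

# Traditional AC distribution: UPS double-conversion, PDU transformer, server PSU
ac_path = chain_efficiency([0.94, 0.98, 0.92])

# 380V DC distribution: one rectification stage, then a DC-input server PSU
dc_path = chain_efficiency([0.96, 0.95])

print(f"AC path end-to-end: {ac_path:.1%}")   # ~84.7%
print(f"DC path end-to-end: {dc_path:.1%}")   # ~91.2%
```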

However, while acceptance of the technology has gained momentum over the last several years, adoption at the server level has faced challenges around the availability of 380V DC power supplies for servers.

That’s slowly changing though, as vendors continue to evaluate opportunities to bring 380V DC solutions to their server product lines.

Regardless of the form in which electricity arrives at the data center, the rack, and eventually the server itself, the basic tenets of efficiency, density, and management optimization will hold true, whatever the flavor of the power.

And, of course, it goes without saying that the power hungry beast will want to be fed, regardless of its diet.

 

What's the future of the Data Center?
Get inspired at Enterprise.nxt.
> Go now

 

About the Author

Michael Pratt

Michael Pratt’s passion is helping customers go further. His job is making products that make servers go further. He spends his days connecting the two.
