Shifting to Software-Defined

Metrics that Matter: What you should evaluate when looking at hyperconverged infrastructure



By guest blogger, Jesse St. Laurent, HPE Chief Technologist, Hyperconverged & SimpliVity

Hyperconvergence continues to be red hot as a new category of infrastructure. As hyperconverged infrastructure forces a monumental transformation in data center technology, the metrics we use to measure the value of data center technology need to change as well.

Metric shifts happen quite often. Fitbit, for example, changed how we measure daily exercise from 30 minutes of activity three days a week to 10,000 steps a day, a benchmark now accepted by the American Heart Association and the World Health Organization.

When it comes to hyperconverged infrastructure, some in the IT industry view its merits through a storage lens. This seems logical, because hyperconverged technology changes how we provision, consolidate, and manage storage. But the resulting metrics focus too narrowly on storage-specific features, such as the number of nodes or terabytes, rather than the VM-centric measurements commonly used for other software-defined infrastructure such as the cloud. Since hyperconverged infrastructure shifts the paradigm from managing infrastructure components to managing VMs, the metrics used to measure it should shift as well.

But with bias present among the vendors, how will customers find the true metrics that matter?

In 2016, when hyperconverged adoption was expanding faster than ever, ActualTech Media conducted its State of Hyperconverged Infrastructure survey of 1,000 IT professionals. The goal of the report was to assess the top challenges and how hyperconverged infrastructure could address them. When asked which criteria are most important for evaluating IT solutions, respondents indicated cost/ROI, operational efficiency (defined by scalability and performance options), and resiliency (defined by high availability and integrated backup and replication).


Based on this feedback, positive business outcomes are the top goal when evaluating IT solutions. The themes identified in the chart all involve making a long-term investment that ultimately saves time, money, and in some cases, staff resources. This is likely why operational efficiency factors rank among the top criteria: the company ultimately saves OPEX. Disaster recovery and high availability fit the same mold because they improve IT resiliency, which in turn limits data center downtime and the financial risk that comes with it.


IDC found similar results in its survey, as shown in the HPE SimpliVity Hyperconvergence Drives Operational Efficiency and Customers are Benefiting white paper. Setting aside the overwhelming tech-refresh result, HPE SimpliVity powered by Intel® customers were primarily looking for cost-saving factors, all weighted fairly equally: improved operational efficiency, improved backup/disaster recovery, improved storage utilization, improved scalability, and data center consolidation.

It stands to reason then that IT professionals are looking for the following metrics when considering hyperconverged solutions:

  • Cost metrics: ROI, total cost of ownership (TCO), CAPEX and OPEX savings
  • Operational efficiency metrics: time to deployment, VM to administrator ratios, device consolidation, power usage effectiveness (PUE)
  • Recovery and availability metrics: ability to sustain a device failure without data loss, recovery time and recovery point objectives (RTOs/RPOs), downtime/uptime percentage
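To make these metrics concrete, here is an illustrative sketch (not an HPE tool) of back-of-the-envelope calculations for a few of them: uptime percentage, simple ROI, and the VM-to-administrator ratio. All input figures are hypothetical examples, not survey data.

```python
# Illustrative metric calculations. All numbers below are hypothetical
# example inputs, not figures from the surveys discussed above.

def uptime_percentage(downtime_hours_per_year: float) -> float:
    """Uptime as a percentage of the 8,760 hours in a year."""
    return 100.0 * (1 - downtime_hours_per_year / 8760.0)

def simple_roi(gains: float, cost: float) -> float:
    """ROI as a percentage: (gains - cost) / cost."""
    return 100.0 * (gains - cost) / cost

def vm_to_admin_ratio(total_vms: int, admins: int) -> float:
    """Average number of VMs managed per administrator."""
    return total_vms / admins

# Hypothetical figures for a three-year hyperconverged deployment:
print(f"Uptime: {uptime_percentage(4.38):.2f}%")   # 4.38 hours downtime/year
print(f"ROI: {simple_roi(gains=1_500_000, cost=1_000_000):.0f}%")
print(f"VMs per admin: {vm_to_admin_ratio(600, 3):.0f}")
```

A "four nines" target, for comparison, allows under an hour of downtime per year; the point of metrics like these is that they compare solutions in business terms rather than in storage terms.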


Business outcomes determine the critical metrics that define IT success, and the solutions that meet this goal are ultimately selected. HPE SimpliVity is one example of such a solution. Hyperconverged solutions like HPE SimpliVity aim to improve IT environments across those top challenges, including backup/disaster recovery, storage utilization, scalability, downtime/uptime, and availability, with the cost and efficiency savings businesses and IT teams need.

Learn more about the important metrics in the IDC white paper and about the benefits of hyperconvergence in this free eBook.

Related link:  What's the future of data storage?

