Servers & Systems: The Right Compute
ComputeExperts

HPE stays one step ahead with excellent server performance benchmark results

Benchmarks help guide customers in their choice of infrastructure. And that is why HPE stays ahead in this game by publishing leading results using industry-standard benchmarks.

As enterprises accelerate their digital transformation journeys, infrastructure plays a key role, and servers are a major part of that infrastructure. Server performance is therefore an important consideration. Why? Because a system engineered for performance delivers a better return on investment (ROI) while driving down total cost of ownership (TCO).

When it comes to investing in infrastructure, server performance is a key requirement because it means more "bang for the buck."

Take virtualization, for example. A better-performing server allows higher VM density, reducing server sprawl in the datacenter. For an online transaction processing application, better performance translates to higher throughput and lower latency while limiting the number of nodes required to support the user base. Similarly, for virtual desktop infrastructure (VDI) or container deployments, better performance usually means lower investment and higher returns.

Staying one step ahead of the competition with server performance benchmarking

Benchmarks help guide customers in their choice of infrastructure, and that is why HPE stays ahead in this game by publishing excellent results using industry-standard benchmarks. These benchmarks draw from real-world workloads, so they are representative of how customers deploy applications and serve as good guidance.

This is also the reason benchmark scores are mandated as a requirement in most federal RFPs. Silicon vendors like AMD and Intel endorse the performance of HPE servers by publishing HPE server benchmark scores on their respective leader boards. The Intel HPE leader board results and AMD EPYC HPE leader board results clearly demonstrate our leadership on server platforms.

Why compare? A competitive workload performance measurement strategy includes an assessment of server capabilities, areas of strength and shortcoming, and the differences between companies' results. The evaluation gives you an opportunity to compare alternatives across areas such as quality, time, cost, and efficiency. Here, HPE continues its role as an industry leader.


A key objective of performance benchmarking is to demonstrate the innovations and superior system design of server platforms, using a common yardstick such as industry-standard benchmarks.


The components that go into building a server are mostly commodity elements such as DIMMs, processors, fans, sensors, BMCs, and NICs, available to every server vendor. The differentiation lies in how the system is designed to deliver the best performance. Benchmarks bring out this differentiation and enable customers to make an informed choice of hardware investment based on published performance results.

The good news is HPE understands your requirements and publishes a wide range of benchmark results across different categories such as performance, scalability, and energy efficiency, using industry-standard benchmark workloads so you can choose which benchmark best fits your needs.

Real-world measurements

The ideal benchmark for any system (hardware or software) comes from measuring performance in its actual deployment. However, given how solutions are architected today, this is not practical: customers build their solutions using components from multiple vendors, each with potentially proprietary approaches to benchmarking its products. Customers need a way to compare solutions across different hardware and software implementations in a vendor-agnostic manner, without bias. This is where consortia-developed benchmarks play a crucial role.

These consortia are made up of vendors across hardware (OEMs like HPE, Lenovo, and Dell; silicon vendors like AMD, Intel, Qualcomm, and Arm; and GPU vendors like NVIDIA) and software (OS vendors like Red Hat, Microsoft, and SUSE, and ISVs such as Oracle and Apple). There is also participation from leading universities in benchmark development. The benchmarks are thoroughly scrutinized for relevance, importance, fairness, ease of use, and reproducibility.

Additionally, these benchmarks are developed in close collaboration with different vendors and researchers from premier institutes across the world, covering a broad range of customer workloads.

For example, take the Open Systems Group CPU committee within SPEC, which develops, maintains, and publishes SPEC CPU® 2017 metrics. This committee comprises roughly 44 company members and 28 universities that actively contribute to benchmark development. So it is no surprise that this is one of the most widely used benchmarks for evaluating server performance and forms part of the performance requirements in most federal RFPs. HPE has representatives in most of these consortia, contributing in different capacities: as developers, as committee chairs, and by serving on boards of directors.

Benchmarks demonstrate value to customers

HPE runs a wide range of benchmarks that represent real-world deployments, enabling customers to use the results to guide their hardware and software investments. The following benchmarks provide practical assessments of the areas customers care about:

Decision support/relational database

TPC-H is a decision support benchmark consisting of a suite of business-oriented ad-hoc queries and concurrent data modifications. It is representative of decision support systems in enterprise deployments, which examine large volumes of data, execute queries with a high degree of complexity, and give answers to critical business questions. The benchmark offers three business-relevant metrics: transaction rate (the number of composite queries per hour for a given database size, or QphH@Size), response latency (query response time, usually measured in milliseconds on today's systems), and the cost of the entire solution deployment for a given level of performance (Price/QphH@Size, usually reported in one of the standard currencies) across different database sizes, or scale factors[1].
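As a rough illustration of how these metrics relate, here is a minimal sketch in Python. The numbers are hypothetical, not a published result; the only TPC-defined relationship used is that QphH@Size is the geometric mean of the benchmark's power and throughput metrics, and Price/QphH@Size divides the total configuration price by that composite.

```python
import math

def qphh(power_at_size: float, throughput_at_size: float) -> float:
    """Composite queries-per-hour (QphH@Size): the geometric mean of the
    TPC-H power and throughput metrics for a given scale factor."""
    return math.sqrt(power_at_size * throughput_at_size)

def price_per_qphh(total_system_price: float, qphh_at_size: float) -> float:
    """Price/QphH@Size: total price of the audited configuration divided
    by its composite performance."""
    return total_system_price / qphh_at_size

# Hypothetical numbers for illustration only (not a published result):
composite = qphh(power_at_size=1_500_000, throughput_at_size=1_200_000)
print(round(composite))                                 # → 1341641
print(round(price_per_qphh(2_000_000, composite), 2))   # → 1.49 (per QphH)
```

A lower Price/QphH@Size at the same scale factor means more query throughput per unit of spend, which is why the metric is always read alongside the database size.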

Virtualization

There are benchmarks covering the most popular virtualization offerings in the market today. Benchmarks like SPEC virt_sc® 2013 focus on single-host virtualization performance, while others such as VMmark and SPECvirt Datacenter 2021 cover multi-host performance. All of these benchmarks offer key metrics such as throughput and QoS (quality of service, measured by response latency), allowing customers to pick the right hardware and virtualization solution for their business needs.

Server-side Java

  • SPECjbb® 2015 is a server-side Java benchmark that models a worldwide supermarket company with an IT infrastructure handling a mix of point-of-sale requests, online purchases, and data-mining operations.

Compute-intensive metrics

  • SPEC CPU 2017 provides a means to measure the performance of server platforms, covering the processor, memory, operating system, and compiler stack. This benchmark is very widely used in federal RFPs as a means of establishing performance requirements for servers. It is also heavily used for research, in both industry and academia, to further the state of the art in processor architectures, memory technology, and compilers.

Energy efficiency

  • SPECpower_ssj® 2008 measures the energy efficiency of single- and multi-node servers. This gives customers the ability to choose energy-efficient systems for their data centers, potentially driving down energy consumption while also reducing the datacenter power budget. The SPECpower committee has worked relentlessly to standardize energy-efficiency measurements by collaborating with and supporting government agencies (EU Lot 9, U.S. ENERGY STAR, etc.) on their energy-efficiency requirements.
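The benchmark's headline number can be sketched as follows (measurements are hypothetical, for illustration only): the overall ssj_ops/watt metric sums the transaction throughput across all target load levels and divides it by the summed average power across those levels, including active idle, which contributes zero operations but nonzero watts.

```python
def overall_ssj_ops_per_watt(measurements):
    """Overall SPECpower_ssj 2008 metric: sum of ssj_ops across all target
    load levels divided by the sum of average power across those levels,
    with active idle included as (0 ops, measured watts)."""
    total_ops = sum(ops for ops, _watts in measurements)
    total_watts = sum(watts for _ops, watts in measurements)
    return total_ops / total_watts

# Hypothetical (ssj_ops, avg watts) pairs from 100% load down to active idle:
levels = [(3_000_000, 300), (1_500_000, 200), (0, 50)]
print(round(overall_ssj_ops_per_watt(levels)))  # → 8182
```

Because idle power is in the denominator, a server that idles efficiently scores better even at identical peak throughput, which is exactly the behavior data-center operators care about.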

SAP Business Analytics

  • The SAP Business Warehouse (SAP BW) edition for SAP HANA® standard application benchmark is an in-memory benchmark for next-generation, real-time data warehousing. It shows how well a platform delivers simple, open, flexible, and highly scalable in-memory computing.

SAP Enterprise Resource Planning (ERP)

  • The SAP® Sales and Distribution (SD) standard application benchmark, two-tier, is an ERP OLTP benchmark. It reports the number of users, response times, and the number of fully business-processed line items per hour (SAPS) to help you determine sizing requirements. The benchmark also supports hardware certification and sizing.
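The SAPS unit itself is a simple conversion, sketched below. This assumes SAP's published definition that 100 SAPS corresponds to 2,000 fully business-processed order line items per hour in the SD benchmark; the function name is illustrative, not an SAP API.

```python
def saps_from_line_items(items_per_hour: float) -> float:
    """Convert fully business-processed order line items per hour to SAPS,
    using SAP's definition: 100 SAPS = 2,000 line items/hour,
    i.e. 1 SAPS = 20 line items/hour."""
    return items_per_hour / 20.0

print(saps_from_line_items(2_000))    # → 100.0
print(saps_from_line_items(600_000))  # → 30000.0
```

Because SAPS is hardware-independent, sizing teams can compare servers from different vendors on the same throughput scale.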

Proof points of HPE leadership

Leader boards and sustaining world record results

HPE provides both performance and scalability to ensure a successful outcome for your business. Substantiation of HPE's leading performance results includes the latest Intel leader board of world records or first achievements and the AMD leader board of #1 results. HPE servers show leadership with 47 world records on the most recent Intel leader board, representing important workloads, while the latest AMD leader board shows 22 world records, for a total of 69 records.[2]

HPE-Performance-Benchmarks.png

You can see that the wide range of HPE servers claiming leadership across many diverse benchmarks can meet any of your business needs.

HPE servers also break performance records by being the first to achieve them.

 

HPE-Performance-Benchmarks-2.png

 

Bottom line

In this age of accelerated digital transformation, competitive performance benchmarking remains key to giving customers the data they need to guide their infrastructure investments. The results show that HPE servers are efficient, competitive, and scalable, offering superior performance for your business while leading the competition.


Meet Srinivasan Varadarajan Sahasranamam, HPE Performance Engineering Architect, Compute Solutions 

Srinivasan (VS for short) is a performance architect at HPE, specializing in server performance. He is presently focused on full-stack performance in cloud-native architectures, with a special focus on Kubernetes. Outside of work, VS enjoys cooking for family and friends, listening to Carnatic classical music, and reading. Connect with him on LinkedIn.

 


Compute Experts
Hewlett Packard Enterprise

twitter.com/hpe_compute
linkedin.com/showcase/hpe-servers-and-systems/
hpe.com/servers

 

[1] TPC benchmarks must be approved by a TPC-certified auditor. TPC benchmarks are industry standards. TPC benchmark specifications and policies require the submittal of complete documentation on these tests, which are then reviewed by the TPC Council.

[2] Benchmark claims as of November 20, 2021

In addition, all TPC benchmarks must be approved by a third-party, TPC-certified auditor. If a vendor's TPC benchmark test is determined to be executed improperly or unfairly, the vendor will have to withdraw the result. These rules protect users from misleading or false performance claims and preserve the credibility of TPC benchmark results.

Results and configurations are as of November 10, 2020, or as noted otherwise.

All benchmark data is publicly available on the websites listed below. All performance briefs listed are publicly available on hpe.com or on the server Documents webpages under the Guides section at HPE Server benchmarks.

For further details, see sap.com/benchmark, spec.org (all rights reserved; reprinted with permission), tpc.org, vmware.com, the Intel leader board partner page for HPE results, and the AMD EPYC leader board page.

Intel and Xeon are trademarks of Intel Corporation in the U.S. and other countries. AMD and EPYC are trademarks of Advanced Micro Devices, Inc. Microsoft, Windows, and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. Red Hat is a registered trademark of Red Hat, Inc. in the United States and other countries. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries.

SAP and other SAP products and services mentioned herein as well as their respective logos are trademarks or registered trademarks of SAP SE (or an SAP affiliate company) in Germany and other countries. See sap.com/benchmark for further details and sap.com/corporate/en/legal/trademark.html for additional trademark information and notices. VMmarkยฎ is a product of VMware, Inc. VMware vSAN, VMware vSphere, and VMware are registered trademarks or trademarks of VMware, Inc. and its subsidiaries in the United States and other jurisdictions.


SPEC and the names SPEC CPU, SPECfp, SPECint, SPECrate, SPECspeed, SPECjbb, SPEC OMP, SPEC VIRT, SPECvirt, and SPECpower_ssj are registered trademarks of the Standard Performance Evaluation Corporation (SPEC); see spec.org. All rights reserved; reprinted with permission. All other product and service names mentioned are the trademarks of their respective companies.

 

About the Author

ComputeExperts

Our team of Hewlett Packard Enterprise server experts helps you to dive deep into relevant infrastructure topics.