Comparing HPE on-premises infrastructure vs. Amazon Web Services (AWS): Pay-per-use consumption
Part 5: To wrap up our five-part blog series, we focus on pay-per-use consumption models and the evolving dynamics driving Hybrid IT.
We introduce price-performance with agility as a key consideration in any Hybrid Cloud strategy. You can download the complete technical white paper here: HPE On-Prem vs. AWS
The rise of the public cloud significantly disrupted the traditional IT model by offering new pay-per-use capabilities: no upfront payments, speed of deployment, the ability to shift capital expenses to operational expenses, the promise of better utilization rates, and the ability to dismantle a setup and walk away without penalty. In fact, we leveraged these very benefits to deploy the AWS test environment for this study.
In this blog series so far, we have compared TCO, pure performance, and price-performance. The comparisons show that for cloud-scale advanced analytics workloads, AWS comes with higher TCO, lower workload throughput, and higher overall price-performance (lower is better) than HPE on-prem. This study is intended to support multi-cloud planning, so it has consistently taken a conservative approach to the price-performance factors.
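As a reminder of the metric, price-performance here is cost divided by measured throughput (total Queries per Minute), so a lower number is better. A minimal illustration with invented figures, not the study's measurements:

```python
def price_performance(total_cost, throughput_qpm):
    """Cost per unit of throughput (e.g. $ per QPM): lower is better."""
    return total_cost / throughput_qpm

# Hypothetical figures for illustration only:
onprem = price_performance(total_cost=1_000_000, throughput_qpm=500)  # 2000.0
cloud  = price_performance(total_cost=1_200_000, throughput_qpm=400)  # 3000.0

# A deployment can cost more AND deliver less throughput, compounding
# its price-performance disadvantage.
print(f"on-prem: {onprem:.0f} $/QPM, cloud: {cloud:.0f} $/QPM")
```

The point of the composite metric is that a cost gap and a throughput gap multiply: a configuration that is both more expensive and slower falls behind faster than either gap alone suggests.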
One question we frequently hear: Have enterprises that migrated to the public cloud realized any downsides, and what are their future plans?
A recent IDC report, Cloud Repatriation Accelerates in a Multi-Cloud World*, indicates that over the next two years survey respondents plan to repatriate an average of 50% of their existing public cloud applications to on-prem private cloud, hosted private cloud, or on-prem non-cloud. According to IDC's survey, the key repatriation drivers are security, performance, cost, control, and centralization/shadow IT reduction.
As we discussed in Part 4, a major part of price-performance efficiency depends on the ability to control physical infrastructure, which is not possible with public cloud deployments like AWS. Enterprises are recognizing this dependency for certain applications in their portfolios. However, infrastructure providers must now also offer flexible consumption alternatives for on-prem private cloud deployments to compete in the market today and in the future.
Enterprise applications that stay or move back on-prem due to the need for increased control over the underlying physical infrastructure now also require agile, flexible consumption alternatives like those available with public cloud deployments. In the study, we modeled three scenarios involving different consumption requirements. Each scenario establishes a varying workload throughput requirement over time, and we modeled the infrastructure required to meet those throughput requirements for both HPE on-prem and AWS public cloud.
- Scenario 1 is a use case where short-duration projects are required at a sufficiently low frequency that maintaining continuous infrastructure to support the workload is not cost-effective. One example is analytics for closing the books at the end of each fiscal quarter: four times a year, infrastructure is turned on for one week and then not used again until the following quarter close. For this scenario, deploying on-demand models via public cloud, such as the AWS EC2 on-demand offering, seems to be the better choice from a price-performance point of view.
- Scenario 2 is where, for the AWS deployment model, a baseline amount of infrastructure is purchased with long-term commitments to achieve the lowest cost, and additional on-demand infrastructure is purchased to support "seasonal" bursts of incremental throughput requirements up to 2x. For the HPE on-prem deployment, we simply modeled infrastructure that was over-provisioned by 2x. This scenario illustrates that over-provisioning the base workload throughput requirement with HPE on-prem by 2x still comes in at 44% lower total cost than managing the excess headroom with AWS on-demand during periods of lower utilization.
- Scenario 3 is a case where incremental workload throughput expansion and contraction is expected, but not certain. For the AWS models, we created optimized combinations of both on-demand and three-year, paid-all-upfront, reserved instances. For the HPE on-prem models, we used HPE GreenLake Flex Capacity.
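The break-even intuition behind Scenario 1 can be sketched with a toy annual cost model. All rates, node counts, and durations below are hypothetical placeholders, not figures from the study:

```python
# Toy model of Scenario 1: infrequent, short-duration workloads.
# All rates below are invented for illustration, not study figures.

WEEKS_PER_YEAR = 52

def on_demand_annual_cost(hourly_rate, nodes, weeks_active):
    """Public cloud on-demand: pay only for hours the cluster runs."""
    return hourly_rate * nodes * weeks_active * 7 * 24

def always_on_annual_cost(weekly_rate_per_node, nodes):
    """Continuously provisioned infrastructure: pay for all 52 weeks."""
    return weekly_rate_per_node * nodes * WEEKS_PER_YEAR

# Quarter-close analytics: 4 one-week bursts per year on 10 nodes.
burst  = on_demand_annual_cost(hourly_rate=3.0, nodes=10, weeks_active=4)
steady = always_on_annual_cost(weekly_rate_per_node=250.0, nodes=10)

print(f"on-demand bursts: ${burst:,.0f}")   # utilization ~8% of the year
print(f"always-on:        ${steady:,.0f}")
```

With utilization this low, paying a higher hourly rate for only the active weeks beats paying a lower effective rate for all 52 weeks, which is why Scenario 1 favors the on-demand model.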
The HPE GreenLake Flex Capacity option provides expansion and contraction by carrying an on-prem reserved buffer that is billed only while it is in use. The portion of the reserved buffer that is used costs the same as the committed level, so there are no cost premiums to scale up as there are with AWS on-demand. HPE GreenLake takes on the risk so that infrastructure is ready when it is needed.
For Scenario 3, we generated a use case for throughput demand. Then we compared HPE GreenLake Flex Capacity to AWS Reserved plus on-demand instances, as you can see in the figure below.
Bear in mind, the throughput service-level requirements determine the infrastructure to be deployed, and as we know from this study, not all nodes or instances perform the same. Understanding the performance of each node or instance plays a significant role in understanding the price-performance of the entire set of deployments. Since the GreenLake Flex Capacity infrastructure is operated on-prem, all of the benefits of enhanced control apply, including the significant performance advantage.
The figure directly above shows the weekly costs for Scenario 3. To meet the throughput service-level requirement on a week-by-week basis, the cost variance is much wider for the AWS infrastructure as it requires the use of some AWS on-demand instances which are significantly more expensive than AWS reserved instances.
Over a one-year period, the weekly cost to support the throughput requirement of this workload with HPE GreenLake Flex Capacity was on average one-third that of the AWS solution. This lower weekly cost is achieved while providing the agility of pay-per-use and the enhanced control of on-prem placement.
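The two pricing structures in Scenario 3 can be contrasted with a minimal sketch. The demand profile and unit rates below are invented for illustration; only the pricing shapes follow the description above: GreenLake bills the metered buffer at the committed rate, while AWS bills burst capacity at an on-demand premium over the reserved rate.

```python
import random

random.seed(7)
# Hypothetical weekly node demand: committed base of 10, bursts up to 15.
BASE, PEAK = 10, 15
demand = [random.randint(BASE, PEAK) for _ in range(52)]

UNIT = 200.0       # hypothetical weekly cost per committed/reserved node
OD_PREMIUM = 3.0   # hypothetical on-demand multiplier over the reserved rate

def greenlake_week(nodes_used):
    # Committed base plus metered buffer, both at the same unit rate.
    return BASE * UNIT + max(0, nodes_used - BASE) * UNIT

def aws_week(nodes_used):
    # Reserved base plus on-demand instances at a premium for bursts.
    return BASE * UNIT + max(0, nodes_used - BASE) * UNIT * OD_PREMIUM

gl  = sum(greenlake_week(d) for d in demand) / len(demand)
aws = sum(aws_week(d) for d in demand) / len(demand)
print(f"avg weekly: GreenLake ${gl:,.0f} vs AWS ${aws:,.0f}")
```

Note this sketch captures only the pricing-structure difference; the study's actual roughly 3x gap also reflects the on-prem performance advantage, which means fewer nodes are needed to meet the same throughput requirement in the first place.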
With the introduction of HPE GreenLake Hybrid Cloud, HPE Pointnext now offers centralized cloud operations that span both on-prem and off-prem clouds, delivering the same consumption experience no matter where your workloads are running.
Final thoughts on HPE on-prem infrastructure vs. AWS
When we started this analysis, we anticipated a price-performance advantage for this on-prem workload, but we really didn’t know how the results would turn out. The magnitude of the HPE on-prem price-performance advantage surprised our team.
Price-performance and other control factors are key workload placement decision points for maximizing desired business outcomes in Hybrid Cloud strategies. We hope this groundbreaking study and analysis illuminates these factors in your planning.
Follow the blog series
In this blog series, we present details and insights around the HPE and AWS comparison for the following topics:
- An overview of the study and the summary of findings in the comparison of HPE and AWS
- An AWS primer to provide a brief overview of what is available in AWS EC2 IaaS capabilities
- The configuration options and selected configurations: HPE and AWS
- The “all-in” cost analysis for the on-prem configuration, including costs for maintenance labor, data center infrastructure, energy and cooling, carbon footprint and warranty
- The cost analysis for the AWS configurations based on reasonable configurations and purchase options
- Overview of the cloud-scale advanced analytics workload
- The throughput measurements for each configuration in total Queries per Minute (Qpm)
- Analysis of each architecture and comparison of Local SSDs and EBS
- Price-performance for each configuration
- A look at the cost of high-throughput EBS volumes
- Analysis of ability to control attributes of performance, data sovereignty, privacy, security
- Pay per use with variable payments based on actual metered usage
- Dynamic and instant growth flexibility
- Onsite extra capacity buffer
* IDC, Cloud Repatriation Accelerates in a Multicloud World, Doc # US44185818, August 2018
Meet Infrastructure Insights blogger Lou Gagliardi, Sr. Lab Director, Enterprise Solutions & Performance, Hybrid IT, HPE.
Lou Gagliardi joined Compaq Computer Corporation in 1988 and has held executive-level positions in server and storage development engineering and WW presales while at Compaq, Hewlett-Packard, Dell, Newisys and Spansion. He returned to HP in 2010 as Sr. Director of Integrated System Test. In 2013, he transitioned to his current role to deepen HPE’s understanding of the character of the New Style IT workloads.
Hewlett Packard Enterprise