How underlying infrastructure can make or break the “as a Service” experience
In the current as-a-Service world, it’s easy to think that the technology is no longer relevant. Learn why the right (or wrong) infrastructure choice can affect the outcome of your entire experience.
Current trends and cloud initiatives tend to hide the importance of one crucial factor: the supporting physical infrastructure. It's often left in the hands of the infrastructure provider, who takes care of sizing, scaling, availability, management, and other tasks. Customers and developers only need to focus on using that infrastructure, and in Software-as-a-Service or serverless applications, not even that.
But the reality is that the infrastructure is still there, running the applications (even the "serverless" ones, even in the public cloud), and ultimately it either supports the services and business processes or it doesn't.
Choosing the right infrastructure, with the right features and the right size, is critical to achieving the service levels businesses need, regardless of the deployment model. Even if the infrastructure provider handles this behind the scenes, it's worth finding out more.
Let's understand why:
1. Service performance
The infrastructure needs to be right-sized for the workload, following best practices. This is even more relevant for critical applications like databases or SAP ERP.
As SAP solutions architects, we always size the infrastructure for SAP workloads so that CPU utilization does not approach 100%. Running near full utilization can delay response times significantly; even if the application keeps running (so technically there's no outage), the service level, or "speed," won't be acceptable to the business.
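The headroom idea above can be sketched as a small sizing helper. This is a rough illustration, not an official SAP sizing method: the 65% target utilization ceiling and the SAPS figures are assumptions chosen for the example.

```python
# Illustrative sizing helper: given a workload's peak demand (expressed in
# SAPS, SAP's hardware-independent throughput unit) and a target utilization
# ceiling, compute the capacity the infrastructure should provide so peak
# load never pushes the CPU toward 100%. The 0.65 default is an assumption
# for illustration only.

def required_capacity(peak_demand_saps: float,
                      target_utilization: float = 0.65) -> float:
    """Capacity needed so peak load stays at or below the target utilization."""
    if not 0 < target_utilization < 1:
        raise ValueError("target utilization must be between 0 and 1 (exclusive)")
    return peak_demand_saps / target_utilization

# A workload peaking at 40,000 SAPS needs roughly 61,500 SAPS of capacity
# to keep utilization near 65% at peak instead of near 100%.
print(round(required_capacity(40_000)))  # 61538
```

The point of the calculation is simply that sizing to the peak alone is not enough; you size to the peak divided by the utilization you are willing to tolerate.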
The same applies to SAP HANA workloads: the memory needs to be right-sized to accommodate not only the obvious part, the in-memory database itself, but also the additional memory needed for calculations, temporary results, and the like.
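A minimal sketch of that memory reasoning, assuming the commonly cited rule of thumb that total RAM should be roughly twice the in-memory data footprint (the 2x factor and the fixed overhead figure are illustrative assumptions, not an official SAP sizing formula):

```python
# Rough HANA memory estimate: reserve roughly as much memory again as the
# compressed data footprint for calculations, temporary results, and caches,
# plus a fixed allowance for code and stack. All factors here are
# illustrative assumptions, not SAP-published sizing rules.

def hana_memory_estimate_gb(compressed_data_gb: float,
                            work_area_factor: float = 2.0,
                            fixed_overhead_gb: float = 50.0) -> float:
    """Estimated total RAM (GB) for a given compressed in-memory data size."""
    return compressed_data_gb * work_area_factor + fixed_overhead_gb

# A 1 TB compressed data footprint calls for on the order of 2 TB of RAM.
print(hana_memory_estimate_gb(1024))  # 2098.0
```

For real projects, the authoritative numbers come from SAP's own sizing tools and notes; the sketch only shows why "memory = data size" undersizes the system.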
In addition, most hardware and software vendors provide best practices to make the most of the infrastructure, such as how to optimally create VMware VMs to run SAP HANA workloads. Not following these best practices can lead to unwanted behaviors and unsupported configurations. If infrastructure didn't play a role, these best practices wouldn't exist.
Lastly, most software vendors require hardware vendors to certify their hardware to run the software applications. Staying with the same example, VMware certification needs to be achieved for a specific x86 processor generation, and on top of that, SAP needs to certify HANA for that VMware/hardware combination. This is another signal that yes, infrastructure matters. You are not free to run anything anywhere and expect it to work well.
2. Service availability
Software fails more often than hardware. That's a fact. Look at your laptop or your phone: how many times have you had an application or even the OS crash while the device itself was perfectly fine?
But interestingly, in IT today it's not uncommon to mask not-so-reliable hardware with layers of high availability software and clustering, again overlooking the importance of infrastructure.
Years ago (I would say up to the early 2000s), big mission critical databases used to be single instance, running on single-node, big-iron UNIX systems or even mainframes. This approach was of course extremely reliable, but also extremely expensive. Then some software vendors came up with alternatives to run those critical workloads on cheaper commodity hardware, making up for the missing hardware RAS features (Reliability, Availability, Serviceability) with software.
A good example is Oracle RAC (Real Application Clusters). RAC, the scale-out version of the well-known database, offered advantages such as a hot standby, along with downsides like possible performance overhead from inter-node communication. To me, RAC was just a different way of doing things, not necessarily better or worse. But it downplayed the importance of the infrastructure.
In my opinion, when it comes to critical databases or workloads like SAP, safeguarding the service starts with a solid foundation: a reliable, stable infrastructure designed to minimize failures, with redundant components and recovery mechanisms (ideally firmware-based, so you don't depend on OS responsiveness). You fail over to a different system to keep the service operative only if something really major happens that cannot be fixed within the server.
But the main goal should be to avoid the failover, as failovers typically incur downtime, even if it's just a few minutes. Companies can lose millions in minutes.
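The "millions in minutes" claim is easy to make concrete with back-of-the-envelope arithmetic. The revenue figure below is hypothetical, purely for illustration:

```python
# Back-of-the-envelope outage cost: revenue at risk during a downtime window.
# The hourly revenue figure is invented for illustration; plug in your own.

def outage_cost(revenue_per_hour: float, downtime_minutes: float) -> float:
    """Revenue at risk for a given downtime window, assuming a flat rate."""
    return revenue_per_hour / 60 * downtime_minutes

# A business transacting $2M per hour puts roughly $167k at risk during a
# "short" 5-minute failover window.
print(round(outage_cost(2_000_000, 5)))  # 166667
```

Even a clean, scripted failover that lasts only minutes carries a measurable cost, which is the argument for preventing failures at the hardware layer in the first place.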
So, the bottom line is this: the infrastructure of choice should match the workload's RAS requirements. Then add a cluster if needed. But don't try to paper over a weak infrastructure with a cluster. In scale-out configurations, try to use a small number of "fat" (large) nodes. High-end, mission critical hardware may be more expensive upfront, but it can certainly save you money down the road if it helps your business avoid service outages.
3. Total cost of ownership
TCO includes all costs associated with the infrastructure: not just direct costs like acquisition price, support, or facilities, but also indirect costs such as management, security, or availability.
It may seem obvious, but to optimize TCO the infrastructure needs to be right-sized: not too small, not too big. Management costs also need to be optimized. This could mean reducing the number of servers or operating system instances (scale up rather than out, or scale out with fat nodes if scaling up is not an option), or choosing infrastructure with embedded monitoring and self-healing mechanisms.
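The consolidation argument can be sketched with a toy comparison of many "thin" nodes versus fewer "fat" nodes delivering the same total capacity. Every price in this example is invented; the only real point is that per-node costs (OS licenses, patching, monitoring) scale with node count:

```python
# Toy TCO comparison: same total capacity delivered by 16 thin nodes or
# 4 fat nodes. Per-node management cost recurs yearly, so fewer nodes can
# win even when each fat node is far more expensive. All figures invented.

def tco(node_count: int, node_price: float,
        mgmt_cost_per_node_per_year: float, years: int = 5) -> float:
    """Acquisition plus recurring per-node management cost over the term."""
    return node_count * (node_price + mgmt_cost_per_node_per_year * years)

thin = tco(node_count=16, node_price=30_000, mgmt_cost_per_node_per_year=5_000)
fat = tco(node_count=4, node_price=110_000, mgmt_cost_per_node_per_year=5_000)
print(thin, fat)  # 880000 540000
```

With these made-up numbers the fat-node design wins despite a higher per-node price, which is the intuition behind "scale out with fat nodes if scaling up is not an option."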
Last but not least, it's essential to have reliable, secure infrastructure so you don't incur losses as a result of service outages or security breaches.
Tips for as-a-Service success
It's undeniable, and understandable, that most companies are adopting an "as a Service" model, as it offers many advantages over a classic do-it-yourself IT model. The as-a-Service approach defers infrastructure decisions to the provider and lets businesses focus on the workload. Nevertheless, the technology should not be ignored.
My recommendation is to ask the "as a Service" provider what technology they are using, how they size it, how they achieve high availability, and so on. Don't ignore this part, because it can have consequences for the business. And know that even if public cloud providers offer compensation in the form of credits when they miss an SLA, that won't pay for the damage already done, especially in terms of reputation or lost customers. Ask questions and look underneath the cloud's hood.
For those looking for the best of both worlds—the benefits of "as a Service" with the peace of mind of classic IT—HPE offers a range of servers and storage, fully managed and in a pay-per-use model under our HPE GreenLake edge-to-cloud platform. Need network connectivity? We can also add a variety of Aruba switches to the solution. HPE Superdome Flex servers, along with premium storage like HPE Alletra or HPE XP storage arrays, have proven to be a winning combination for customers running mission critical applications like SAP and seeking an as-a-Service, pay-per-use model.
Furthermore, HPE GreenLake is a true pay-per-use model, with a flat rate for the duration of the contract. No surprise or extra costs, like egress or data transfer fees. You can therefore predict how much you're going to pay for the solution for as long as you have it. No hidden, unknown, or unpredictable costs.
Yes, technology still matters. A lot.
Meet Isabel Martin, SAP Solutions Architect, HPE SAP NA Competence Center
Isabel is a multicultural SAP solutions architect with over 20 years of experience in mission critical and SAP architectures in Europe and North America. A native of Spain, Isabel joined HPE in her home country before moving to the United States a decade ago. She has held multiple mission critical roles, from support engineer to field presales to her current position as SAP Competence Center solutions architect. Isabel has a bachelor's degree in electronic physics. Connect with her on LinkedIn!
Compute Experts
Hewlett Packard Enterprise
twitter.com/hpe_compute
linkedin.com/showcase/hpe-servers-and-systems/
hpe.com/servers