Converged Data Center Infrastructure

The cat's out of the bag! Latency issues plague public cloud

GaryThome

 

Technology, like everything else, moves in trends and cycles. Cloud computing arrived more than 10 years ago as the hot new tech trend. But now, are things starting to shift again? Are organizations thinking twice before automatically moving essential workloads to the public cloud?

The answer is yes – and for a variety of reasons. A few born-in-the-cloud companies have moved from the public cloud back to on-premises data centers – Dropbox is a high-profile example – and public cloud performance (or the lack thereof) was a big reason why.

Letting the cat out of the bag: Public cloud is all about capacity, not performance

When businesses put their applications in the public cloud, they share infrastructure with many other tenants. Of course, this can be a good arrangement: you pay only for what you need, when you need it, and you can scale up or down based upon demand.

But don’t forget the whole business model of public cloud: time-sharing. The provider is giving everyone a slice of the timeshare pie, which means that the provider is promising capacity – not performance. I am not the first person to let this particular cat out of the bag. I just want to reiterate it – yes, public cloud providers do place performance limits on the services they provide.
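The capacity-versus-performance tradeoff shows up concretely in burstable instance types, which deliver full speed only while a credit balance lasts and then throttle down to a guaranteed baseline. Here is a minimal sketch of such a credit scheme; all parameters are hypothetical and do not reflect any specific provider's actual values:

```python
# Illustrative model of a burstable-instance CPU credit scheme.
# All parameters are hypothetical, not any cloud provider's real figures.

class BurstableInstance:
    def __init__(self, baseline=0.2, credits=30.0, max_credits=60.0):
        self.baseline = baseline        # guaranteed fraction of one CPU
        self.credits = credits          # accumulated credits, in CPU-minutes
        self.max_credits = max_credits

    def run_minute(self, demand):
        """Return the CPU fraction actually delivered for one minute of demand."""
        if demand <= self.baseline:
            # Under baseline: demand is met, and unused baseline accrues as credit.
            self.credits = min(self.max_credits,
                               self.credits + (self.baseline - demand))
            return demand
        burst = demand - self.baseline
        if self.credits >= burst:
            self.credits -= burst       # spend credits to burst above baseline
            return demand
        delivered = self.baseline + self.credits  # credits exhausted: throttled
        self.credits = 0.0
        return delivered

vm = BurstableInstance()
# Sustained 100% demand: full speed while credits last, then throttled to baseline.
delivered = [vm.run_minute(1.0) for _ in range(60)]
```

The takeaway: the provider's commitment is to the baseline (capacity), while peak performance is best-effort, which is exactly the distinction that matters for latency-sensitive workloads.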

Of course, for workloads you deploy on premises, you get to decide what the performance slice should be. Having this choice is imperative for applications that require reduced latency, such as those for big data and financial services.

Are new technologies making data centers new again?

Two new technologies are now available that can boost performance for all applications: containers and composable infrastructure. Running containers on composable infrastructure can deliver better performance across the board.

Containers are a form of OS-level virtualization: every container on a host shares a common lightweight Linux kernel, and each container packages only the pieces unique to its application. Because so little is duplicated per instance, you can run far more containers on a given server than virtual machines (VMs).
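To see why sharing one kernel raises density, compare per-instance memory overhead. A back-of-the-envelope sketch, using illustrative numbers rather than measured values:

```python
# Back-of-the-envelope density comparison: VMs vs. containers on one host.
# All sizes below are illustrative assumptions, not measured figures.

HOST_RAM_GB = 256
APP_RAM_GB = 1.0              # memory the application itself needs

VM_OVERHEAD_GB = 1.5          # guest OS + hypervisor bookkeeping per VM (assumed)
CONTAINER_OVERHEAD_GB = 0.05  # per-container runtime state; kernel is shared (assumed)

max_vms = int(HOST_RAM_GB // (APP_RAM_GB + VM_OVERHEAD_GB))
max_containers = int(HOST_RAM_GB // (APP_RAM_GB + CONTAINER_OVERHEAD_GB))

print(max_vms)         # 102 instances as VMs
print(max_containers)  # 243 instances as containers
```

Even with these rough assumptions, eliminating the per-instance guest OS more than doubles how many application instances the same server can hold.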

A big benefit of containers is increased performance. And when you run containers on bare metal, performance improves even more, because bare-metal containers don’t need the hypervisor’s hardware-emulation layer sitting between the application and the server.

HPE and Docker recently tested the performance of applications running inside a single, large VM versus directly on top of a Linux® operating system installed on an HPE server. When bare-metal Docker servers were used, performance of CPU-intensive workloads increased up to 46%. For businesses where performance is paramount, these results tell a compelling story.

Yet, some companies have hesitated to move containers out of virtual machines and on to bare-metal because of perceived drawbacks of running containers on bare-metal servers. These drawbacks, such as difficulties with managing physical servers, are definitely relevant when considering yesterday’s data center technologies. Composable infrastructure helps overcome these challenges by making management simple through highly automated operations controlled through software.

Composable infrastructure consists of fluid pools of compute, storage, and fabric that can dynamically self-assemble to meet the needs of an application or workload. These resources are defined in software and controlled programmatically through a unified API, thereby transforming infrastructure into a single line of code that is optimized to the needs of the application.
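The core idea – infrastructure declared as data and assembled from fluid pools by software – can be sketched in a few lines. The schema, pool sizes, and `compose` function below are hypothetical illustrations, not the actual HPE OneView/Synergy API:

```python
# Sketch of the composable idea: a workload's needs declared as data, then
# reserved from fluid resource pools by software. Schema and numbers are
# hypothetical, not the real HPE Synergy/OneView API.

profile = {
    "name": "analytics-node",
    "compute": {"cores": 16, "ram_gb": 128},
    "storage": {"capacity_gb": 2000, "tier": "ssd"},
    "fabric": {"bandwidth_gbps": 25},
}

pools = {"cores": 512, "ram_gb": 4096, "storage_gb": 64000, "fabric_gbps": 400}

def compose(profile, pools):
    """Reserve resources for a profile from the fluid pools, if available."""
    need = {
        "cores": profile["compute"]["cores"],
        "ram_gb": profile["compute"]["ram_gb"],
        "storage_gb": profile["storage"]["capacity_gb"],
        "fabric_gbps": profile["fabric"]["bandwidth_gbps"],
    }
    if any(pools[k] < v for k, v in need.items()):
        return None  # not enough free capacity to instantiate this profile
    for k, v in need.items():
        pools[k] -= v   # resources self-assemble: deducted from the shared pools
    return dict(profile, status="provisioned")

node = compose(profile, pools)
```

Because the whole definition is data driven through one API, the same template can be stamped out repeatedly or torn down and recomposed as workload needs change.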

Because composable infrastructure is so simple to deploy and easy to use, it removes many of the drawbacks you would traditionally encounter when deploying containers on bare-metal. The end result is better performance at lower costs within your own data center. The combination of containers and composable infrastructure is a marriage made in heaven.

A hybrid IT cloud strategy solves the performance problem of public cloud

When considering where to deploy, first consider the performance needs of your application. Then compare those performance needs against the service levels offered by public cloud vendors and what you can deliver on premises. As I wrote in a previous article, businesses need to determine which workloads should be in the public cloud and which ones should remain on traditional IT or a private cloud. And thanks to today’s new technologies, containers and composable infrastructure, staying with traditional data-center deployments may just be the better choice.
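The placement logic above – match each workload's latency needs against what each venue can commit to – can be captured in a toy helper. The venues, thresholds, and ordering here are illustrative assumptions only:

```python
# Toy workload-placement helper following the article's advice: compare an
# application's latency needs against what each venue can commit to.
# Venue figures and ordering are illustrative assumptions, not real SLAs.

VENUES = {
    # worst-case p99 latency (ms) each venue can commit to (assumed figures),
    # listed in order of preference when cost is the tiebreaker
    "public cloud": 20.0,
    "private cloud": 5.0,
    "on-prem bare metal": 1.0,
}

def place(workload_p99_ms):
    """Pick the first (cheapest) venue whose commitment meets the latency SLO."""
    for venue, committed_p99 in VENUES.items():
        if committed_p99 <= workload_p99_ms:
            return venue
    return "on-prem bare metal"  # tightest SLOs stay on dedicated hardware

print(place(50.0))   # public cloud
print(place(2.0))    # on-prem bare metal
```

A real placement decision would weigh cost, data gravity, and compliance as well, but the shape of the comparison is the same.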

To learn more about containers running on HPE bare-metal servers, click here. To read about the benefits of HPE’s first composable infrastructure, HPE Synergy, read HPE Synergy for Dummies. To find out how HPE can help you determine a workload placement strategy and how to best meet your service level agreements, check out HPE Pointnext.

Gary

Follow HPE Composable Infrastructure


Comments
Dennis Faucher

How do data centers like Equinix, with low-latency connections to multiple public clouds, impact latency?

Ruud van der Hulst

I fully agree with the mentioned issues. With IoT in mind, that is one of the reasons edge computing is emerging: "do the crunching at the location, keep (big) data in the cloud."

For today's reality, InContinuum (a technology partner of HPE) has developed an Enterprise Cloud Management Platform to enable organizations to manage multiple hybrid clouds. So depending on the performance needed, different applications can run on different platforms, on-prem or in public clouds. A next step is policy-based migration of service objects/containers, one of the policies of course being speed/bandwidth.

WhitneyGarcia

Hi Dennis,

Data centers such as Equinix have good connectivity to the major Internet networks, which can help reduce network latency between servers and users. If that is the need, then those applications are best run in data centers with good Internet connectivity, whether in a public cloud or not.

However, if the latency sensitivity is within the application itself, or in the application's access to data, then those latency concerns can exist inside a single public cloud data center, and reducing Internet latency doesn't help. In those cases, a private cloud would be best, whether placed in a corporate data center or a co-lo facility.

- Gary
