
The Cloud Cliff

mikeshaw747

Over the last couple of months, two pieces of research that HPE commissioned have come to completion. Both looked at which kinds of workloads are a good fit for public cloud and which are better suited to running on premises.

In my last blog post, I talked about how we sometimes “over-rotate” when a new technology comes along - “everyone is going to read books on e-readers by 2024”, “the iPad will mean the death of the home PC”, “everyone will shop for groceries online”. These research pieces tried to get away from that “you can move pretty much everything to the cloud” thinking.

The first piece of research conducted in-depth interviews with 20 customers who had changed their mix of cloud versus on premises by bringing some workloads back to their data centres - “workload come home”, as we call it.

The second piece of research simply asked customers for their opinions on which workloads worked best on premises and which worked best in the cloud.

While the two research pieces were done independently, the results are very similar. In this post, I’ll give my take on their conclusions.

There are, of course, a number of things that customers considered when choosing to bring their workloads back from the cloud. From reading the research, these considerations seemed to break down into three areas:

  1. Cost of running this particular workload
  2. Performance - was the workload performance-critical, and what was the best way to get the required performance profile.
  3. Control - the degree of control that was needed for the workload

Let’s look at what the research tells us about each of these areas in turn.


1 : Cost
Cost considerations broke down into a number of sub-areas:

a. Billing and predictability of costs
b. Cloud-sprawl and sub-optimal purchasing
c. Administrative costs - actual versus expectations
d. The costs of “special” workloads like analytics of large datasets

a. Billing and predictability of costs

"Public cloud provider’s billing is hard to understand … we were warned”.

Every single one of the 20 interviewees said they found public cloud billing very difficult to understand.

“Charges were hard to predict for the year and/or cost significantly more than we planned for”.

Probably more importantly, costs varied from month to month and the variation was very difficult to predict. For many, this is a real problem - as a business person, you need to be able to predict costs so that you can predict profit and plan accordingly.

b. Cloud-sprawl and sub-optimal purchasing

“Department heads with access to a credit card … swipe that credit card, and spin up some VMs [typically at “on demand” rates]”

Anyone in an organisation can “slap their credit card on the table and buy cloud services”. This is, of course, why many like cloud - they can bypass their usual interactions with Enterprise IT and get going fast.

But from a cost point of view, this can lead to problems. Any purchasing manager will tell you that a lot of teams buying independently leads to sub-optimal purchasing. And in the case of cloud, that’s exactly what happens - each “rogue department” (this term was used a lot in the interviews) will typically buy its cloud services at the “on demand” rate - the most expensive rate. If a Chief Financial Officer could get a handle on all this disparate purchasing, they would realise that, as an organisation, they are paying over the odds.

It seems it’s difficult to get control over this situation. One interviewee had to resort to highly manual methods …

“[what we now have to do] with the public cloud is manually go in and spin down VMs and servers that are running but not actually being used”
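Today, this kind of clean-up can at least be scripted. Below is a minimal sketch of the idea, assuming (purely for illustration) that the workloads run on AWS EC2 and that credentials for the boto3 library are already configured; it flags running instances whose average CPU over the last week is below a threshold so that someone can review them and spin them down. The threshold and look-back window are illustrative assumptions on my part, not figures from the research.

```python
# Minimal sketch: flag EC2 instances that look idle so they can be reviewed
# and spun down. Assumes AWS credentials and region are already configured.
import datetime
import boto3

CPU_THRESHOLD = 5.0    # percent - illustrative assumption
LOOKBACK_DAYS = 7      # illustrative assumption

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

end = datetime.datetime.utcnow()
start = end - datetime.timedelta(days=LOOKBACK_DAYS)

reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for reservation in reservations:
    for instance in reservation["Instances"]:
        instance_id = instance["InstanceId"]
        datapoints = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
            StartTime=start,
            EndTime=end,
            Period=3600,
            Statistics=["Average"],
        )["Datapoints"]
        if not datapoints:
            continue
        avg_cpu = sum(dp["Average"] for dp in datapoints) / len(datapoints)
        if avg_cpu < CPU_THRESHOLD:
            print(f"{instance_id}: avg CPU {avg_cpu:.1f}% over "
                  f"{LOOKBACK_DAYS} days - candidate to spin down")
```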

c. Administrative costs

Many cite a reduction in administrative costs as a motivation for moving to cloud. But a number of our interviewees found that the reality didn’t match expectations : “We thought public cloud would reduce IT staff headcount. It never did”

d. Costs of “special workloads” like large datasets

“Data in the public cloud started to get far too costly”

“It costs a lot to move data to and from the cloud for analysis”

“…large files created extremely high bursts of usage that then spiralled billing and costs out of control”

It seems that if you want to do analyses on large datasets, cloud can be expensive.


2 : Performance
Lag can be an issue for cloud-based applications :

"Lag time is an issue and impacts our staff and our customers. When it sometimes takes two minutes to open apps this is irritating a customer because of poor performance is an automatic ‘in the door of the CEO’ for change”

And if high-density data analytics is required, an on prem solution may well be faster :

“Private environment is at least 25% faster [for analytics]”

Frost and Sullivan created the second of our “Cloud Cliff” reports. These are some of their findings regarding performance

“61% of IT decision-makers cite concerns about poor or inconsistent application performance as driving the decision not to deploy a workload in the public cloud”

“80% of financial services firms say that concerns about poor or inconsistent application performance are keeping them from placing their most critical workloads into a hosted cloud environment”

“Public cloud deployments are based on virtualized infrastructure - every application is subject to microseconds of delay from the hypervisor layer. For latency-sensitive applications such as algorithmic trading, or sensor-based alarm systems even that tiny delay is intolerable”

“Because public cloud users have no visibility into the physical infrastructure, enterprise IT organizations may not easily be able to diagnose or fix the root cause of sporadic sluggishness”


Workloads that, for performance reasons, may be better on premises
Frost and Sullivan go further and characterise those workloads that, because of performance considerations, may be better on premises… 

High-speed transactions (example: financial trading, search functions): When success depends on “speed to execute” transactions, enterprises tend to retain control with an on-premises deployment. Only 5% of such financial trading firms say they trust their proprietary trading platforms to the public cloud.

Customer-facing transactions (example: contact center): Not all performance-sensitive applications support specialized workloads. Any time employee productivity or customer satisfaction are dependent on consistent, speedy system responses, businesses are likely to deploy applications on premises. In addition, to enhance performance, applications are often deployed in proximity with the databases they access. Thus, for example, 40% of businesses maintain their primary customer databases on premises; most of these have deployed their contact center platform in the data center as well.

Big data analytics (example: life-sciences research, consumer geo-location-based applications): Firms that capture and analyze massive amounts of data require consistent, high levels of throughput and processing to support their efforts. Such large-scale applications are often deployed in private compute environments that ensure control over performance factors including availability, speed, continuity, and throughput. Furthermore, businesses are using intelligent analytics to extract value from data that is diverse in format and source.

Consider an example of a large retailer that seeks to optimize inventory through predictive analytics, using both public data (real-time weather maps) and proprietary data (archived customer retail transactions and inventory systems). In this hybrid use case, the application workloads may be optimally deployed in the private data center, along with the proprietary database and system, and reach into the public cloud for the public data. By deploying the application on premises, the business not only can secure its proprietary data, but it has better control over application performance.
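A minimal sketch of this hybrid pattern is shown below. The weather endpoint, the local database and the table names are hypothetical placeholders I have made up for illustration; the point is simply that the application and the proprietary data stay on premises, and only the public data is fetched from outside.

```python
# Sketch of the hybrid pattern: proprietary data stays in the on-premises
# database; only the public data (weather) is fetched over the network.
# The URL, database name, and table/column names are hypothetical.
import sqlite3
import requests

WEATHER_API = "https://api.example-weather.com/v1/forecast"  # hypothetical endpoint

def fetch_public_forecast(region: str) -> dict:
    """Reach out to the public cloud for the public dataset only."""
    response = requests.get(WEATHER_API, params={"region": region}, timeout=10)
    response.raise_for_status()
    return response.json()

def load_local_sales(region: str) -> list[tuple]:
    """Proprietary transactions never leave the local data centre."""
    with sqlite3.connect("retail_onprem.db") as conn:  # hypothetical local DB
        return conn.execute(
            "SELECT store_id, sku, units_sold FROM sales WHERE region = ?",
            (region,),
        ).fetchall()

def plan_inventory(region: str) -> None:
    forecast = fetch_public_forecast(region)
    sales = load_local_sales(region)
    # Combine the two locally; the analytics (and the proprietary data)
    # stay on premises.
    print(f"{region}: {len(sales)} sales rows, forecast = {forecast.get('summary')}")

if __name__ == "__main__":
    plan_inventory("north-west")
```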

Analytics-based business processes (example: supply chain): As real-time or near-real-time analytics are infused into more business processes, and as sophisticated intelligence (including artificial intelligence) is built into technology platforms, application performance becomes more important. As data grows in scale and diversity, the applications that access the increasing volumes must be able to accommodate the growth without loss of performance. That means that not only the application logic, but also the supporting infrastructure, must be fine-tuned for fast processing and throughput.

Internet of Things (example: manufacturing production, predictive maintenance, medical machine learning): Although Internet of Things is often associated with public cloud deployments, in fact many machine-to-machine workloads call for the speed and performance of local processing. Consider a highly automated, software-driven manufacturing process, in which sensors continually measure in granular detail how the large-scale equipment is operating. The data is fed to a local processor that triggers real-time action: warnings and alerts, perhaps shutting down the equipment or activating fire-suppression systems without human intervention. Such immediate responses cannot tolerate the delay associated with transmission to or from the public cloud.

I have blogged a couple of times on the new Systems of Action that HPE believes are going to become important. If a System of Action needs a fast response or if it processes lots of data, then the machine learning compute platform that drives it must be local - it can’t be in the cloud.
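To make the “local decision” point concrete, here is a minimal sketch of an edge control loop along the lines of the manufacturing example above. The sensor and the shutdown action are simulated placeholders (real code would talk to the PLC or equipment controller); what matters is that the decision is taken next to the machine, with no round trip to a cloud service.

```python
# Sketch of a local (edge) control loop: the decision to shut equipment
# down is taken on site, with no round trip to the public cloud.
# The sensor and actuator are simulated; real code would talk to the PLC.
import random
import time

VIBRATION_LIMIT = 12.0   # illustrative threshold, arbitrary units

def read_vibration() -> float:
    """Simulated sensor read standing in for a real device driver."""
    return random.uniform(0.0, 15.0)

def shut_down(reason: str) -> None:
    """Simulated actuator call standing in for a real equipment trip."""
    print(f"SHUTDOWN: {reason}")

def control_loop(poll_seconds: float = 0.1) -> None:
    while True:
        reading = read_vibration()
        if reading > VIBRATION_LIMIT:
            # React immediately - no network hop, no cloud latency.
            shut_down(f"vibration {reading:.1f} exceeded limit {VIBRATION_LIMIT}")
            break
        time.sleep(poll_seconds)

if __name__ == "__main__":
    control_loop()
```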


3 : Control
Control breaks down into four areas…

a. Control over compliance
b. Data gravity - getting data into cloud and getting it stuck there
c. Vendor lock-in thru applications and APIs that work with only one cloud vendor
d. Data resilience - backup and recovery

a. Control over compliance

Back to these “rogue departments slapping the credit card on the table” to autonomously buy cloud services…

“Guys who spun up those VMs didn’t realize they implemented improperly … no fail-overs”.

... some of the groups bought cloud services but then didn’t implement their applications in a way that was compliant. You saw this when S3 US East went down - a lot of cloud-based applications stopped working. A number of comments were posted saying that those that were affected had not implemented failover properly - in other words, “it’s easy to create a non-compliant application in the cloud”. You can use automation tools to continuously check compliance if your application is on premises.
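As a sketch of what such continuous checking might look like, the snippet below applies a couple of illustrative policy rules (failover configured, data held in an allowed region) to a hypothetical inventory of deployed services. The inventory format and the rules are assumptions for illustration only, not a real compliance framework.

```python
# Sketch of a continuous compliance check: a couple of illustrative policy
# rules applied to a hypothetical inventory of deployed services.
ALLOWED_REGIONS = {"eu-west", "eu-central"}   # illustrative data-sovereignty rule

# Hypothetical inventory - in practice this would come from a CMDB or
# from querying the environment itself.
inventory = [
    {"name": "billing-app", "region": "eu-west", "has_failover": True},
    {"name": "rogue-vm-42", "region": "us-east", "has_failover": False},
]

def check_compliance(services: list[dict]) -> list[str]:
    findings = []
    for svc in services:
        if not svc["has_failover"]:
            findings.append(f"{svc['name']}: no failover configured")
        if svc["region"] not in ALLOWED_REGIONS:
            findings.append(f"{svc['name']}: data held outside allowed regions "
                            f"({svc['region']})")
    return findings

if __name__ == "__main__":
    for finding in check_compliance(inventory):
        print("NON-COMPLIANT:", finding)
```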

”Rogues … created data sovereignty issues”

... as above, a credit card slapper inadvertently violated data sovereignty rules.

“Rogues spinning up VMs on a whim … is a security concern”

... this time it’s compliance to security regulations that can be inadvertently violated.

b. Data gravity

This is where you put all your data into the cloud and then realise that it’s difficult and costly to get it out should you decide to do so. I have this “thing” about the two phases that a modern application goes thru. These days, we don’t do big 18 month projects any more. Instead, we tend to take a series of experimental steps until we decide that, yes, this application is strategic. In other words, our applications go thru two very different phases - the experimental phase and then the strategic phase. While cloud might be a good choice for the experimental phase, the criteria for the application’s platform change dramatically when we enter the strategic phase (I’ve got a blog post queued up on this topic which I’ll post in a few weeks).

And a number of our interviewees have done a “workload come home” as they moved from the experimental to the strategic phase. The most noteworthy of these is probably Dropbox - as the demands on their service became less volatile and more predictable, they felt that it was better for their company if they used on premises computing.

Should you wish to do a “workload come home”…

“..data gravity makes lock-in worse with [cloud]. With cloud, there are fees to do that and they’re hidden until you try to do it.”

“Cloud usage comes with a level of lock-in. A hedge fund has to extract back old data to onsite. [With public cloud] it would need to be migrated and translated to make it usable. We get charged for accessing the data and using the compute power”
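Those egress and access fees are easy to underestimate. A back-of-the-envelope calculation like the one below makes the “hidden until you try to do it” point concrete; the per-gigabyte rate is purely an illustrative assumption, since actual egress pricing varies by provider, tier and region.

```python
# Back-of-the-envelope egress cost: what it might cost to bring a dataset
# back on premises. The rate is an illustrative assumption only - actual
# per-GB egress pricing varies by provider, tier, and region.
DATASET_TB = 200              # size of the data to repatriate
EGRESS_RATE_PER_GB = 0.08     # assumed $/GB - check your provider's price list

dataset_gb = DATASET_TB * 1024
egress_cost = dataset_gb * EGRESS_RATE_PER_GB

print(f"Repatriating {DATASET_TB} TB at ${EGRESS_RATE_PER_GB}/GB "
      f"is roughly ${egress_cost:,.0f} in egress fees alone")
```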

c. Lock-in thru APIs

If you can put your hand on heart and say, “this workload will never, ever, ever come back on premises”, then API lock-in is not an issue. But as I said above, when your application goes from its experimental to its strategic phase, on premises may be a better option.

As a number of the interviewees said...

"If you plan a move to a public cloud, consider how you’ll move back in case you later decide to do so”

d. Data resilience - backup and recovery

In preparation for writing this post, I talked to one of the product managers on our backup and recovery products. He struggles to understand why people talk about cloud being a better option for backup and recovery:

cost : His analysis shows that an on premises solution costs from 3 to 6 times less than cloud.

faster backup recovery : Should you need to recover the data, this can be done more than 10 times faster on premises.

backup testing much easier and less costly : And, one of the most important practices in data resilience is periodically checking that the backup and recovery process actually works (how often have you needed data recovery and been told, “oh - the backup seems to have failed”?). Testing a backup on premises is fast and doesn’t cost anything. This is not true of cloud backups and, for this reason, many have not been tested.

multiple single points of failure : With cloud backup, you have single points of failure. If, for example, you access the cloud thru a T1 link and workmen cut thru your cable, you have lost access to your backup. Or, should the cloud storage cluster you use die (like S3 US East did recently), you have lost your backup. As Michael Crichton points out in his book Airframe, disasters occur when two things fail in series - you need to recover from backup and that backup is not available because, for example, someone chopped your T1 line in half. The simple probability sketch below makes this point concrete.
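The “two failures in series” argument can be put as a simple probability sketch: the data-loss scenario needs a restore to be required and the backup to be unreachable at the same time, and with a single network path that second probability is dominated by the link. All the figures below are illustrative assumptions, not measurements.

```python
# Sketch of the "two failures in series" argument: the data-loss scenario
# needs a restore to be required *and* the backup to be unreachable.
# All figures are illustrative assumptions.
P_NEED_RESTORE = 0.02                 # chance a restore is needed in a given year
LINK_AVAILABILITY = 0.999             # single T1/WAN link to the cloud backup
CLOUD_STORE_AVAILABILITY = 0.9999     # cloud storage service itself
LOCAL_BACKUP_AVAILABILITY = 0.99999   # on-premises copy, no WAN dependency

def p_unavailable(*availabilities: float) -> float:
    """Probability that a chain of components in series is not all up."""
    p_all_up = 1.0
    for a in availabilities:
        p_all_up *= a
    return 1.0 - p_all_up

p_cloud_path_down = p_unavailable(LINK_AVAILABILITY, CLOUD_STORE_AVAILABILITY)
p_local_down = p_unavailable(LOCAL_BACKUP_AVAILABILITY)

print(f"P(restore needed AND cloud backup unreachable) = "
      f"{P_NEED_RESTORE * p_cloud_path_down:.6f}")
print(f"P(restore needed AND local backup unreachable) = "
      f"{P_NEED_RESTORE * p_local_down:.6f}")
```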

There is one type of backup that does seem to suit cloud well. If you have to keep data “for ever” for legislative reasons, then cloud is a good option.


RELATED POSTS

Will cloud keep growing and growing until it takes over everything? History says, "no" : Will cloud keep growing forever? History tells us that new innovations like cloud will reach an equilibrium with existing technologies. It's already happened with e-books and tablets, for example.

Cloud myth : once in the cloud, always in the cloud : an important transition point occurs when a digitally-fuelled application moves from its experimental to its strategic phase. The requirements of its compute platform can change quite dramatically.

The cloud is dead. Long live the Edge : IoT means that when there is lots of data to analyze or where fast action is required, we need to use edge compute. This edge compute can't be provided by the centralized cloud model. 

The full "cloud cliff" research report can be found here


Mike Shaw
Director Strategic Marketing
Hewlett Packard Enterprise

Twitter: @mike_j_shaw
LinkedIn: Mike Shaw

 


About the Author

mikeshaw747

Mike has been with HPE for 30 years. Half of that time was in research and development, mainly as an architect. The other 15 years have been spent in product management, product marketing, and now, strategic marketing.
