"On prem has not stood still"

mikeshaw747

During interviews with companies that decided to move workloads from public cloud to on prem, a phrase that was mentioned a number of times was, “on prem hasn’t stood still”. In other words, the price and functionality of what was available on prem had changed since they had made their original decision to use public cloud.

My colleagues and I explored this concept in more detail, laying down a very rough history of how on prem had changed in the last five or so years.

Let’s start this post by looking briefly at the on-prem state of play when many of our interviewees first went to the cloud.


At the time people started using public cloud…
Costs were higher
Hardware costs were higher, especially for flash memory. Flash gave (and still gives) great performance advantages, but at the time it was relatively expensive.

VMs were managed in silos. You’d probably have a group of VMs for each application area. This resulted in relatively high admin costs and lower-than-optimal utilisation rates, because each VM silo needed its own headroom. We’ll see later how two technologies, first hyperconverged and then composable infrastructure, have changed all this.

VMs are whole machines, but virtual. Each VM therefore contains all the code needed to run that virtual machine. Because of this, you get a lot of data duplication, as each VM carries a lot of the same common code.

“Mode 2”
Along came what Gartner calls “Mode 2” - developers and data scientists working in a digitally-empowered world where they need speed and continuous innovation. The Mode 2 world is much more experimental than IT was used to - “compose, tear-down, re-compose. Rinse and repeat”.

And Mode 2 developers loved the autonomy that public cloud gave them because they didn’t need to jump through hoops, and wait, until the IT department gave them permission to fire up a server. Of course there were private cloud offerings to compare against public cloud, but there weren’t that many, and their functionality lagged that of public cloud.

Fueled by demand from Mode 2 developers, open-source software came to the fore. This seemed to take enterprise IT by surprise, and when developers and data scientists went to IT asking for help creating, for example, a Hadoop data lake, they were met with blank stares.

And finally, it was a case of “give us the money up front, otherwise you can’t have it”. If you wanted a system to try out the concept of predictive maintenance of sewage pumps, say, you would have to pay for the hardware and software up front. Public cloud was different, of course. You paid for what you used - and you stopped paying if you didn’t use anything. For many, this was a game changer. Much of the work that Mode 2 developers do is experimental: is predictive maintenance on sewage pumps actually going to work? How much compute will it need? If you don’t know the answers to these questions, if you are only given a small amount of money to experiment, and if your experiment may end up “failing”, then “pay up front” is simply not an option.



How on premises has continued to innovate
When customers coming back from public cloud said, “on prem hasn’t stood still”, what did they mean? The diagram below is my rough (and possibly a little inaccurate in terms of chronology) take on how on prem has continued to innovate.

[Diagram: rough timeline of how on prem has continued to innovate]

Costs and performance
On-prem vendors have done a lot to reduce the costs of running your data center. As you can see from the graph below, *flash memory costs have fallen dramatically*.

[Graph: falling flash memory costs over time]

And so has the *price per gigaflop*, as the graph below shows.

[Graph: falling price per gigaflop over time]

*Hyperconverged* allows all VMs to be managed in the same place. This results in lower admin costs and higher utilisation rates: rather than each VM silo carrying its own headroom, we only need headroom across all our VMs, which means less headroom overall.

And then SimpliVity added *de-duplication and compression* to their hyperconverged solution, lowering storage requirements significantly (by something like 10 times) and, in doing so, speeding up data access and VM manipulation.
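
To make the idea concrete, here is a minimal sketch of content-addressed block de-duplication plus compression. It illustrates the principle only; it is not SimpliVity's actual implementation, and the block size, hash choice and compression scheme are all assumptions of mine.

```python
import hashlib
import zlib

BLOCK_SIZE = 4096  # illustrative block size; real systems tune this

def store(data: bytes, block_store: dict) -> list:
    """Split data into blocks, compress each, and keep unique blocks by hash.
    Returns the list of block hashes that reconstructs the data."""
    refs = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in block_store:                   # de-duplication: store once
            block_store[digest] = zlib.compress(block)  # compression on top
        refs.append(digest)
    return refs

# Two VM images that share most of their blocks (e.g. a common OS image)
block_store = {}
vm1 = b"common OS blocks " * 1000 + b"app A"
vm2 = b"common OS blocks " * 1000 + b"app B"
refs1 = store(vm1, block_store)
refs2 = store(vm2, block_store)

raw = len(vm1) + len(vm2)
stored = sum(len(b) for b in block_store.values())
print(f"raw: {raw} bytes, stored: {stored} bytes ({raw / stored:.0f}x saving)")
```

Because the two images share almost all of their blocks, only one copy of the common data is kept, which is exactly why heavily duplicated VM estates see such large storage savings.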

*Composable Infrastructure* (HPE’s Synergy) takes what hyperconverged started to another level. Rather than just having all your VMs in one place, you can have physical compute, VMs and containers (containers running on physical hardware, not just in VMs) all using the same “lump” of compute. In other words, you can treat your datacenter as one giant pool of compute that you can use for physical (and let’s face it, a lot of apps still run on physical compute, and will probably do so for a long time to come), VM-based and container-based applications.

So what? Your utilisation can rise even higher. Rather than having headroom for your physical apps, your VM-based apps and your container-based apps, you just have headroom for everything. This means you are looking at something like 80% utilisation rates across your whole datacenter.
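
As a back-of-the-envelope illustration of why pooled headroom helps (the numbers below are mine and purely illustrative, not a benchmark):

```python
# Illustrative numbers only: because peaks in separate silos rarely coincide,
# a shared pool can run at a higher utilisation for the same workload,
# so it needs less total capacity.
workload = 300                    # steady-state demand, arbitrary units
siloed_utilisation = 0.5          # typical of per-application silos sized for their own peaks
pooled_utilisation = 0.8          # the kind of figure quoted for one shared pool

siloed_capacity = workload / siloed_utilisation   # 600 units
pooled_capacity = workload / pooled_utilisation   # 375 units

print(f"capacity needed, siloed: {siloed_capacity:.0f} units")
print(f"capacity needed, pooled: {pooled_capacity:.0f} units "
      f"({1 - pooled_capacity / siloed_capacity:.0%} less)")
```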

This higher utilisation rate is also possible because of the way you can independently flex resource pools (compute, storage and fabric) with composable infrastructure. With hyperconverged, storage “hangs off” CPUs - if you need more storage, you may have to add more CPU too. This isn’t the case with composable infrastructure. If you need more storage, you simply add more storage. The same goes for CPU. So there is less resource wastage: no adding of resources you don’t actually need.

So, on-prem costs have fallen due to a whole series of factors, to the point where, in many instances, they are below those of public cloud.

“Mode 2”
And what about Mode 2 developers with their need for speed, for continuous innovation, for autonomy from IT, for using all the latest open-source software, and for “pay as you go”?

In order to give developers autonomy, both Hyperconverged and composable infrastructure have the concept of WorkSpaces. Developers can be given spaces in which they can “do their Mode 2 thing” as they see fit, without recourse to IT. But IT retains the ability to govern these spaces, looking for wasted resources such as unused VMs.
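
As a purely illustrative sketch of what that governance sweep might look like (the VM inventory and the 14-day idle threshold below are made up; a real sweep would pull the inventory from the management platform's API):

```python
from datetime import datetime, timedelta

# Hypothetical workspace inventory; in practice this would come from the
# management platform rather than a hard-coded list.
vms = [
    {"name": "exp-sewage-pump-ml", "owner": "alice", "last_active": datetime(2017, 5, 1)},
    {"name": "web-prototype-7",    "owner": "bob",   "last_active": datetime(2017, 8, 20)},
]

IDLE_LIMIT = timedelta(days=14)   # assumed policy: flag anything idle for two weeks
now = datetime(2017, 9, 1)

for vm in vms:
    if now - vm["last_active"] > IDLE_LIMIT:
        print(f"Flag for reclaim: {vm['name']} (owner: {vm['owner']})")
```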

Developers love APIs, and public cloud has always offered programmatic specification of the required infrastructure. This is now available to users of composable infrastructure: one API through which the compute infrastructure can be specified.
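
To give a flavour of "one API for composing infrastructure", here is a hypothetical sketch in Python. The appliance address, endpoints and payload fields are illustrative (loosely modelled on a OneView/Synergy-style REST interface), not the documented API; check the real API reference before writing anything like this.

```python
import requests

APPLIANCE = "https://composer.example.com"   # hypothetical composable appliance address

# Authenticate and get a session token (endpoint and field names are illustrative)
session = requests.post(f"{APPLIANCE}/rest/login-sessions",
                        json={"userName": "devops", "password": "secret"},
                        verify=False).json()
headers = {"Auth": session["sessionID"]}

# Describe the compute the application needs; a template carries the BIOS,
# firmware, network and storage settings so developers don't hand-configure them
profile = {
    "name": "mode2-experiment-01",
    "serverProfileTemplateUri": "/rest/server-profile-templates/web-tier",
    "serverHardwareUri": "/rest/server-hardware/bay-3",
}

# One call composes the infrastructure; a matching delete tears it down when the
# experiment is finished ("compose, tear down, re-compose")
resp = requests.post(f"{APPLIANCE}/rest/server-profiles",
                     json=profile, headers=headers, verify=False)
print(resp.status_code)
```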

Private cloud offerings have matured and there are now a number to choose from (VMware, Microsoft, SUSE, Red Hat, HPE Helion, etc.). A recent Forrester report on private cloud (ref) shows that private cloud is very much “alive and kicking”: 61% of enterprises have already built, or prioritise building, a private cloud over the next 12 months. Even among public cloud adopters, interest in private cloud is high: 80% of public cloud adopters say they already use, or will use, internal private cloud by the end of 2017.

Open-source software has moved from something for the brave to technology used by everyone in mainstream applications. Take Docker as an example: it has had more than 5 billion downloads and is used in many production systems. This change in attitude towards, and use of, open-source software is due to two things. Firstly, the open-source software itself is now much more enterprise-ready, with enterprise support contracts available. And secondly, IT now has experience with the more common open-source offerings like Docker and Hadoop. In other words, you no longer need to go to the public cloud to get an open-source-fueled Mode 2 platform.

And last, but certainly not least, you can consume your computing resource on a pay-as-you-use basis. So, as our fearless Mode 2 developers compose, tear down and re-compose, they only pay for what they use.



What’s next for on premises?
As you can probably imagine, I get somewhat frustrated when I see articles that state, “public cloud innovates faster than on premises” (a statement that never carries any proof to back it up). I hope I’ve shown that this hasn’t in fact been the case over the last five years.

And I don’t believe it will be the case in the future either.

500. Or just 1
By way of dramatic illustration, imagine, if you will, the top 500 supercomputers in the world all in one room (a very large room, with a huge 650 megawatt [650 mega-VA, if you live in Europe] power supply). One of HPE’s new memory-based computers, now at the running-prototype stage, has more compute power than all 500 of these supercomputers put together. And it consumes a thirtieth of the power.

Coming slightly more down to earth, let’s look at some of the other innovations we can expect from on premises.

DX-platforms as on-prem services
Public cloud providers do a few things many times. This allows them to get really good at those few things. For example, they might offer a database service, and they will offer that service 200,000 times. An IT department is very different in this regard: it is required to offer a lot of different services, but it never gets close to the scale of the public cloud on any one service.

What if you could have the control of owning your own compute resource on premises, but with someone else managing that resource for you: someone with the scale of experience that public cloud providers have, someone who manages 10,000 Hadoop lakes, or 10,000 Mesosphere-controlled production systems?

These on-prem managed servers will run the platforms that Mode 2 developers and data scientists require (data & analytics, IoT, mesh application development, etc). In other words, they are not just “raw” servers. They are the platforms that business IT and CDO IT need to do their digital transformation work.

I’m really keen on this concept. I think it provides a great compromise between IT having to do all the work of running a platform on prem and the public cloud’s ability to do one thing many times.

Managing and governing the public/private mix
HPE (and I) believe that hybrid IT (a mix of public and private) is here to stay. I’ve already written on this topic here. But this mix isn’t fixed. You might decide to bring an app back from the public cloud, or you might decide that all company web sites should go out to the cloud. What goes where, at this point in time and at this point in the solution’s lifecycle, will depend upon a number of factors. Gary Thome writes about this here, and I’ve given a summary of a couple of reports HPE has commissioned on this subject here.

And as we used to say (a lot) when I worked in HP Software’s management software business, “you can’t manage what you can't see”. We at HPE believe that you’ll need tools to allow you to decide on the “right mix”, and then help you get to that right mix quickly and with minimum risk. You’ve already seen our first moves in this space with our acquisition of Cloud Cruiser.



Mike Shaw
Director Strategic Marketing
Hewlett Packard Enterprise

Twitter: @mike_j_shaw
LinkedIn: Mike Shaw


About the Author

mikeshaw747

Mike has been with HPE for 30 years. Half of that time was in research and development, mainly as an architect. The other 15 years have been spent in product management, product marketing, and now, strategic marketing.
