Grounded in the Cloud

Can I put my Datacenter on Steroids?

CV on ‎02-08-2011 04:52 AM

The private cloud, people often claim, is an evolution of the data centre. They are probably right, but let's look at what we can achieve through virtualization, automation and standardization. How far can we get?

First, where are we coming from? In traditional data centres, each application runs on its own server or servers, which are only used when the application is needed. The reason is to prevent applications from interfering with each other and disturbing their operations.


Equipment has a lifecycle of 3 to 5 years, so future demand needs to be taken into account when ordering. As that demand is very difficult to estimate, hardware is often over-dimensioned. The result is that typical data centres only achieve an average utilization of 10 to 15%. In other words, servers sit idle 85 to 90% of the time, consuming energy and occupying space for nothing.
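As a quick sanity check on those figures, the idle fraction is simply the complement of average utilization (a minimal sketch; the function name is mine, not from the post):

```python
def idle_fraction(avg_utilization: float) -> float:
    """Fraction of time capacity sits unused, given average utilization (0..1)."""
    return 1.0 - avg_utilization

# 10-15% average utilization means 85-90% of capacity is idle.
for u in (0.10, 0.15):
    print(f"utilization {u:.0%} -> idle {idle_fraction(u):.0%}")
```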


Virtualization obviously helps, as it allows multiple applications to run on the same server, either concurrently or one after the other. Unfortunately, not all applications are fit for virtualization, and not all software companies support virtualized versions of their packages. Through aggressive virtualization, according to McKinsey in their report titled “Clearing the Clouds”, 25 to 35% utilization can be achieved. That is obviously far better than 10 to 15%, and actually gets close to the levels achieved by Google and its peers.


But 35% still leaves servers idle 65% of the time. Can we not do better? Well, the reason we only achieve 35% is that we have to provision for peak times, for maximum capacity. But do we really have to?


Here is where cloud actually comes in. There is a concept called cloud bursting, where workloads are provisioned in external clouds when we run out of capacity in our private cloud. So why not use that concept to increase the utilization of our private cloud, by combining it with the provisioning of additional capacity in a remote service such as HP’s Enterprise Cloud Services – Compute? If appropriate security and compliance policies and true service level agreements are in place, one could provision for, say, 70% of the maximum workload and source the remaining 30% externally. That would increase the average utilization to 50%, drastically reducing capital expenditure. One would obviously have extra costs during peak-time usage.
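The arithmetic behind that 50% figure can be sketched as follows. It assumes the average demand (35% of peak, the upper end of the virtualization figure above) stays below the private provisioning threshold, so the private side carries the full average load; in reality, some of the average demand also bursts out during peaks, so the private utilization would land slightly below 50%. The function name and this simplification are mine:

```python
def private_utilization(avg_demand: float, provisioned: float) -> float:
    """Average utilization of private capacity when everything above
    `provisioned` (both expressed as fractions of peak demand) bursts
    to an external cloud. Simplification: assumes the private side
    still carries the full average load."""
    return avg_demand / provisioned

# Provision for the full peak: utilization stays at 35%.
print(f"{private_utilization(0.35, 1.00):.0%}")
# Provision for 70% of peak, burst the rest: utilization rises to 50%.
print(f"{private_utilization(0.35, 0.70):.0%}")
```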


In practice, is this feasible? The answer is: it depends. There are two important questions to address. First, during peak times, are there applications (workloads) the enterprise is willing to run outside its own data centres, and do they represent enough workload to cover the 30% we talked about?

The second question concerns the data. Obviously, applications cannot run without data. So, what is the sensitivity of the data? Can it be stored outside the enterprise? From a compliance point of view, is it acceptable to store the data with the service provider, and do we know where the provider will store it? That is the first set of questions to address. If the answers are positive, we can look at how the link between application and data will be made. There are four scenarios. Let me describe each briefly:

  • If the application only requires ephemeral data, there are no issues. The data is created while the workload runs and stored where the workload executes. At the end of the session, the data is discarded. However, it is important to check with the service provider what clean-up actions are performed when storage is released. It is not enough to delete the data the standard way it is done with Windows or Linux; hackers can still recover it. The disk partition needs to be properly wiped. This is one of the elements to check when auditing the security processes and procedures of the service provider.
  • If persistent data needs to be accessed by the workload, one obvious approach is to have the workload access the data at its original location, that is, behind the enterprise firewall. The advantage is that the data is only transiently outside the enterprise, but latency due to the remote access may slow the workload down drastically. Thorough testing should take place before deciding on such an approach.
  • The alternative is to have the data reside in the same place as where the workload is performed. One way of doing this is to ship the data over at the same time as the workload. Obviously, precautions similar to those discussed for ephemeral data are needed here. This approach may be practical if the amount of data required is small; otherwise bandwidth and networking costs may become a barrier.
  • The last option is to duplicate the data with the service provider. This gives the workload local data wherever it runs, and provides a de facto high-availability solution in the sense that the data is stored in two locations. It does, however, imply additional costs, as we now pay not only for running the workload but also for the persistent storage in which the copy of the data is maintained. Depending on how often peaks occur, this may or may not be a viable scenario.
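The four scenarios above can be summarized as a simple decision helper. This is purely illustrative: the workload attributes, the 50 GB shipping threshold, and all names are assumptions of mine, not anything the service provider exposes.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    data_is_ephemeral: bool   # data created and discarded within the session
    latency_tolerant: bool    # can the app stand WAN round-trips to on-prem data?
    dataset_gb: float         # size of the data set to ship, if copied per run
    peaks_are_frequent: bool  # frequent peaks can justify a standing replica

def data_placement(w: Workload, ship_limit_gb: float = 50.0) -> str:
    """Map a bursting workload to one of the four data scenarios."""
    if w.data_is_ephemeral:
        return "ephemeral: create data at the provider, wipe storage on release"
    if w.latency_tolerant:
        return "remote access: keep data behind the enterprise firewall"
    if w.dataset_gb <= ship_limit_gb:
        return "ship: copy data alongside the workload, wipe afterwards"
    if w.peaks_are_frequent:
        return "replicate: maintain a persistent copy with the provider"
    return "no clear fit: reconsider bursting this workload"

print(data_placement(Workload(False, False, 10.0, False)))
```

Note that the order of the checks encodes the trade-offs discussed above: remote access avoids copying data out at all, shipping avoids standing storage costs, and replication is the fallback when peaks are frequent enough to amortize it.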

So, yes, there are ways of putting a data centre on steroids, but they require careful planning, and a number of aspects need to be examined. By using enterprise-class cloud service providers that take care of the security, compliance and SLA aspects, one can build an ecosystem that optimizes the combined use of the data centre and the public cloud. Unfortunately, there is no single way of doing this. I hope I have given you a list of elements to look at when deciding how to engineer such an approach.


Related Links:

HP’s overall Cloud Approach

Hybrid Delivery

How do we define a private cloud

Enterprise Cloud Services Video

ECS at a glance
