BladeSystem Virtualization

HP Point of View (POV) on a Grid with Virtualization



John was looking for some info:




I have a bank client who is currently running a large grid of 400+ cores on physical BL460 blades within our BladeSystem chassis. They reached out the other day to ask our Point of View on running the grid within a virtualized environment. They currently have a mandate to virtualize as many application candidates as possible on VMware to drive out costs and increase consolidation ratios.


I am curious whether you have come across any HP documentation or whitepapers, or have a personal perspective on the pros and cons of vGrid or vHPC environments. Perhaps you have seen a reference architecture, or a client who has already done this.


Appreciate your insight. On a side note, can you recommend an HP HPC distribution list where I can also pose this query for additional insight?




Opinion by Dave:




I’m not an expert in the field, but I’ll share some of my observations. You might also want to tickle a Linux mailing list or two, since that’s typically the domain of HPC. My initial reaction is that this is an inappropriate extension of a corporate IT policy, pushed by people who don’t understand the mechanics of HPC and grid computing.


You can do your own Google searches to read up on various opinions, but in general my take is that virtualization doesn’t make much sense for these kinds of workloads. With the quantity of systems involved, if one were to employ virtualization it would economically have to be a low/no-cost option like KVM, because the VMware tax could quickly make the project unachievable.


Since virtualization is based on the principle of shared resources being allocated across varying workloads that aren’t all expected to peak at the same time, one of the big benefits is the 1+1=3 effect: I can size a system for a worst-case scenario of one or a couple of workloads at peak amongst, say, a dozen, and have that system be significantly smaller than if I needed to size for all 12 at peak. But HPC workloads work in concert, so they all peak together and there’s no net gain from sharing resources.
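To make that sizing argument concrete, here’s a back-of-the-envelope sketch. The workload count, per-workload peak, and the assumption that at most three uncorrelated workloads peak at once are all made-up illustrative numbers, not figures from the client’s grid:

```python
# Twelve workloads, each needing 16 cores at its individual peak
# (hypothetical numbers for illustration only).
workload_peaks = [16] * 12

# Typical enterprise consolidation: peaks are uncorrelated, so size the
# shared host for, say, the three busiest workloads hitting peak at once.
shared_sizing = sum(sorted(workload_peaks, reverse=True)[:3])

# HPC grid: the workloads run in concert, so every one peaks together
# and the host must be sized for the full sum.
hpc_sizing = sum(workload_peaks)

print(shared_sizing, hpc_sizing)  # the gap is the 1+1=3 effect that HPC loses
```

With these assumed numbers, consolidation buys a 48-core host instead of a 192-core one for mixed workloads, while the in-concert grid gets no reduction at all.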


Another purpose of virtualization is to take a modern dual-socket system with 8/12/16/20 cores and utilize it with multiple apps that don’t thread well. With Linux, the typical HPC operating system, you can easily host a well-threaded application or launch multiple instances and have them scheduled properly. Perhaps more importantly, HPC geeks are trying to get every last clock cycle doing work for them, so the overhead of virtualization, however small, is generally inconsistent with those goals.


From a cost perspective, virtualization really helps by maximizing utilization (not a problem with HPC workloads), hosting disparate workloads with different software loads/configurations (HPC is uniform), providing administrative flexibility through live moves to keep a given workload available (in HPC, individual systems aren’t that important, it’s the aggregate), and reducing administration time for TCO (HPC operations are highly automated through for-sale tools like Insight CMU or many open-source utilities). For a grid like this, it just doesn’t make sense to me.


That being said, Data Synapse has partnered with VMware, so there’s at least a bit of a perceived need. There are at least a couple of scenarios that could make sense: easily instantiating an HPC grid to utilize spare resources within existing VMware clusters, or running an on-demand grid during off-peak hours. It all depends.


Bottom line: it needs to be evaluated economically as well as technologically.


My $0.02.




Other comments?