StoreVirtual Storage

Windows and VMware volume in the same Management group

Occasional Visitor

Windows and VMware volume in the same Management group

We currently have a single management group on our Lefthand Cluster which contains both VMware Volumes and Windows Volumes. The Windows servers DO NOT have access to the VMware Volumes but do run MPIO.


We have experienced the following issue:


To resolve this issue, I have been recommended to:


1. Uninstall the HP LeftHand DSM for MPIO from the Windows hosts

2. Shut down all VMs

3. Shut down all the ESX hosts

4. Shut down the LeftHand cluster (shut down the management group, not the nodes individually)

5. Power up the LeftHand cluster and make sure all the nodes are up and all volumes are online

6. Power up the ESX hosts and VMs


The question I have is: is it actually best practice to run mixed volumes in the same management group, or do I need to separate the Windows volumes into a different management group?



Thanks in advance

Honored Contributor

Re: Windows and VMware volume in the same Management group



I don't have a mixed LUN setup since I'm running Hyper-V, but the posts you linked only really seem to talk about Microsoft and VMware trying to have access to the same LUNs. Logically, I would think it shouldn't matter if you have some LUNs mapped to Windows and others mapped to VMware; as long as the two aren't mapped to the same LUNs, there should be no problem.


Are you overloading the I/O of the SAN? What does your CMC performance monitor show?


Every link you showed talks about issues where Windows AND VMware are trying to connect to the same LUN. Since you say that isn't the case in your situation, I would look more closely at the performance monitoring stats in your CMC to make sure the SAN isn't simply overloaded.
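If it helps, here's a minimal sketch of the kind of check I'd run against a CSV export of the CMC latency counters. The column names (`time`, `latency_ms`) and the CSV layout are assumptions for illustration, not the exact CMC export format:

```python
import csv
import io

def sustained_latency(csv_text, threshold_ms=20.0, min_run=3):
    """Find runs of consecutive samples where latency stays above
    threshold_ms for at least min_run rows in a row.

    Column names 'time' and 'latency_ms' are assumed, not the exact
    CMC export headers -- adjust to match your export.
    """
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    runs, start = [], None
    for i, row in enumerate(rows):
        hot = float(row["latency_ms"]) > threshold_ms
        if hot and start is None:
            start = i                      # run of high-latency samples begins
        elif not hot and start is not None:
            if i - start >= min_run:       # only report sustained runs
                runs.append((rows[start]["time"], rows[i - 1]["time"]))
            start = None
    # handle a run that extends to the end of the data
    if start is not None and len(rows) - start >= min_run:
        runs.append((rows[start]["time"], rows[-1]["time"]))
    return runs
```

A brief isolated spike is ignored by `min_run`; only sustained overload windows are reported, which is what would point at the SAN itself rather than a transient blip.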

Occasional Visitor

Re: Windows and VMware volume in the same Management group

The disk latency on the hosts is not exceptionally high; it stays consistently below 20 ms.


However, in the host logs I can see that the storage array is reporting a check condition for several LUNs. For example:



2012-01-25T08:52:52.626Z cpu22:8214)NMP: nmp_ThrottleLogForDevice:2318: Cmd 0xc1 (0x412401204740) to dev "naa.6000eb336e48a8240000000000001977" on path "vmhba34:C0:T17:L0" Failed: H:0x0 D:0x2 P:0x0 Valid sense data: 0x9 0x0 0x0.Act:NONE


The D:0x2 part is the SCSI device status, which means the code is coming from the storage array. Device status 0x2 is CHECK CONDITION, which is why there is additional SCSI sense data attached: 0x9 0x0 0x0 (sense key 0x9 is vendor specific).
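The decoding above can be sketched as a small parser for that VMkernel NMP log line. The status and sense-key names come from the SCSI spec; the regex is written against the log format quoted above:

```python
import re

# SCSI device status codes (per the SCSI spec)
DEVICE_STATUS = {0x0: "GOOD", 0x2: "CHECK CONDITION", 0x8: "BUSY",
                 0x18: "RESERVATION CONFLICT", 0x28: "TASK SET FULL"}

# SCSI sense keys (first byte of the "Valid sense data" triple)
SENSE_KEYS = {0x0: "NO SENSE", 0x1: "RECOVERED ERROR", 0x2: "NOT READY",
              0x3: "MEDIUM ERROR", 0x4: "HARDWARE ERROR",
              0x5: "ILLEGAL REQUEST", 0x6: "UNIT ATTENTION",
              0x9: "VENDOR SPECIFIC", 0xb: "ABORTED COMMAND"}

LOG_RE = re.compile(
    r'to dev "(?P<dev>[^"]+)" on path "(?P<path>[^"]+)" Failed: '
    r"H:0x(?P<host>[0-9a-f]+) D:0x(?P<dev_st>[0-9a-f]+) P:0x(?P<plugin>[0-9a-f]+)"
    r"(?: Valid sense data: 0x(?P<key>[0-9a-f]+) 0x(?P<asc>[0-9a-f]+) 0x(?P<ascq>[0-9a-f]+))?")

def decode(line):
    """Extract device, path, device status, and sense data from an
    NMP throttle-log line. Returns None if the line doesn't match."""
    m = LOG_RE.search(line)
    if not m:
        return None
    dev_st = int(m["dev_st"], 16)
    info = {"device": m["dev"], "path": m["path"],
            "device_status": DEVICE_STATUS.get(dev_st, hex(dev_st))}
    if m["key"] is not None:
        key = int(m["key"], 16)
        info["sense"] = (SENSE_KEYS.get(key, hex(key)),
                         int(m["asc"], 16), int(m["ascq"], 16))
    return info
```

Run against the log line above, it reports device status CHECK CONDITION with sense key VENDOR SPECIFIC, i.e. the array is attaching a vendor-defined reason to the rejection.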


Basically, the storage array is denying access to the specific LUN, reporting sense data 0x9 0x0 0x0 as the reason.


What HP is saying is that the DSM for MPIO installed on the Windows server creates a connection to every LeftHand node. When the VMware host tries to access the same LeftHand node the Windows server is currently accessing, it is blocked by MPIO with BUS BUSY.


The I/O of the LeftHand cluster looks fine, as it's not really being pushed at the minute. Disk latency on the VMware volumes stays under 20 ms until it's blocked by MPIO (which is when it spikes).