HPE StoreVirtual Storage / LeftHand

Performance and vMotion problems with VSA 9.5 and ESXi 5.0

lumoder
Occasional Advisor


Hello!

We have some problems with HP VSA 9.5 and ESXi 5.0 U1. There are two ESXi 5.0 U1 clusters in our environment. The first cluster consists of four BL680c G5 hosts and uses two datastores provided by two VSAs located in the second cluster. The VSAs run on two BL460c G6 hosts, each with an SB40c in RAID 10 as direct-attached storage providing 1.7 TB of space. This space is divided into two 750 GB volumes and presented to the hosts in the first cluster over iSCSI. On each host we have created two vmks for iSCSI traffic and bound them to the software iSCSI adapter. Additionally, we have connected an iSCSI volume from an EMC VNX 5300 to all ESXi hosts, but through a single vmk in a different VLAN. The PSP for the VSA datastores is Round Robin.

naa.6000eb323fe02a7e000000000000002e
   Device Display Name: LEFTHAND iSCSI Disk (naa.6000eb323fe02a7e000000000000002e)
   Storage Array Type: VMW_SATP_DEFAULT_AA
   Storage Array Type Device Config: SATP VMW_SATP_DEFAULT_AA does not support device configuration.
   Path Selection Policy: VMW_PSP_RR
   Path Selection Policy Device Config: {policy=rr,iops=1000,bytes=10485760,useANO=0;lastPathIndex=0: NumIOsPending=0,numBytesPending=0}
   Path Selection Policy Device Custom Config:
   Working Paths: vmhba32:C0:T3:L0, vmhba32:C1:T3:L0

naa.6000eb323fe02a7e0000000000000030
   Device Display Name: LEFTHAND iSCSI Disk (naa.6000eb323fe02a7e0000000000000030)
   Storage Array Type: VMW_SATP_DEFAULT_AA
   Storage Array Type Device Config: SATP VMW_SATP_DEFAULT_AA does not support device configuration.
   Path Selection Policy: VMW_PSP_RR
   Path Selection Policy Device Config: {policy=rr,iops=1000,bytes=10485760,useANO=0;lastPathIndex=0: NumIOsPending=0,numBytesPending=0}
   Path Selection Policy Device Custom Config:
   Working Paths: vmhba32:C0:T2:L0, vmhba32:C1:T2:L0

 

When we operate on VMs located on the VSA datastores, we get warning/error messages like these in vmkernel.log:

 

T07:28:43.458Z cpu12:5831)Config: 346: "SIOControlFlag2" = 1, Old Value: 2, (Status: 0x0)
T07:28:43.761Z cpu6:390724)FS3Misc: 1440: Long VMFS3 rsv time on 'ds_iSCSI_2' (held for 262 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors
T07:28:43.869Z cpu0:389850)Net: 1652: connected SDWebSrv eth0 to Net_17, portID 0x100001f
T07:28:43.869Z cpu0:389850)NetPort: 1240: enabled port 0x100001f with mac 00:00:00:00:00:00
T07:28:44.513Z cpu3:223511)Config: 346: "SIOControlFlag2" = 0, Old Value: 1, (Status: 0x0)
T07:28:45.689Z cpu13:4109)ScsiDeviceIO: 2322: Cmd(0x4124037f95c0) 0x2a, CmdSN 0x800000bd from world 292039 to dev "naa.6000eb323fe02a7e0000000000000030" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x9 0x4 0x2.
T07:28:45.689Z cpu13:4109)ScsiDeviceIO: 2322: Cmd(0x4124032f8f80) 0x2a, CmdSN 0x800000e4 from world 292039 to dev "naa.6000eb323fe02a7e0000000000000030" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x9 0x4 0x2.
T07:28:57.726Z cpu17:4113)ScsiDeviceIO: 1198: Device naa.6000eb323fe02a7e0000000000000030 performance has improved. I/O latency reduced from 29760 microseconds to 14102 microseconds.
T07:28:59.329Z cpu14:4110)NMP: nmp_ThrottleLogForDevice:2318: Cmd 0xc1 (0x41240200eb00, 172879) to dev "naa.6000eb323fe02a7e000000000000002e" on path "vmhba32:C0:T3:L0" Failed: H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x20 0x0. Act:NONE
T07:28:59.329Z cpu14:4110)ScsiDeviceIO: 2322: Cmd(0x412402009300) 0xfe, CmdSN 0xdb5a4 from world 172879 to dev "naa.6000eb323fe02a7e000000000000002e" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x20 0x0.
T07:28:59.548Z cpu17:172879)FS3Misc: 1440: Long VMFS3 rsv time on 'ds_iSCSI_1' (held for 218 msecs). # R: 1, # W: 1 bytesXfer: 5 sectors
T08:04:11.046Z cpu22:4118)NMP: nmp_ThrottleLogForDevice:2318: Cmd 0xc1 (0x41240214c100, 5196) to dev "naa.6000eb323fe02a7e0000000000000030" on path "vmhba32:C1:T2:L0" Failed: H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x20 0x0. Act:NONE
T08:04:11.046Z cpu22:4118)ScsiDeviceIO: 2322: Cmd(0x412402d2d980) 0xfe, CmdSN 0x1f4663 from world 5196 to dev "naa.6000eb323fe02a7e0000000000000030" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x20 0x0.
T08:04:11.046Z cpu22:4118)ScsiDeviceIO: 2322: Cmd(0x4124036ada40) 0xfe, CmdSN 0x1f4664 from world 5196 to dev "naa.6000eb323fe02a7e0000000000000030" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x20 0x0.
T08:04:11.046Z cpu22:4118)ScsiDeviceIO: 2322: Cmd(0x4124022213c0) 0xfe, CmdSN 0x1f4665 from world 5196 to dev "naa.6000eb323fe02a7e0000000000000030" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x20 0x0.
T08:04:11.271Z cpu1:5196)FS3Misc: 1440: Long VMFS3 rsv time on 'ds_iSCSI_2' (held for 224 msecs). # R: 1, # W: 1 bytesXfer: 0 sectors

Also, if we relocate our VMs from a VSA datastore to another one (for example, a local disk), we get errors from ESXi. Additionally, when we try to back up VMs on a VSA datastore using Veeam B&R, we can end up with unconsolidated VM files or even a powered-off VM!

We suspect there is a problem between VSA 9.5 and ESXi 5.0. Perhaps it depends on our configuration, but we cannot see any misconfiguration... Can someone help us with this problem?

 

16 REPLIES
ccavanna
Advisor

Re: Performance and vMotion problems with VSA 9.5 and ESXi 5.0

I don't know if this is feasible or not, but I would rebuild the VSAs with the 9.0 OVF file (virtual hardware version 4) and then upgrade them to 9.5 and see if that fixes your problem. I recently had an issue myself with the 9.5 OVF; I rebuilt my VSAs with the older virtual hardware version 4 OVF file and then upgraded, and they work like a top now.

Amar_Joshi
Honored Contributor

Re: Performance and vMotion problems with VSA 9.5 and ESXi 5.0

Hi,

I do not have very deep knowledge of ESX, but it looks like the Round Robin settings are not completing the I/Os and the Storage I/O Control settings are toggling; the cause could be a misconfigured ESX or VSA.

 

From the ESX side, please check that all paths are active for the given LUN. If not, try the VMware MRU path policy and see if that suppresses the errors.
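For reference, the path state and PSP can be inspected and changed from the ESXi 5.0 shell roughly like this (the device ID is taken from the listing earlier in the thread; treat the exact commands as a sketch and verify them against your build):

```shell
# List all paths for one of the VSA devices and check their state (active/dead)
esxcli storage core path list -d naa.6000eb323fe02a7e000000000000002e

# Switch the device from Round Robin to MRU as a test
esxcli storage nmp device set -d naa.6000eb323fe02a7e000000000000002e -P VMW_PSP_MRU

# Confirm the new path selection policy took effect
esxcli storage nmp device list -d naa.6000eb323fe02a7e000000000000002e
```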

 

From the VSA side, are you using both NICs (the default for a v9.5 installation) and ALB for the SAN/iQ NICs? If so, I would suggest deleting the first NIC (vmxnet2), running the VSA with only the second NIC (flexible), and not using any NIC load balancing (which you can't enable anyway with a single NIC). This has been discussed several times in various posts, so I won't describe why it's that way.

 

 

Please let us know your findings.

lumoder
Occasional Advisor

Re: Performance and vMotion problems with VSA 9.5 and ESXi 5.0

We have tried the "VMware Fixed" PSP and it generates errors too. We will try changing our PSP to MRU and check the vmkernel logs for errors.
Our VSAs have only one "flexible" NIC each, because we deleted the secondary NIC when installing the VSA.
Reinstalling the VSA as version 9.0 is not the best option for us, but if we do not find any other solution, we will try it.

5y53ng
Regular Advisor

Re: Performance and vMotion problems with VSA 9.5 and ESXi 5.0

Hi,

 

1. Did you bind your physical NICs to the iSCSI software adapter?

 

      If not, you are only using one path, even if you are using the RR PSP.

 

2. Are you seeing a lot of output drops on your physical switch?

 

    Try adjusting the number of IOPS per path to 10 (the default is 1000). This helped significantly with latency in my environment and also eliminated output drops on the physical switch interfaces.

 

Also, I have to second the suggestion of deploying the old VSA and upgrading to 9.5. You will see a big difference in performance.
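Both points above can be checked from the ESXi 5.0 shell. A sketch, assuming the software iSCSI adapter is vmhba32 as in the device listing earlier, and that the iSCSI VMkernel ports are named vmk1/vmk2 (those names are an assumption, not from the thread):

```shell
# 1. Verify which VMkernel ports are bound to the software iSCSI adapter
esxcli iscsi networkportal list -A vmhba32

# Bind a missing vmk if one of the two iSCSI ports is absent from the list
esxcli iscsi networkportal add -A vmhba32 -n vmk2

# 2. Lower the Round Robin IOPS limit from the default 1000 to 10 for a device
esxcli storage nmp psp roundrobin deviceconfig set \
  -d naa.6000eb323fe02a7e0000000000000030 -t iops -I 10
```

The deviceconfig change takes effect immediately; repeat it for each LeftHand device, and re-check latency and switch drop counters afterwards.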

 

 

lumoder
Occasional Advisor

Re: Performance and vMotion problems with VSA 9.5 and ESXi 5.0

We have tried MRU as the PSP with no success. Yes, both vmks are bound to the iSCSI adapter. We will try installing the old 9.0 version and upgrading it to 9.5. 5y53ng, is 10 IOPS per path your own finding, or is it recommended, for example, by HP? We have seen some people set 1 IOPS per path, or 100.

5y53ng
Regular Advisor

Re: Performance and vMotion problems with VSA 9.5 and ESXi 5.0

I have never seen a recommendation from HP, but anything from 3 to 64 seems to provide an improvement.

cheazell
Advisor

Re: Performance and vMotion problems with VSA 9.5 and ESXi 5.0

I am nervous about the reinstall path. Would someone walk me through that process? I have two VSAs in one cluster, and it is in production. I don't get how you'd reconnect the existing disks to the rebuilt VSA.

ccavanna
Advisor

Re: Performance and vMotion problems with VSA 9.5 and ESXi 5.0

If you use the IOPS-per-path command, I think you have to reset it after a reboot with ESXi 5.0. I don't recall where I saw that, but it may be something to look into.

Tedh256
Frequent Advisor

Re: Performance and vMotion problems with VSA 9.5 and ESXi 5.0

ESX 4.x did not preserve the setting after a reboot; ESXi 5 does.

 

And I have a LeftHand/P4000 VMware best practices white paper that instructs to set the value to "1".
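On ESXi 5.x, a per-device setting made with esxcli survives reboots; to apply a value automatically to all LeftHand volumes (including ones claimed later), a SATP claim rule can be added instead. A sketch, assuming the vendor string LEFTHAND shown in the device display names above:

```shell
# Claim all LEFTHAND devices with Round Robin and switch paths after every I/O
esxcli storage nmp satp rule add -s VMW_SATP_DEFAULT_AA -V LEFTHAND \
  -P VMW_PSP_RR -O "iops=1"

# The rule applies when devices are claimed; reboot (or unclaim/reclaim)
# for it to take effect on existing devices
esxcli storage nmp satp rule list | grep LEFTHAND
```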

 

 

lumoder
Occasional Advisor

Re: Performance and vMotion problems with VSA 9.5 and ESXi 5.0

cheazell, I don't clearly understand what the "reinstall path" is. Are you planning to reinstall your VSAs?
ccavanna
Advisor

Re: Performance and vMotion problems with VSA 9.5 and ESXi 5.0

If you have a third box and storage, you can set up a new VSA on there. Otherwise you will have to rip out the old VSA, rebuild it, and bring it back into the management group. Make sure the FOM is up and running if you are going to do this; otherwise you will lose quorum and everything will go down. It's not very pretty, but it works; I just did one two weeks ago. If you don't have good knowledge of it, maybe contact support and see what their thoughts are.

5y53ng
Regular Advisor

Re: Performance and vMotion problems with VSA 9.5 and ESXi 5.0

Ted,

 

Do you have a link to that document? Thanks.

ccavanna
Advisor

Re: Performance and vMotion problems with VSA 9.5 and ESXi 5.0

5y53ng
Regular Advisor

Re: Performance and vMotion problems with VSA 9.5 and ESXi 5.0

I don't see any recommendations regarding the number of IOPS per path in that document.

lumoder
Occasional Advisor

Re: Performance and vMotion problems with VSA 9.5 and ESXi 5.0

Well... I have done a lot of tests and I have found my problem. In my case, the errors described above were produced by reservation conflicts between the VMware and Windows multipathing plugins. I have a Windows host with Veeam Backup and Replication installed that has iSCSI connections to the same volumes the VMware hosts use as datastores. When I uninstalled HP DSM for MPIO from the Windows host, my VSAs worked well! There are no errors in the logs! I suppose the VMware and HP multipathing plugins cannot share one volume correctly. I have opened a case with VMware support for this problem and will report back with any solution from VMware. Additionally, I will try a clean 9.5 install (not a 9.0 upgrade to 9.5) and compare performance.

Does anyone have recommendations or a guide for choosing the value of the IOPS parameter? My environment consists of four BL680c ESXi hosts with two VSA volumes connected and about 40 "standard" VMs (two vCPUs, 2 GB RAM, 30 GB HDD). What IOPS value would be the best choice for my environment?

Additionally, in the HP CMC I have noticed some disk queuing on the VSAs. Maybe it is caused by an improper IOPS parameter?

cheazell
Advisor

Re: Performance and vMotion problems with VSA 9.5 and ESXi 5.0

OK. So in this instance you remove the VSA from the management group, which then breaks the mirror (Network RAID 10). Either build a new v9.0 VSA or go straight to the 9.5 VSA and attach the existing disks in the Edit Settings of the VM. Rejoin it to the management group and allow the mirror to rebuild. Once that is done, repeat on the other unit. Does that sound about right?