MSA 1040/2040 iSCSI Direct Connect with VMware vSphere

 
Dennis Handly
Acclaimed Contributor

Re: MSA 1040/2040 iSCSI Direct Connect with VMware vSphere

> Have edited link in earlier post as suggested

 

Looks like you need to use the hyperlink menu and make sure the URL field is correct.

SprinkleJames
Frequent Advisor

Re: MSA 1040/2040 iSCSI Direct Connect with VMware vSphere

I believe you're right, in that controller ownership is per vDisk. So if ownership changes, it would be for all LUNs that are part of that vDisk.

Last week, just to verify, I successfully tested path failover on a SAS-attached MSA 1040.

I have two ESXi hosts, each with two SAS ports, and one LUN configured. Each SAS port is connected to a different MSA controller, so the LUN has two paths from each host: one Active, and one Active (I/O). I pulled one SAS cable out of host 1, and storage connectivity stayed available with VMs still running on both hosts. I then plugged that SAS cable back in, made sure both paths showed active again, and unplugged the other SAS cable on that same host. Again, storage connectivity stayed available.

If it works with SAS, I'm not sure why it wouldn't work the same with iSCSI. Unfortunately I don't have access to an iSCSI MSA to test with.

Kerry3
Occasional Advisor

Re: MSA 1040/2040 iSCSI Direct Connect with VMware vSphere

Hi James

Thank you for doing this. That sounds similar to what I have, though I have 3 servers and, of course, several volumes on my vDisk. When I asked HP, their response was:

Pulling a network cable out would not simulate a failure: the MSA would not register it as a failure quickly enough for the path to change. What I have to do is change the controller (from A to B) and see what happens. This is, of course, not the failover I want; I wanted to see whether failover would work if a server or SAN network port failed.

I guess that unplugging a SAS cable is similar but more 'immediate'. Did you see the disk ownership change when you pulled out the SAS cable (the one that was not connected to the original owner)? As I have (like you) one Active and one Active (I/O), and not an active-active setup, it looks like the ownership has to change for the failover to work, and that is not going to happen if one network card fails in my setup; it's really only going to work if a controller fails. I guess a switch does give me server card failover protection (though not controller network port failure protection), and I can only get that with 2 connections to the same controller from each server, as per Benjamin Smith's note in the other post https://communities.vmware.com/thread/536168

There's an interesting (if old) thread here: https://communities.vmware.com/thread/238726 which talks about ULP, as does this: https://h20195.www2.hpe.com/v2/getpdf.aspx/4aa4-7060enw.pdf. But I don't get any active/active connections, and it seems that the owner has to change for the other path to be used. C'est la vie; we had been lucky up to now!

PS - I have now updated the link in my original post - thanks to Dennis

jason2713
Occasional Advisor

Re: MSA 1040/2040 iSCSI Direct Connect with VMware vSphere

Hi - 

I have the iSCSI HP MSA1040 attached via dual 10GB links to 2 ESXi 6.5 hosts (Dell R820s), giving me 2 paths per server to the two controllers on the MSA 1040. The MSA1040 is half populated with 12x 900GB 10K disks in a RAID5 array. I carved out one large LUN and presented the vDisk to my hosts, which see the ~7TB datastore for VMs.

I configured VMware in an active/passive configuration, and when I pull one of the 10GB links I do not lose connection to the back-end vDisk holding my VMs. However, it came to my attention that the servers were slow when users connecting to my terminal server kept getting disconnected periodically. In Resource Monitor within Windows Server 2016, I could see the C:\ drive at 100% active time nearly the entire time, which would cause a lot of slowness and then disconnects when the OS became unresponsive. I checked my other 5 servers and they too had all their drives running at 100% active time.

I then installed 600GB 10K disks in the hosts locally in a RAID10 configuration and copied each of my VMs to the local datastores. I booted up the VMs and it was night and day; the performance was great. Resource Monitor showed disk active time at fractions of a percent, no longer 100%. So I figured my RAID5 array did not have the performance necessary to run the VMs. Since my MSA 1040 no longer had any VMs stored on it, I blew away the RAID5 array and made a 10-disk RAID10 with 2 global spares, thinking this would solve the problem.

I copied the servers back to the MSA1040, now in a RAID10 format, and the active time is still slower than when it was on the local RAID10 array. It's very noticeable: rebooting a server takes more than 2x as long compared to local storage, and the drive active time on my VMs runs at 100% for probably the first 4-5 minutes of each reboot. Server Manager does not pop up nearly as quickly as on the local datastore, so I'm totally at a loss.

I've enabled jumbo frames on the MSA 1040, but I really need to get my VMs off the local datastores and back onto my MSA1040 iSCSI SAN. All that said, it appears that I either have a VMware configuration problem, or there is a setting within the 10GB HBA that I didn't set correctly (I left everything at default so far). I would think 10GB pipes to a 10-disk RAID10 would perform as well as, if not better than, local storage with a 6-disk RAID10. Reboots on local storage take maybe 20-30 seconds.

Hopefully someone can offer some advice.

SprinkleJames
Frequent Advisor

Re: MSA 1040/2040 iSCSI Direct Connect with VMware vSphere

Hi Jason,

How do you have your iSCSI networks configured? I'd suggest putting each pair of directly connected ports into its own IP subnet, and do not enable iSCSI vmkernel port binding. Example:

MSA-A port 1	10.1.10.10 /24
ESXi01 port 1	10.1.10.11 /24

MSA-B port 1	10.1.11.10 /24
ESXi01 port 2	10.1.11.11 /24

MSA-A port 2	10.1.12.10 /24
ESXi02 port 1	10.1.12.11 /24

MSA-B port 2	10.1.13.10 /24
ESXi02 port 2	10.1.13.11 /24
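From the ESXi shell, the setup for the first host would look roughly like this. Treat it as a sketch: the vSwitch, port group, vmk, and vmhba names are just examples and will differ in your environment.

```shell
# Standard vSwitch with jumbo frames and one uplink per iSCSI port
esxcli network vswitch standard add --vswitch-name=vSwitch_iSCSI
esxcli network vswitch standard set --vswitch-name=vSwitch_iSCSI --mtu=9000
esxcli network vswitch standard uplink add --vswitch-name=vSwitch_iSCSI --uplink-name=vmnic4
esxcli network vswitch standard uplink add --vswitch-name=vSwitch_iSCSI --uplink-name=vmnic5

# One port group and one vmkernel port per iSCSI subnet (no port binding)
esxcli network vswitch standard portgroup add --portgroup-name=iSCSI_A --vswitch-name=vSwitch_iSCSI
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI_A --mtu=9000
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=10.1.10.11 --netmask=255.255.255.0 --type=static

esxcli network vswitch standard portgroup add --portgroup-name=iSCSI_B --vswitch-name=vSwitch_iSCSI
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=iSCSI_B --mtu=9000
esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=10.1.11.11 --netmask=255.255.255.0 --type=static

# Point the software iSCSI adapter at one target portal per subnet, then rescan
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=10.1.10.10:3260
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=10.1.11.10:3260
esxcli storage core adapter rescan --adapter=vmhba64
```

Each port group would still need its NIC teaming failover order set so that only the vmnic on its subnet is active, which is easiest to do in the UI.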

 

Also, have you tested your jumbo frames connectivity from the ESXi host to the MSA? Use the following command, replacing the X's as appropriate:

vmkping -I vmkX -d -s 8972 x.x.x.x
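As an aside, the 8972 payload isn't arbitrary: it's the 9000-byte jumbo MTU minus the 20-byte IPv4 header and the 8-byte ICMP header. A quick check:

```shell
# 9000-byte MTU - 20-byte IPv4 header - 8-byte ICMP header
echo $((9000 - 20 - 8))
# prints 8972
```

The -d flag sets the don't-fragment bit, so the ping only succeeds if a full jumbo frame actually makes it through end to end.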

 

jason2713
Occasional Advisor

Re: MSA 1040/2040 iSCSI Direct Connect with VMware vSphere

@SprinkleJames

MSA1040 controller A:
10GB Port 1 - 192.168.100.10/24
10GB Port 2 - 192.168.100.11/24

MSA1040 controller B:
10GB Port 1 - 192.168.100.20/24
10GB Port 2 - 192.168.100.21/24

ESX1 Physical NICs
VMNIC4 - 10GB HBA Port 1 - 192.168.100.12/24
VMNIC5 - 10GB HBA Port 2 - 192.168.100.22/24

ESX2 Physical NICs
VMNIC4 - 10GB HBA Port 1 - 192.168.100.13/24
VMNIC5 - 10GB HBA Port 2 - 192.168.100.23/24

I have each ESXi Host cross connected to each controller providing 2 paths, 1 to each controller.

Here is how I configured ESX1, ESX2 is done the same way but with the different IPs - 
I created 2 VMkernel NICs: vmk1 - 192.168.100.12 (added to the iSCSI_HBA1 port group, MTU 9000) and vmk2 - 192.168.100.22 (added to the iSCSI_HBA2 port group, MTU 9000)

I created a vSwitch for iSCSI and added vmnic4 and vmnic5 as uplinks, MTU 9000. Under Failover I added vmnic4 and vmnic5, both active.

I created 2 Port Groups called iSCSI_HBA1 and iSCSI_HBA2.  The port groups are assigned to the vswitch for iSCSI traffic.
iSCSI_HBA1 - under NIC teaming --> Failover Order I have vmnic4 and vmnic5, vmnic5 is marked for standby (unused)
iSCSI_HBA2 - under NIC teaming --> Failover Order I have vmnic5 and vmnic4, vmnic4 is marked for standby (unused)

Under Storage --> Adapters --> clicked Configure iSCSI -->
     Network Port Bindings --> added vmk1 and vmk2
     Dynamic Targets --> added an IP of the MSA1040 (192.168.100.11)

It then populated the 4 static targets of the MSA1040 --> 192.168.100.10, 192.168.100.11, 192.168.100.20, 192.168.100.21, port 3260

When I unplug any of my 10GB lines, the other line becomes active and does not disconnect the host from the storage.

The failover and redundancy works, each host sees the storage, but the performance is not on par with the local storage.  
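One thing I still need to verify is the path selection policy on the MSA device, since Round Robin is commonly recommended for these arrays. If it helps anyone, this is roughly how I plan to check and set it from the ESXi shell (the naa device ID below is just a placeholder for my LUN):

```shell
# Show each device and its current path selection policy (PSP)
esxcli storage nmp device list

# Switch the MSA LUN to Round Robin (device ID is a placeholder)
esxcli storage nmp device set --device naa.600c0ff000000000000000000000 --psp VMW_PSP_RR

# Optionally lower the IOPS-per-path switch threshold from the default of 1000
esxcli storage nmp psp roundrobin deviceconfig set --device naa.600c0ff000000000000000000000 --type iops --iops 1
```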


 
 

jason2713
Occasional Advisor

Re: MSA 1040/2040 iSCSI Direct Connect with VMware vSphere

OK, so I see what you're doing: putting each connected pair of ports into its own subnet, one MSA port and one host port per subnet.

I see on the MSA1040 side where to do the IP assignments, that's pretty easy.

On the VMware side, do I still bind each HBA port to a vmnic like I've done, and create a port group and vSwitch with MTU 9000 for jumbo frames?

jason2713
Occasional Advisor

Re: MSA 1040/2040 iSCSI Direct Connect with VMware vSphere

How do you connect the 10GB iSCSI controller ports on the MSA1040 to the 10GB HBA ports on the ESXi hosts if you are not binding them to vmkernel ports within VMware?