StoreVirtual Storage

SLAING
Occasional Advisor

Multipathing with VSAN/VMware

Hi all,

I have two DL180 servers set up with VMware and the HP LeftHand P4000 VSAN. I have everything working but could use some help with multipathing. I created two iSCSI vmkernel ports with a different NIC assigned to each. The iSCSI adapter shows both paths to the storage: one path says Active (I/O) and the other just says Active. I read that to push data down both paths you have to set the path selection policy on the iSCSI LUN to Round Robin (currently it is set to Fixed (VMware)). If I set it to Round Robin on both ESX hosts and start any data transfer to the VSAN iSCSI LUN, I get I/O errors because the LUN goes offline. If I leave it on Fixed, it works fine but doesn't seem to use both NICs simultaneously.
Any idea why? I must have missed something, but I'm not sure what.

thanks,
8 REPLIES
Alexander A
Valued Contributor

Re: Multipathing with VSAN/VMware

Hi! I just read the HP doc "Running VMware vSphere 4 on HP LeftHand P4000 SAN Solutions". The setup it suggests is to bind both adapters (in ESX) to both vmkernel ports first. For this to work correctly, both vmkernel ports must be able to connect to all ports on the storage device.
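
For reference, the vSphere 4 port binding is done from the CLI with something like the following. This is only a sketch - the adapter name (vmhba33) and vmkernel port names (vmk1, vmk2) are examples, not taken from your setup, so check yours with the list command first:

  # list the vmkernel NICs currently bound to the software iSCSI adapter
  esxcli swiscsi nic list -d vmhba33
  # bind both iSCSI vmkernel ports to the adapter
  esxcli swiscsi nic add -n vmk1 -d vmhba33
  esxcli swiscsi nic add -n vmk2 -d vmhba33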

This differs from the regular SAN design concept of "dual fabric" where you have two (four, eight) ports on either side and half the ports on the host may only see half the ports on the storage device.

I don't know if "round robin" will work if you bonded each vmkernel port with just one NIC in ESX - there is little to no information about this in the VMware docs. It "should" leave you with two paths to the storage device, and if both of them work, round robin "should" be able to use them.

As far as I know, using the setup suggested in the HP doc would require that only one LAN is used for the SAN, while the dual-fabric design works with two isolated LANs. If you use two switches to connect the storage device, they would need to be stacked: half the traffic would flow through both switches, so it is not a solution I would choose at the moment. But my experience so far is limited to an environment using two isolated switches - maybe it is possible to get a much faster SAN using the round robin configuration. If you have the time and equipment, please try it and post about it :)

Best regards,
Alex
larry-cgb
Advisor

Re: Multipathing with VSAN/VMware

I found parts all around the web and put together a Word doc; hope this helps.

Every site was missing some step.
teledata
Respected Contributor

Re: Multipathing with VSAN/VMware

Larry,

Nice job on the doc. That's a very good guide to multipathing for vSphere 4 with SAN/iQ.

Wish I had it last summer, as I had to figure out those steps for myself... heh

I've posted this before, but it is a MUST read for anyone doing vSphere/storage work in general. It's a great multi-vendor primer and helps you understand how iSCSI pathing works with VMware vSphere 4.0:

http://virtualgeek.typepad.com/virtual_geek/2009/09/a-multivendor-post-on-using-iscsi-with-vmware-vsphere.html

http://www.tdonline.com

Re: Multipathing with VSAN/VMware

Hi guys,
I'm curious, is anyone setting the IOOperationLimit to 1 with the P4000 kit?
http://virtualgeek.typepad.com/virtual_geek/2010/03/understanding-more-about-nmp-rr-and-iooperationslimit1.html
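
For anyone who wants to try it, on a vSphere 4 host the commands should look roughly like this - only a sketch, and the naa identifier below is just a placeholder for your LUN, so look it up with esxcli nmp device list first:

  # switch the LUN to Round Robin (naa.xxx is a placeholder device ID)
  esxcli nmp device setpolicy --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR
  # then set the IO operation limit to 1 so the path switches on every I/O
  esxcli nmp roundrobin setconfig --device naa.xxxxxxxxxxxxxxxx --iops 1 --type iops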
cheers
DC
Alexander A
Valued Contributor

Re: Multipathing with VSAN/VMware

Excellent post teledata, too bad I didn't find it when I researched this myself.

I do wonder, though, why you aren't using different subnets for the vmkernel ports. According to VMware, all vmkernel ports use the same routing table; if you have addresses on the same subnet for the two ports, and one port can only connect to half the storage ports (like your picture in step 4), how will the TCP/IP stack know which vmkernel port to use?
teledata
Respected Contributor

Re: Multipathing with VSAN/VMware

Alex,

I'm confused about your question. You can ONLY have your vmkernels on one subnet.

What did you mean by "half the storage ports"? Each vmkernel is a unique distinct path to ALL the storage on that particular array.

VMware (as you pointed out) uses the same routing table (which is why we can't do this when using a LeftHand true Multi-Site Cluster).

The VMware kernel will decide which port to use based on your storage path selection policy (Fixed, Round Robin, or MRU).
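
If you want to see what is actually in effect, something like this from the vSphere 4 CLI should show it (just a sketch; the output format varies a bit between builds):

  # show each device with its current path selection policy
  esxcli nmp device list
  # show every path and its current state
  esxcli nmp path list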
http://www.tdonline.com
Alexander A
Valued Contributor

Re: Multipathing with VSAN/VMware

teledata,

You can have your vmkernel ports on different subnets. I know, because that is how I did it ;)

I used two vmkernel ports on two different vSwitches (I didn't know then how to separate them in one vSwitch). The storage system (Windows Storage Server) likewise had one interface to each switch and different subnets. That way, each "fabric" had its own subnet.

Like this: the "nic0" interfaces are connected to the same switch, and traffic on one switch cannot reach the second switch.
vmknic0: 10.0.0.1/25
vmknic1: 10.0.0.129/25
spnic0: 10.0.0.2/25
spnic1: 10.0.0.130/25

The routing table will now specify which interface to use for any vmkernel traffic trying to reach the 10.0.0.0/25 or 10.0.0.128/25 network.
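
If you want to verify that on the host, these classic ESX commands should show the interfaces and the vmkernel routing table (adjust for your version; this is just how I would check it):

  # list the vmkernel interfaces with their IP addresses and netmasks
  esxcfg-vmknic -l
  # show the vmkernel routing table - one entry per /25 in the layout above
  esxcfg-route -l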

If you were to use the same addresses but with a 24-bit mask, the routing table would contain two rows for the same network, and all traffic would be sent to the first row. Maybe the VMware iSCSI initiator has a way of finding the right NIC anyway; but if so, how?
teledata
Respected Contributor

Re: Multipathing with VSAN/VMware

Ah, yes, I misspoke: you CAN have different (local) subnets, however iSCSI will ONLY use one route (the default gateway for iSCSI)...

The way I understand it, you cannot use custom route statements for iSCSI, so traffic on different subnets that would require routing won't work... (perhaps?)

When I did a multi-site SAN (mind you, NOT multipathing round robin) I had to put in 2 different iSCSI networks, so that I could have both VIPs in the storage paths. In order to allow both sites to have routes to BOTH VIP subnets, I had to create 2 stretched storage VLANs across the sites (since the VMware iSCSI kernel doesn't allow custom routes)...

This way, if a NODE failed at site 1, it had another route to site 2. And if the site link failed, it still had a path to the local SAN.

In your case you are using a subnet mask to limit the IP addressing, so it basically does an inventory of everything it can "SEE" on each storage path, and routes the storage path based on that (in combination with the storage path policy).

I think you are right, if you had a 24 bit subnet mask you could have problems...
http://www.tdonline.com