
vSphere guidance (looking for a field guide for v4)

 
Steven Vallarian
Occasional Contributor

Re: vSphere guidance (looking for a field guide for v4)

Bennie,

Click on the paperclip on Gauche's post.

kghammond
Frequent Advisor

Re: vSphere guidance (looking for a field guide for v4)

We have been trying to understand the LeftHand side of MPIO.

Assume you have two nodes in a LeftHand cluster. Both nodes have bonded NICs, and there is a VIP for the cluster.

When you set up MPIO on vSphere with two NICs and two IPs on the same subnet, from our testing both IPs create a path to the bonded NIC of one node in the cluster.
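For reference, the vSphere-side binding we used looks roughly like this (the vmk and vmhba numbers are just examples from our host, yours may differ):

esxcfg-vmknic -l
esxcli swiscsi nic add -n vmk0 -d vmhba33
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic list -d vmhba33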

Three questions:

1) Will one path use one NIC in the bond and the other path use the other NIC in the bond? Or will they both use the same NIC in the bond? If the latter is true, then MPIO really doesn't gain anything.

2) Since both paths are to the same bonded NIC, does LeftHand redirect "some" of these I/O requests to the other node of the cluster? Or will all traffic go to the node identified by path selection?

3) Do we need to break the LeftHand bonded interfaces and put separate IPs on each NIC of each node in the cluster, so that more paths are created and round robin can then utilize more NICs? And do you need to place the MPIO NICs across subnets to get true end-to-end MPIO from vSphere to LeftHand?

Thank you,
Kevin
Ben02
Occasional Advisor

Re: vSphere guidance (looking for a field guide for v4)

Thank you very much. The link was not working when I originally posted. Looks good now.
Gauche
Trusted Contributor

Re: vSphere guidance (looking for a field guide for v4)

Sheesh. I gotta pay more attention to this thread. Guess this is what I get for travelling so much and not paying enough attention to the forum.

Trying to hit the major questions here.
1- Yes, bond the NSMs. ALB is the most common; use 802.3ad if you have switches that support it. The bond is the best way to aggregate links out of the NSM.
2- The VMware MPIO allows for link aggregation on the ESX host. This was the biggest issue with ESX 3.5: you could not use more than one network adapter and get a performance benefit. I'm sooooo glad that is a solvable problem now.
3- Yes, the connections go to the same node, but as long as that node has a bond you get dual-NIC load balancing TO the SAN via the new MPIO and NIC load balancing FROM the SAN via the bond. This makes a huge difference on regular gigabit Ethernet: you can do 220 MB/s instead of 110 from a fairly normal ESX config. (A quick example of setting Round Robin from the console is below.)
4- You could see some way more advanced stuff from us in the ESX MPIO realm in future releases, but I can't promise anything in a forum, c'mon now.
5- As I said, it was a draft, so I'm attaching my latest update, which has a better screenshot.
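A minimal sketch of turning on Round Robin for a volume from the service console (the device ID below is a placeholder, substitute your own from the list output):

esxcli nmp device list
esxcli nmp device setpolicy --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR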
Adam C, LeftHand Product Manager
Steven Vallarian
Occasional Contributor

Re: vSphere guidance (looking for a field guide for v4)

Gauche,

OK, I've implemented MPIO, but I've got a question.

When I added the secondary paths using
esxcli swiscsi nic add -n vmk1 -d vmhba33
and
esxcli swiscsi nic add -n vmk0 -d vmhba33

and rescanned the HBAs, I would think I would have two paths to each LUN (so 5 LUNs = 10 paths). Instead I have 15 paths (10 new + the existing 5 paths using vmhba33:C0, vmhba33:C1, vmhba33:C2). I'm just not sure whether I need to remove the existing paths or not.

I've got a DL360 with 6 NICs.
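In case it helps, this is how I'm looking at the path count (vmhba33 is my software iSCSI adapter, adjust for yours):

esxcfg-mpath -b

and then counting the vmhba33 entries under each LUN.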
Gauche
Trusted Contributor

Re: vSphere guidance (looking for a field guide for v4)

Interesting. The 10 paths you expected should be right: one per LUN for each of the vmkernel ports you bound to iSCSI. It might be a temporary condition or one of the odd VC refresh things that seem to happen often. If not, then it is something new that I've not seen. If the 15 paths stay, can you verify that you only have 2 vmkernel ports?
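Something like this from the service console should show the vmkernel ports and which ones are bound to the software iSCSI adapter (vmhba33 is just an example number):

esxcfg-vmknic -l
esxcli swiscsi nic list -d vmhba33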
Adam C, LeftHand Product Manager
M.Braak
Frequent Advisor

Re: vSphere guidance (looking for a field guide for v4)

We are using vSphere with a 4-node P4500 SAN.
I have connected the vSphere hosts with MPIO as you described. vSphere is creating two paths to each LUN, but both paths go to the same P4500 node.
Wouldn't it be better to create two paths to two different nodes?
Or isn't this possible until HP releases a SAN/iQ MPIO plugin for vSphere?

The second question I have is: why does I/O get interrupted for approximately 30 seconds (path failover) when one of the paths goes down or comes back up? I still have one path left.
Tyler Modell
New Member

Re: vSphere guidance (looking for a field guide for v4)

@M.Braak

Are you connecting to a load-balanced cluster using the VIP, or are you connecting directly to an individual node's IP address?

Another thing to check, which I'm surprised hasn't been noted in this thread, is that the HP doc is actually wrong with regard to the setup of the paths. The HP doc recommends having each NIC from the VMkernel ports set up as a standby for the other, but from what I heard at VMworld directly from VMware engineers, from countless other sources, and from my own testing, this will not work correctly. You may see multiple connections on the LeftHand side, but that doesn't mean ESX will fail over or load-balance correctly.

My suggestion: after you perform all of the steps outlined in the HP doc, run the following command on your ESX server: vmkiscsi-tool -l -V vmhba## (where vmhba## is the HBA number of your software iSCSI initiator). If you set up everything according to the HP doc (including the commands, Round Robin on the datastore, etc.), you will see it list the multiple VMkernel NICs but with no information about the paths/VMkernel NICs (i.e. Tx, Rx, etc.). If you go back, pull the standby VMkernel NICs out, make them unused for one another, and run that command again, the data should list correctly and you should see multiple paths build up on your storage side (given you are using VIP load balancing).
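For example, assuming the software iSCSI adapter is vmhba33 (substitute your own adapter number):

vmkiscsi-tool -l -V vmhba33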

Let me know if you have any questions or need any additional information.

Thanks,

Tyler
M.Braak
Frequent Advisor

Re: vSphere guidance (looking for a field guide for v4)

@Tyler

I'm connecting to the load-balanced cluster using the VIP address. This results in two connections to the first SAN node.

You did find an error in the HP doc. That is correct; however, this error is already fixed in the latest doc. See Gauche's reply of Sep 24, 2009 01:43:30 GMT (the new version of the doc is attached).

So I already have the second NIC set to unused.

vmkiscsi-tool shows me two connected interfaces.

I hope HP will soon release an MPIO plugin that connects to all nodes in the cluster, like the MPIO DSM for Windows does.

However, I still don't understand why an approximately 30-second I/O stall is necessary when one of the paths in a Round Robin configuration fails.
kghammond
Frequent Advisor

Re: vSphere guidance (looking for a field guide for v4)

Our configuration sounds identical to M.Braak's. We are connecting to the VIP and we see two paths to the same node, one for each vmkernel port.

We also have each NIC set as unused for the other vmkernel port, as recommended in VMware's iSCSI guide. I didn't double-check the HP field guide. We have Round Robin turned on and set as the default path selection policy.
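For reference, this is roughly how we set the default (the SATP name assumes the LeftHand volumes claim under the default active/active SATP, so double-check yours with the list command first):

esxcli nmp device list
esxcli nmp satp setdefaultpsp --satp VMW_SATP_DEFAULT_AA --psp VMW_PSP_RR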

Running esxtop (n, s 2, T) shows both iSCSI NICs sending and receiving roughly 50% of the traffic.

Kevin