StoreVirtual Storage

vSphere guidance (looking for a field guide for v4)

SOLVED
Go to solution
Steven Vallarian
Occasional Contributor

vSphere guidance (looking for a field guide for v4)

I'm looking for something similar to the vi3 field guide, but updated for the new features in vSphere.

Specifically my questions are:
Do we need to set up the SAN using the same network guidance listed in the field guide?

Does the new Distributed switch need to be used?

Are there any documents related to vSphere and SAN/iQ somewhere?

27 REPLIES
kghammond
Frequent Advisor

Re: vSphere guidance (looking for a field guide for v4)

We are also looking for a vSphere guide. In particular, we are looking for details on the MPIO implementation in vSphere 4, how it interacts with LeftHand, and how to set it up properly.

Thank You,
Kevin
Gauche
Trusted Contributor
Solution

Re: vSphere guidance (looking for a field guide for v4)

I've got that guide; it's just a matter of actually getting it published on the HP website. Sounds easy, right...
Oh well.
For now here it is as a draft on the forum.

**** edit *****
It is published now...
http://h20195.www2.hp.com/V2/GetDocument.aspx?docname=4AA3-0261ENW
Adam C, LeftHand Product Manager
kghammond
Frequent Advisor

Re: vSphere guidance (looking for a field guide for v4)

Thank you for the documentation. It looks like we had everything configured correctly for MPIO. We pulled the information from the iSCSI Configuration Guide and from some Dell EqualLogic guides.

The only remaining question we have is:

If you are using MPIO, should the NICs on the individual LeftHand nodes be bonded or not? Since MPIO is many paths, we thought there might be some benefit to having paths from all vSphere NICs to all cluster node NICs.

Do you know if LeftHand will be able to provide an MPIO solution where vSphere/LeftHand can dynamically pick paths (NICs) with little load, to help balance the iSCSI LAN traffic?

Thank you,
Kevin
Ole Thomsen_1
Trusted Contributor

Re: vSphere guidance (looking for a field guide for v4)

Great stuff, Gauche!

Can you provide similar docs for vSphere, EVA4400 and FC?

Ole Thomsen
HPEStorageGuy
HPE Blogger

Re: vSphere guidance (looking for a field guide for v4)

The document that was attached has been updated and the updated version is available on hp.com: http://bit.ly/yOfCC. The title is different - I don't know what else has changed.
Tyler Modell
Occasional Visitor

Re: vSphere guidance (looking for a field guide for v4)

@HPstorageGuy

Looks like the guide you posted was for a VI3 configuration with the P4000 SAN, whereas the prior attachment was for running vSphere 4 with an HP P4000 SAN.

Tyler
kghammond
Frequent Advisor

Re: vSphere guidance (looking for a field guide for v4)

It appears the major difference in the documentation between VI3 and vSphere is the section on MPIO.

As I stated earlier the guidance is virtually identical to VMware's iSCSI Configuration Guide for MPIO.

The one clarification is that it appears LeftHand supports the Round Robin path selection to balance and aggregate bandwidth across multiple NICs. From our testing this is working perfectly, and it should allow an individual VM to exceed 1 Gbps of iSCSI traffic.
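
For reference, the policy can be checked and set per volume from the ESX 4.x service console; the device ID below is only a placeholder, not one of our volumes:

# show each device with its current path selection policy and working paths
esxcli nmp device list
# set Round Robin on a single volume (substitute the real naa identifier from the list above)
esxcli nmp device setpolicy --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR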

I am curious if HP/LeftHand have a long-term vision to implement a custom MPIO provider that may look at utilization loads per node and per NIC and allow vSphere to dynamically shift path selections based on load.

Kevin
zxr_1
Occasional Contributor

Re: vSphere guidance (looking for a field guide for v4)

Does the multipathing section also pertain to the VSA product? My nodes are two Dell 2950s using local storage. If it does, will I see better performance implementing multipathing?

Thanks
Ben02
Occasional Advisor

Re: vSphere guidance (looking for a field guide for v4)

I would also like a copy relevant to vSphere. Can you please post it again?
Steven Vallarian
Occasional Contributor

Re: vSphere guidance (looking for a field guide for v4)

Bennie,

Click on the paperclip on Gauche's post.

kghammond
Frequent Advisor

Re: vSphere guidance (looking for a field guide for v4)

We have been trying to understand the LeftHand side of MPIO.

Assume you have two nodes in a LeftHand cluster. Both nodes have bonded NICs, and then you have a VIP for the cluster.

When you set up MPIO with two NICs on vSphere with two IPs on the same subnet, from our testing, both IPs create a path to the bonded NIC of one node in the cluster.

Three questions:

1) Will one path use one NIC in the bond and the other path use the other NIC in the bond? Or will they both use the same NIC in the bond? If the latter is true, then MPIO really doesn't gain anything.

2) Since both paths are to the same bonded NIC, does LeftHand redirect "some" of these I/O requests to the other node of the cluster? Or will all traffic go to the node identified by path selection?

3) Do we need to break the LeftHand bonded interfaces and put separate IPs on each NIC of each node in the cluster, to get more paths created so that round robin will then utilize more NICs? Do you need to place MPIO NICs across subnets to get true MPIO end to end from vSphere to LeftHand?

Thank you,
Kevin
Ben02
Occasional Advisor

Re: vSphere guidance (looking for a field guide for v4)

Thank you very much. The link was not working when I originally posted. Looks good now.
Gauche
Trusted Contributor

Re: vSphere guidance (looking for a field guide for v4)

Sheesh. I gotta pay more attention to this thread. Guess this is what I get for travelling so much and not paying enough attention to the forum.

Trying to hit the major questions here.
1- Yes, bond the NSMs; ALB is the most common, 802.3ad if you have the switches to support it. The bond is the best way to link-aggregate out of the NSM.
2- The VMware MPIO allows for link aggregation on the ESX host. This was the biggest issue with 3.5: you could not use more than one network adapter with performance benefits. I'm sooooo glad that is a solvable problem now.
3- Yes, the connections go to the same node, but as long as that node has a bond you get both dual-NIC load balancing TO the SAN via the new MPIO and NIC load balancing FROM the SAN via the bond. This makes a huge difference using regular Gigabit Ethernet, in that you can do 220 MB/s instead of 110 from a rather normal ESX config.
4- You could see some way more advanced stuff from us in the ESX MPIO realm in future releases, but I can't promise in a forum, c'mon now.
5- As I said, it was a draft, so I'm attaching my latest update, which has a better screenshot.
Adam C, LeftHand Product Manager
Steven Vallarian
Occasional Contributor

Re: vSphere guidance (looking for a field guide for v4)

Gauche,

Ok, I've implemented the MPIO, but I've got a question.

When I added the secondary paths using
esxcli swiscsi nic add -n vmk1 -d vmhba33
and
esxcli swiscsi nic add -n vmk0 -d vmhba33

and rescanned the HBAs, I would think that I would have 2 paths to each LUN (so 5 LUNs = 10 paths). Instead I have 15 paths (10 new + the existing 5 paths using vmhba33:C0, vmhba33:C1, vmhba33:C2). I'm just not sure if I need to remove the existing paths or not.
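
If it helps anyone checking the same thing, the bindings themselves can be listed from the service console with the standard ESX 4.x software iSCSI command (using the same vmhba33 as above):

# show which vmkernel NICs are currently bound to the software iSCSI HBA
esxcli swiscsi nic list -d vmhba33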

I've got a DL360 with 6 NICs.
Gauche
Trusted Contributor

Re: vSphere guidance (looking for a field guide for v4)

Interesting. The 10 paths you expected should be right, one for each of the vmkernels you bound to iSCSI. It might be a temporary condition or one of the odd VC refresh things that seem to happen often. If not, then it's something new that I've not seen. If the 15 paths stay, can you verify you only have 2 vmkernels?
Adam C, LeftHand Product Manager
M.Braak
Frequent Advisor

Re: vSphere guidance (looking for a field guide for v4)

We are using vSphere with a 4 node P4500 SAN.
I have connected the vSphere nodes with MPIO as you described. vSphere is creating 2 paths to each LUN, but the two paths are to the same P4500 node.
Wouldn't it be better to create two paths to two different nodes?
Or isn't this possible until HP releases a SAN/iQ MPIO plugin for vSphere?

The second question I have is: why does I/O get interrupted for approx. 30 seconds (path failover) if one of the paths goes down or comes back up? I still have one path left.
Tyler Modell
Occasional Visitor

Re: vSphere guidance (looking for a field guide for v4)

@M.Braak

Are you connecting to a Load Balanced cluster using the VIP or are you connecting directly to an individual node IP address?

Another thing to check, which I'm surprised hasn't been noted in this thread, is that the HP doc is actually wrong with regard to the setup of the paths. The HP doc recommends having each NIC from the VMkernels set up as standby for one another, but from what I heard at VMworld, direct from VMware engineers, and countless other sources, in addition to my own testing, this will not work correctly. You may see multiple connections on the LeftHand side, but it doesn't mean that ESX will fail over/load-balance correctly.

My suggestion for you to try: after you perform all of the steps as outlined by the HP doc, run the following command on your ESX server: vmkiscsi-tool -l -V vmhba##, where vmhba## is the HBA number of your software iSCSI initiator. If you set up everything according to the HP doc (including the commands, Round Robin on the datastore, etc.), you should see it list the multiple VMkernel NICs but with no information about the paths/VMknics (i.e. Tx, Rx, etc.). If you go back, pull the standby VMkernel NICs out and make them unused on one another, and run that command again, you should see the data listed correctly, and you should see multiple paths build on your storage side (given you are using VIP load balancing).
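
Concretely, using the vmhba33 example from earlier in this thread, the check is just:

# dump the VMkernel NICs bound to the software iSCSI initiator; with the NICs set to unused (not standby) you should see per-NIC details such as Tx/Rx
vmkiscsi-tool -l -V vmhba33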

Let me know if you have any questions or need any additional information.

Thanks,

Tyler
M.Braak
Frequent Advisor

Re: vSphere guidance (looking for a field guide for v4)

@Tyler

I'm connecting to the load-balanced cluster using the VIP address. This results in two connections to the first SAN node.

You are correct that there was an error in the HP doc; however, this error is already fixed in the latest doc. See the reply of Gauche at Sep 24, 2009 01:43:30 GMT (the new version of the doc is attached).

So I already have the second NIC set up as unused.

vmkiscsi-tool shows me two connected interfaces.

I hope HP will soon come out with an MPIO plugin which connects to all nodes in the cluster, like the MPIO DSM for Windows does.

However, I still don't understand why an approx. 30-second I/O stall is necessary when one of the paths in a Round Robin configuration fails.
kghammond
Frequent Advisor

Re: vSphere guidance (looking for a field guide for v4)

Our configuration sounds identical to M.Braak's. We are connecting to the VIP and we see two paths to the same node, one for each vmkernel.

We also have each NIC set as unused for the other vmkernels, as recommended in VMware's iSCSI guide. I didn't double-check the HP field guide. We have Round Robin turned on and set as the default path selection option.
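
(For anyone repeating this: we set Round Robin as the default once, rather than per volume. On ESX 4.x that is done by changing the default PSP for the active/active SATP; double-check the SATP name on your own host before running it.)

# make Round Robin the default path selection policy for devices claimed by the default active/active SATP
esxcli nmp satp setdefaultpsp --satp VMW_SATP_DEFAULT_AA --psp VMW_PSP_RR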

Running esxtop (n, s 2, T) shows both iSCSI NICs sending and receiving roughly 50% of the traffic.

Kevin
teledata
Respected Contributor

Re: vSphere guidance (looking for a field guide for v4)

Was just reading this tonight. I'm setting up a customer with vSphere 4, and we are configuring a Multi-Site SAN (campus SAN) with 2 VIP subnets... I was banging my head against the wall until I read this article:

ANYONE using vSphere 4 and LeftHand SANs should read this article!

http://virtualgeek.typepad.com/virtual_geek/2009/09/a-multivendor-post-on-using-iscsi-with-vmware-vsphere.html
http://www.tdonline.com
Gauche
Trusted Contributor

Re: vSphere guidance (looking for a field guide for v4)

The guide has been published now, so for future reference please use this link instead of the attachment I provided earlier. The link will actually get future updates.

http://h20195.www2.hp.com/V2/GetDocument.aspx?docname=4AA3-0261ENW
Adam C, LeftHand Product Manager
teledata
Respected Contributor

Re: vSphere guidance (looking for a field guide for v4)

I'm still having problems understanding the pathing issues...

I setup a customer with a multi-site san.
We configured 2 VIPs (each in its own unique subnet) and then did a stretched VLAN of BOTH storage VLANs across the WAN. (It's a 10 Gb fiber connection and all VLANing is done via switches, so there are really no significant extra hops.)

I understand you can't do binding/MPIO with the multi-site SAN (because of a VMware restriction on not performing vmkernel routing).

So I've got 2 vmkernels, one in subnet A and one in subnet B... and it DOES do some failover, although there is an oddity: instead of seeing 2 paths, 1 in each subnet, I see 2 paths but it lists them both as using the same subnet:

ie:
Storage Paths:
Path1: vmhba34:C0:T4:L0 (target 172.30.5.18:3260)
Path2: vmhba34:C1:T2:L0 (target 172.30.5.18:3260)

Dynamic Discovery targets:
172.30.5.10 (VIP of the local 5 storage subnet)
172.30.105.10 (VIP for the remote 105 storage subnet)

vmkernel01: 172.30.5.54
vmkernel02: 172.30.105.54

Shouldn't I see 1 path on the 5 network and the 2nd on the 105 network?
http://www.tdonline.com
M.Braak
Frequent Advisor

Re: vSphere guidance (looking for a field guide for v4)

@Gauche: What is HP's opinion about the best practice when connecting a second iSCSI target within a Windows VM?
Use VMware native RDM mapping, or use the MS iSCSI initiator with the LeftHand MPIO DSM?

Before, VMware couldn't do multipathing, so the MS iSCSI initiator with the DSM was best practice; but now that VMware also does multipathing, things could have changed.

The best solution, in my opinion, is for HP to release an MPIO plugin for vSphere, but I think that will take a while to come out (if it's even planned to be created).
Ben02
Occasional Advisor

Re: vSphere guidance (looking for a field guide for v4)

Any news on this? I still don't see vSphere truly multipathing to multiple LH nodes. The current VMware multipathing simply makes multiple connections to the same node. Any ideas?