vSphere guidance (looking for a field guide for v4)
09-23-2009 10:33 AM
Re: vSphere guidance (looking for a field guide for v4)
Click on the paperclip on Gauche's post.
09-23-2009 11:05 AM
Re: vSphere guidance (looking for a field guide for v4)
Assume you have two nodes in a LeftHand cluster. Both nodes have bonded NICs, and you have a VIP for the cluster.
When you set up MPIO with two NICs on vSphere, with two IPs on the same subnet, our testing shows that both IPs create a path to the bonded NIC of one node in the cluster.
Three questions:
1) Will one path use one NIC in the bond and the other path use the other NIC, or will both paths use the same NIC in the bond? If the latter, MPIO really doesn't gain anything.
2) Since both paths go to the same bonded NIC, does LeftHand redirect some of these I/O requests to the other node of the cluster, or will all traffic go to the node identified by path selection?
3) Do we need to break the LeftHand bonded interfaces and put separate IPs on each NIC of each node in the cluster, so that more paths are created and round robin can use more NICs? Do you need to place MPIO NICs across subnets to get true end-to-end MPIO from vSphere to LeftHand?
Thank you,
Kevin
09-23-2009 12:24 PM
Re: vSphere guidance (looking for a field guide for v4)
09-23-2009 05:43 PM
Re: vSphere guidance (looking for a field guide for v4)
Trying to hit the major questions here.
1 - Yes, bond the NSMs. ALB is the most common; use 802.3ad if you have switches that support it. The bond is the best way to link-aggregate out of the NSM.
2 - The VMware MPIO allows link aggregation on the ESX host. This was the biggest issue with 3.5: you could not use more than one network adapter with a performance benefit. I'm so glad that is a solvable problem now.
3 - Yes, the connections go to the same node, but as long as that node has a bond you get dual-NIC load balancing TO the SAN via the new MPIO, and NIC load balancing FROM the SAN via the bond. This makes a huge difference with regular gigabit Ethernet: you can do 220 MB/s instead of 110 MB/s from a fairly normal ESX config.
4 - You could see some much more advanced stuff from us in the ESX MPIO realm in future releases, but I can't promise anything in a forum, c'mon now.
5 - As I said, it was a draft, so I'm attaching my latest update, which has a better screenshot.
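The setup described above (one vmkernel port per physical NIC, bound to the software iSCSI initiator, with Round Robin path selection) can be sketched with the ESX 4.0 service console CLI. The vmk0/vmk1 and vmhba33 names below are examples only and will differ per host:

```shell
# Bind two vmkernel ports (one per physical NIC) to the software iSCSI HBA.
# Check your own host's names first:
#   esxcfg-vmknic -l                      # vmkernel ports
#   esxcfg-scsidevs -a                    # adapter (vmhba) numbers
esxcli swiscsi nic add -n vmk0 -d vmhba33
esxcli swiscsi nic add -n vmk1 -d vmhba33

# Rescan so the new paths appear, then set Round Robin on each LUN.
esxcfg-rescan vmhba33
esxcli nmp device setpolicy --device <naa.id> --psp VMW_PSP_RR
```

The `<naa.id>` placeholder is the device identifier of the LeftHand LUN, visible in `esxcli nmp device list`.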
09-24-2009 05:26 AM
Re: vSphere guidance (looking for a field guide for v4)
OK, I've implemented the MPIO, but I've got a question.
When I added the secondary paths using
esxcli swiscsi nic add -n vmk1 -d vmhba33
and
esxcli swiscsi nic add -n vmk0 -d vmhba33
and rescanned the HBAs, I expected to have 2 paths to each LUN (so 5 LUNs = 10 paths). Instead I have 15 paths: the 10 new ones plus the existing 5 paths using vmhba33:C0, vmhba33:C1, vmhba33:C2. I'm just not sure whether I need to remove the existing paths or not.
I've got a DL360 with 6 NICs.
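One way to see where the extra paths come from (assuming vmhba33 is the software iSCSI adapter, as above) is to list the port bindings and then the per-LUN paths; the channel number in each runtime name (C0, C1, C2) shows which binding a path belongs to:

```shell
# List the vmkernel NICs currently bound to the software iSCSI adapter.
esxcli swiscsi nic list -d vmhba33

# List every path with its runtime name (e.g. vmhba33:C1:T0:L2);
# paths sharing a channel number came from the same binding.
esxcfg-mpath -l | grep vmhba33
```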
09-24-2009 06:59 PM
Re: vSphere guidance (looking for a field guide for v4)
10-20-2009 03:17 AM
Re: vSphere guidance (looking for a field guide for v4)
I have connected the vSphere nodes with MPIO as you described. vSphere is creating 2 paths to each LUN, but both paths go to the same P4500 node.
Wouldn't it be better to create two paths to two different nodes? Or isn't that possible until HP releases a SAN/iQ MPIO plugin for vSphere?
My second question: why does I/O get interrupted for approx. 30 seconds (path failover) when one of the paths goes down or comes back up? I still have one path left.
10-20-2009 03:50 AM
Re: vSphere guidance (looking for a field guide for v4)
Are you connecting to a load-balanced cluster using the VIP, or are you connecting directly to an individual node's IP address?
Another thing to check, which I'm surprised hasn't been noted in this thread, is that the HP doc is actually wrong about the setup of the paths. The HP doc recommends having each NIC from the VMkernel ports set up as standby for one another, but from what I heard at VMworld, directly from VMware engineers and countless other sources, in addition to my own testing, this will not work correctly. You may see multiple connections on the LeftHand side, but that doesn't mean ESX will fail over or load-balance correctly.
My suggestion: after you perform all of the steps outlined in the HP doc, run the following command on your ESX server: vmkiscsi-tool -l -V vmhba## (where vmhba## is the HBA number of your software iSCSI initiator). If you set up everything according to the HP doc (including the commands, Round Robin on the datastore, etc.), you should see it list the multiple VMkernel NICs but with no information about the paths/vmknics (i.e. Tx, Rx, etc.). If you go back, pull the standby vmknics out, set them to unused on one another, and run the command again, you should see the data listed correctly, and you should see multiple paths build on your storage side (given you are using VIP load balancing).
Let me know if you have any questions or need any additional information.
Thanks,
Tyler
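Tyler's verification step can be run as a short command sequence. The vmhba33 name is an assumption; find your own software iSCSI adapter number first:

```shell
# Identify the software iSCSI adapter (the vmhba## to pass below).
esxcfg-scsidevs -a | grep -i iscsi

# Dump the initiator's bound vmknic details. With the NIC teaming
# failover order set to active/unused (not active/standby), each
# bound vmknic should report its own Tx/Rx statistics rather than
# appearing as an empty entry.
vmkiscsi-tool -l -V vmhba33
```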
10-20-2009 04:37 AM
Re: vSphere guidance (looking for a field guide for v4)
I'm connecting to a load-balanced cluster using the VIP address. This results in two connections to the first SAN node.
You did find an error in the HP doc. That is correct; however, this error is already fixed in the latest doc. See the reply from Gauche at Sep 24, 2009 01:43:30 GMT (the new version of the doc is attached).
So I already have the second NIC set up as unused.
vmkiscsi-tool shows me two connected interfaces.
I hope HP will soon come out with an MPIO plugin that connects to all nodes in the cluster, like the MPIO DSM for Windows does.
However, I still don't understand why an approx. 30-second I/O stall is necessary when one of the paths in a Round Robin configuration fails.
10-20-2009 04:56 AM
Re: vSphere guidance (looking for a field guide for v4)
We also have each NIC set as unused for the other vmkernel ports, as recommended in VMware's iSCSI guide; I didn't double-check the HP field guide. We have Round Robin turned on and set as the default path selection option.
Running esxtop (n, s 2, T) shows both iSCSI NICs sending and receiving roughly 50% of the traffic.
Kevin
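The interactive esxtop check above (n for the network view, s 2 for a 2-second refresh) can also be captured non-interactively in batch mode for a quick look at the per-NIC counters; this is just one way to slice the CSV output:

```shell
# Take a single batch-mode snapshot and pull out the vmnic fields;
# with Round Robin working, both iSCSI vmnics should show similar
# transmit and receive rates.
esxtop -b -n 1 | tr ',' '\n' | grep -i vmnic
```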