P 4000 VMkernel ports
06-10-2010 06:04 AM
I read the
http://h21007.www2.hp.com/portal/site/dspp/menuitem.863c3e4cbcdc3f3515b49c108973a801?ciid=e87f9708310e4210VgnVCM100000a360ea10RCRD
Now it gets confusing:
1. Figure 3, page 5: the figure shows only one VMkernel port with two NICs bound.
2. Page 6, under "Multi-pathing iSCSI for vSphere 4", it states: "Native iSCSI multi-pathing in vSphere 4 provides superior bandwidth performance by aggregating network ports. Configuring iSCSI multi-pathing requires at least two network ports on the virtual switch. The following steps must be performed on each ESX or ESXi server individually.
- Create a second VMkernel port on the virtual switch for iSCSI.
- For each VMkernel port on the virtual switch, assign a different physical network adapter as the active adapter. This ensures the multiple VMkernel ports use different network adapters for their I/O."
For round-robin MPIO to work correctly, I know a 1:1 mapping is the right approach. That means we need VMkernel ports bound 1:1 to the two physical NICs, with an active/unused adapter configuration as shown in figure 5, and indeed figure 5 shows two VMkernel ports.
Now my question: is figure 3 simply wrong in the documentation, since that isn't best practice?
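For anyone reading along, the page-6 layout (one vSwitch, two VMkernel ports, each tied to its own NIC) can be sketched from the ESX 4 service console roughly like this. The vSwitch name, port group names, IPs, and vmnic numbers are examples only; adjust to your environment:

```shell
# Hypothetical layout for the page-6 steps: one vSwitch, two uplinks,
# one VMkernel port per iSCSI port group (names/IPs are examples only)
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1
esxcfg-vswitch -A iSCSI1 vSwitch1
esxcfg-vswitch -A iSCSI2 vSwitch1
esxcfg-vmknic -a -i 10.0.0.11 -n 255.255.255.0 iSCSI1
esxcfg-vmknic -a -i 10.0.0.12 -n 255.255.255.0 iSCSI2
# Then, per port group, set one vmnic active and the other unused
# (vSphere Client: port group > Edit > NIC Teaming > failover order)
```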
06-10-2010 06:31 AM
Re: P 4000 VMkernel ports
Figure 3 appears to be a perfectly functional configuration that lets the network stack perform the failover. However, what it shows is not a multi-pathing storage configuration.
IMHO I agree with you: if you have six ports (and don't need to dedicate any to a DMZ or anything), I would opt for a true vSphere 4/LeftHand multipathing setup, with two separate iSCSI VMkernel ports each on their own vSwitch, and configure Round Robin pathing through the storage stack.
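Just to illustrate why the 1:1 VMkernel-to-NIC mapping matters for Round Robin: the path selector hands consecutive I/Os to alternating paths, so two paths only add bandwidth if they sit on different physical adapters. A conceptual sketch (plain Python, not ESX code; the path names are made up):

```python
from itertools import cycle

# Conceptual sketch: round-robin path selection sends consecutive
# I/Os down alternating paths. If both "paths" shared one physical
# NIC, alternating between them would gain nothing.
paths = ["vmk1->vmnic2", "vmk2->vmnic3"]  # hypothetical 1:1 bindings
selector = cycle(paths)

def next_path():
    """Return the path for the next I/O, alternating between paths."""
    return next(selector)

ios = [next_path() for _ in range(4)]
print(ios)  # ['vmk1->vmnic2', 'vmk2->vmnic3', 'vmk1->vmnic2', 'vmk2->vmnic3']
```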
06-10-2010 06:37 AM
Re: P 4000 VMkernel ports
Well, the million-dollar question is why people shouldn't use a multipath setup in a P4000 infrastructure.
I guess figure 3 applies only when you use a storage path policy other than Round Robin. But we want best practice and best performance.
I guess the document is alright, but I think the author didn't separate it in a logical way :)
06-10-2010 06:56 AM
Storage multi-pathing also appears to fail over MUCH faster than the network stack, so you reduce your chances of an iSCSI timeout (due to a NIC/cable/port failure) by using storage multi-pathing...
I guess that's why we have the forums... We all know better than to rely only on "official" documentation... ;)
06-10-2010 08:11 AM
Re: P 4000 VMkernel ports
The correct configuration in Figure 3 would be to add a second VMkernel port group. Maybe somebody modified an ESX 3 version of the document but forgot to replace the screenshot.
On page 6:
"Configuring iSCSI multi-pathing requires at least two network ports on the virtual switch."
Well, you can also do multi-pathing with a single initiator port if your target offers the disk on multiple ports. That already worked on ESX 3.5.
> I'm curious as well, why would a true round robin NOT be
> considered the recommended and best practice configuration.
They do seem to recommend it (for non-multi-site configurations); on page 7:
esxcli nmp satp setdefaultpsp --satp VMW_SATP_DEFAULT_AA --psp VMW_PSP_RR
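After setting the default PSP, you can check that the volumes were actually claimed with Round Robin; device names will differ per environment:

```shell
esxcli nmp device list
# each P4000 LUN should report: Path Selection Policy: VMW_PSP_RR
esxcli nmp path list
```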
06-10-2010 05:35 PM
Re: P 4000 VMkernel ports
set VI_USERNAME=root
set VI_PASSWORD=LHN
set VI_SERVER=esx01
esxcli swiscsi nic add -n vmk1 -d=vmhba33
esxcli swiscsi nic add -n vmk2 -d=vmhba33
esxcli corestorage claiming unclaim --type location
esxcli nmp satp setdefaultpsp --satp VMW_SATP_DEFAULT_AA --psp VMW_PSP_RR
esxcli corestorage claimrule load
esxcli corestorage claimrule run
(The esxcli corestorage claiming unclaim... command always returns an error).
This is running in 3 separate production environments. Hope that I've got it right!
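If it helps anyone double-check the result, the NIC bindings and claim rules can be listed afterwards (vmhba33 is the software iSCSI adapter number from the script above; yours may differ):

```shell
esxcli swiscsi nic list -d vmhba33
esxcli corestorage claimrule list
```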