StoreVirtual Storage

Ivanbre
Frequent Advisor

P 4000 VMkernel ports

Hi,

I read the

http://h21007.www2.hp.com/portal/site/dspp/menuitem.863c3e4cbcdc3f3515b49c108973a801?ciid=e87f9708310e4210VgnVCM100000a360ea10RCRD

Now it gets confusing:

1. Figure 3, page 5: the figure shows only 1 VMkernel port with 2 NICs bound.
2. Page 6 states, under "Multi-pathing iSCSI for vSphere 4": Native iSCSI multi-pathing in vSphere 4 provides superior bandwidth performance by aggregating network ports. Configuring iSCSI multi-pathing requires at least two network ports on the virtual switch. The following steps must be performed on each ESX or ESXi server individually.
- Create a second VMkernel port on the virtual switch for iSCSI.
- For each VMkernel port on the virtual switch, assign a different physical network adapter as the active adapter. This ensures the multiple VMkernel ports use different network adapters for their I/O.

For round-robin MPIO to work correctly, I know a 1:1 mapping is the right approach. That means we need to create two VMkernel ports, each bound to one physical NIC with an active/unused teaming configuration, as shown in figure 5. I do see two VMkernel ports in figure 5.

Now my question: is figure 3 an error in the documentation, since it isn't best practice?
teledata
Respected Contributor

Re: P 4000 VMkernel ports



Figure 3 appears to be a perfectly functional configuration that lets the NETWORK stack perform the failover. However, what it shows is NOT a multi-pathing storage configuration.

IMHO I agree with you: if you have 6 ports (and don't need to dedicate any for a DMZ or anything), I would opt for a true vSphere 4/LeftHand multi-pathing setup with 2 separate iSCSI VMkernel ports, each on its own vSwitch, and configure round-robin pathing through the storage stack.
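For reference, that two-vSwitch layout could be sketched from the ESX service console roughly like this (the vSwitch/vmnic names and IP addresses here are just my assumptions; adjust them to your host):

```shell
# Sketch only -- one vSwitch + one VMkernel port + one pNIC per iSCSI path.
esxcfg-vswitch -a vSwitch2                 # create the first iSCSI vSwitch
esxcfg-vswitch -L vmnic2 vSwitch2          # give it exactly one uplink
esxcfg-vswitch -A iSCSI1 vSwitch2          # port group for the VMkernel port
esxcfg-vmknic -a -i 10.0.0.11 -n 255.255.255.0 iSCSI1

esxcfg-vswitch -a vSwitch3                 # repeat for the second path
esxcfg-vswitch -L vmnic3 vSwitch3
esxcfg-vswitch -A iSCSI2 vSwitch3
esxcfg-vmknic -a -i 10.0.0.12 -n 255.255.255.0 iSCSI2
```

With one uplink per vSwitch there is nothing for the network teaming to fail over to, so all failover decisions land in the storage stack, which is the point of the exercise.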
http://www.tdonline.com
Ivanbre
Frequent Advisor

Re: P 4000 VMkernel ports

Hi,

Well, the million-dollar question is: why shouldn't people use a multipath setup in a P4000 infrastructure?

I guess figure 3 applies only when you use a storage path policy other than round robin. But we want best practice and best performance.

I guess the document is alright, but I think the author didn't separate it in a logical way :)
teledata
Respected Contributor
Solution

Re: P 4000 VMkernel ports

Yes... a bit unclear. I'm curious as well why a true round robin would NOT be considered the recommended, best-practice configuration.

Storage multi-pathing also appears to fail over MUCH faster than the network stack, so you reduce your chances of an iSCSI timeout (due to a NIC/cable/port failure) by using storage multi-pathing...

I guess that's why we have the forums.. We all know better than to rely only on "official" documentation... ;)
http://www.tdonline.com
Uwe Zessin
Honored Contributor

Re: P 4000 VMkernel ports

Figure 3 does not make sense in an ESX4 world. As described later, in ESX4 you have a 1:1 relationship between the SW iSCSI initiator port on a VMkernel port and a pNIC. You cannot bind the SW iSCSI initiator to a port group with multiple active pNICs.

The correct configuration for Figure 3 would be to add a second VMkernel port group. Maybe somebody modified an ESX3 version of the document but forgot to replace the screenshot.
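You can check the 1:1 bindings from the vSphere CLI; a quick sketch (the vmhba33 and vmk names are only examples, yours may differ):

```shell
# List the VMkernel NICs bound to the software iSCSI HBA -- each entry
# should map to a port group with exactly one active pNIC.
esxcli swiscsi nic list -d vmhba33

# List all VMkernel ports with their IP configuration for cross-checking.
esxcfg-vmknic -l
```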


On page 6:
"Configuring iSCSI multi-pathing requires at least two network ports on the virtual switch."

Well, you can also do multi-pathing with a single initiator port if your target offers the disk on multiple ports. That already worked on ESX 3.5.


> I'm curious as well, why would a true round robin NOT be
> considered the recommended and best practice configuration.

They seem to do (for non multi-site configurations) - on page 7:

esxcli nmp satp setdefaultpsp --satp VMW_SATP_DEFAULT_AA --psp VMW_PSP_RR
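Keep in mind that, as far as I know, the default PSP only applies to devices claimed after the change; volumes that are already claimed can be switched individually (the naa device ID below is a placeholder):

```shell
# Find the naa.* identifier of each P4000 volume and its current policy.
esxcli nmp device list

# Switch one already-claimed device to round robin (placeholder device ID).
esxcli nmp device setpolicy --device naa.600eb3xxxxxxxxxx --psp VMW_PSP_RR
```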
Dave Finch
Advisor

Re: P 4000 VMkernel ports

I believe that figure 3 is incorrect. The vSwitch should have 2 network adapters, but the VMkernel ports should each have only one adapter assigned. In addition, I've been running the following script to enable multipathing on ESXi 4.0 and higher:
REM vSphere CLI connection settings (Windows batch)
set VI_USERNAME=root
set VI_PASSWORD=LHN
set VI_SERVER=esx01
REM Bind both iSCSI VMkernel ports to the software iSCSI HBA
esxcli swiscsi nic add -n vmk1 -d=vmhba33
esxcli swiscsi nic add -n vmk2 -d=vmhba33
REM Unclaim the devices, set round robin as the default policy, then reclaim
esxcli corestorage claiming unclaim --type location
esxcli nmp satp setdefaultpsp --satp VMW_SATP_DEFAULT_AA --psp VMW_PSP_RR
esxcli corestorage claimrule load
esxcli corestorage claimrule run

(The esxcli corestorage claiming unclaim... command always returns an error).

This is running in 3 separate production environments. Hope that I've got it right!
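If it helps anyone, here are the checks I'd run afterwards to confirm it took effect (again assuming vmhba33 is the software iSCSI HBA on your host):

```shell
esxcli swiscsi nic list -d vmhba33   # both vmk1 and vmk2 should be listed
esxcli nmp device list               # each volume should report VMW_PSP_RR
esxcfg-mpath -l                      # expect multiple active paths per LUN
```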