
SOLVED
danletkeman
Frequent Advisor

MPIO paths

Hello,

 

My test setup:

 

2x ESXi 5 Hosts

2x P4500 Nodes

2x Dedicated iSCSI switches

 

Each node has one NIC connected to each switch, and each host has one NIC connected to each switch, for a total of two iSCSI pNICs per host.

 

Created the vSwitch with two VMkernel ports for the two iSCSI connections, and bound the two NICs under the software iSCSI adapter settings.
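
For reference, the same binding can also be done from the ESXi 5 CLI. This is just a sketch; vmhba33 and vmk1/vmk2 are placeholders, so substitute your own software iSCSI adapter and VMkernel port names:

esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
esxcli iscsi networkportal list --adapter=vmhba33

The list command should show both vmk ports bound to the adapter.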

 

Changed the datastore pathing to round robin, and all is well. I tested some traffic, and the ESXi host balances the load perfectly between the two NICs.
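
For reference, the same pathing change can be made from the CLI (a sketch; the device name is a placeholder for your volume's naa identifier):

esxcli storage nmp device set --device=naa.6000eb<your volume id> --psp=VMW_PSP_RR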

 

Now for the problem. The screen where you select round robin shows two paths to that particular LUN, but both paths go to the same physical SAN node. This limits the entire LUN to 1 Gbps of throughput, because it is only using the one NIC on the one node. I created a second volume on the SAN, and that datastore's paths in ESXi show the same thing, except now they point to the other SAN node.
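
You can see the same thing from the CLI (a sketch; the device name is a placeholder for your volume's naa identifier):

esxcli storage core path list --device=naa.6000eb<your volume id>

Each path entry shows the target it connects to, and in my case both entries point at the same node.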

 

Does the volume really only reside on one node, or am I doing something wrong with the ESXi setup?

 

Thanks,

Dan.

9 REPLIES
gerance
Occasional Advisor

Re: MPIO paths

Hello,

Can you tell us what type of volume you created (Network RAID-10 or not)?

Regards,

Jay Cardin
Frequent Advisor

Re: MPIO paths

That is how it works.  VMware does not have the ability to connect to every node for every LUN the way the DSM for Windows does.  It doesn't matter whether you use MPIO, Network RAID-10, or Network RAID-0; ESXi still only talks through a single gateway node.

 

When I set up my LH SAN, I created one datastore for each LH node and manually load-balanced the system.

 

If you are set up for Network RAID-10 and you have two P4500s, then the complete volume exists on both.  Even though VMware is only talking to a single gateway node, if that node fails, the other P4500 will take over.  The failover takes about 10 to 15 seconds to complete.  Once the original P4500 is back online and restriped, it will resume gateway duties (on 9.0 and higher).
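
If you want to check which gateway a given volume is currently using, something like this should show it (a sketch; the device name is a placeholder):

esxcli storage nmp device list --device=naa.6000eb<your volume id>

The "Working Paths" line tells you which path the host is actually sending I/O down.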

5y53ng
Regular Advisor

Re: MPIO paths

As Jay mentioned, you're seeing the two paths to each LUN's gateway node.  You can obtain some additional performance by changing the path selection policy to IOPS and lowering the number of consecutive IOPS per path below the default, which is 1000.

 

To get a list of devices use:

 

ls /vmfs/devices/disks/naa.6000eb* (I think most HP volumes will start with naa.6000eb)

 

To get the path selection policy settings:

 

esxcli storage nmp psp roundrobin deviceconfig get --device=<X> (where <X> is a device name obtained from the list above)

 

To set the path selection policy to IOPS and specify the number of consecutive IOPS per path, use:

 

esxcli storage nmp psp roundrobin deviceconfig set --type "iops" --iops=<N> --device=<X> (where <N> is the consecutive IOPS count)

 

The best number to use for consecutive IOPS is open for debate, but I settled on 3. I never noticed much of a difference for any value between 1 and 64.
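
If you have more than a couple of volumes, a small shell loop saves repeating the set command (a sketch, assuming as above that all of your HP volumes start with naa.6000eb):

for dev in $(esxcfg-scsidevs -c | awk '{print $1}' | grep naa.6000eb); do
  esxcli storage nmp psp roundrobin deviceconfig set --type=iops --iops=3 --device=$dev
done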

danletkeman
Frequent Advisor

Re: MPIO paths

It's a Network RAID-10 volume.

danletkeman
Frequent Advisor

Re: MPIO paths

I'll have to try changing the IOPS setting.  With the default policy it looks like I can only get about 1 Gbps to the node from a host with two VMkernel ports.  Will changing the IOPS setting increase this speed?  It seems as if all of the traffic is going from the ESXi host to one NIC on one node.

 

I also have an odd problem where traffic from the ESXi host to the SAN goes through the trunk port between the switches instead of going directly to the SAN.  I called HP support and they were stumped.

 

E.g.:

 

Each node is connected to each switch, but this is how the traffic flows:

 

ESXi host ---vmk1---switch1---node1 (gateway node for LUN1)
    |                  | (trunk)
    ----vmk2--------switch2

 

When it should flow like this:

 

ESXi host ---vmk1---switch1---node1 (gateway node for LUN1)
    |                            |
    ----vmk2--------switch2------

5y53ng
Regular Advisor

Re: MPIO paths

Could you just disallow the iSCSI VLAN on the trunk between the switches?

Gediminas Vilutis
Frequent Advisor
Solution

Re: MPIO paths

 

 

The described behaviour is exactly how IP over Ethernet works, and nothing can be done about it; it is hardwired into Ethernet's operating logic. An IP address is mapped to an Ethernet MAC address, and that mapping is 1-to-1. So when a P4500 node bonds two NICs into a logical ALB interface, it still announces itself to the network via one interface, and all incoming traffic reaches the node through that interface. The second interface in the ALB bond can be (and is) used for outgoing traffic only.

 

Gediminas

 

 

danletkeman
Frequent Advisor

Re: MPIO paths

No, you cannot disallow the iSCSI VLAN on the trunk port. You lose quorum.
5y53ng
Regular Advisor

Re: MPIO paths

Ya know... I really missed the obvious there. Sorry about that.