MPIO paths
09-10-2012 02:05 PM
Hello,
My test setup:
2x ESXi 5 Hosts
2x P4500 Nodes
2x dedicated iSCSI switches
Each node has one NIC connected to each switch, and each host has one NIC connected to each switch, for a total of two iSCSI pNICs per host.
I created the vSwitch with two VMkernel ports for the two iSCSI connections and added the two NICs under the software iSCSI adapter settings.
I changed the datastore pathing to round robin, and all is well. I tested some traffic, and the ESXi host balances the load perfectly between the two NICs.
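For reference, the equivalent CLI steps look roughly like this (vmhba33 and vmk1/vmk2 are the names from my setup, and naa.6000eb... is a placeholder for the real device ID; substitute your own):

# Bind both VMkernel ports to the software iSCSI adapter:
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2

# Set the LUN's path selection policy to round robin:
esxcli storage nmp device set --device=naa.6000eb... --psp=VMW_PSP_RR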
Now for the problem. When I look at the paths available in the screen where you select round robin, it shows two paths to that particular LUN, but both paths go to the same physical SAN node. This results in only 1 Gbps of throughput for the entire LUN, because it is only using one NIC on one node. I created a second volume on the SAN, and looking at that datastore's paths in ESXi shows the same thing, except now they go to the other SAN node.
Does a volume really reside on only one node? Or am I doing something wrong with the ESXi setup?
Thanks,
Dan.
Solved! Go to Solution.
09-11-2012 01:21 AM
Re: MPIO paths
Hello,
Can you tell us the type of volume you created (Network RAID-10 or not)?
Regards
09-11-2012 06:05 AM - edited 09-11-2012 06:09 AM
Re: MPIO paths
That is how it works. VMware does not have the ability to connect to every node for every LUN the way the DSM for Windows does. It doesn't matter whether you use MPIO, Network RAID-10, or Network RAID-0; it still only talks through a single gateway node.
When I set up my LH SAN, I created one datastore per LH node and manually load-balanced the system.
If you are set up for Network RAID-10 and you have two P4500s, then the complete volume exists on both. Even though VMware is only talking to a single gateway node, if that node fails, the other P4500 will take over. It takes about 10 to 15 seconds for the failover to complete. Once the original P4500 is back online and restriped, it will resume gateway duties (on SAN/iQ 9.0 and higher).
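If you want to confirm which node a given datastore is actually talking to, the path list shows the target portal IP for each path (the device ID below is just a placeholder):

# The target portal IP in the output tells you which storage node
# is acting as the gateway for that LUN:
esxcli storage nmp path list --device=naa.6000eb...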
09-11-2012 07:13 AM
Re: MPIO paths
As Jay mentioned, you're seeing the paths to each gateway connection. You can obtain some additional performance by changing the path selection policy to IOPS and changing the number of consecutive IOPS per path to something lower than the default, which is 1000.
To get a list of devices, use:
ls /vmfs/devices/disks/naa.6000eb* (I think most HP volumes will start with naa.6000eb)
To get the current path selection policy settings:
esxcli storage nmp psp roundrobin deviceconfig get --device=<device> (where <device> is one of the naa IDs obtained above)
To set the path selection policy to IOPS and specify the number of consecutive IOPS per path, use:
esxcli storage nmp psp roundrobin deviceconfig set --type "iops" --iops=<N> --device=<device>
The best number to use for consecutive IOPS is open for debate, but I settled on 3. I never noticed much of a difference for any value between 1 and 64.
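If you want to apply that to every LeftHand volume in one go, something like this should work from the ESXi shell (assuming, as above, that all your HP volumes share the naa.6000eb prefix; iops=3 is just the value I settled on):

for dev in /vmfs/devices/disks/naa.6000eb*; do
    case "$dev" in *:*) continue ;; esac   # skip partition entries like naa.xxx:1
    d=$(basename "$dev")
    esxcli storage nmp device set --device=$d --psp=VMW_PSP_RR
    esxcli storage nmp psp roundrobin deviceconfig set --type "iops" --iops=3 --device=$d
done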
09-11-2012 02:46 PM
Re: MPIO paths
Network RAID-10 volume.
09-11-2012 02:53 PM
Re: MPIO paths
I'll have to try changing the IOPS policy. With the default IOPS setting it looks like I can only get about 1 Gbps to the node from a host with two VMkernel ports. Will changing the IOPS setting increase this speed? It seems as if all of the traffic is going from the ESXi host to one NIC on one node.
I also have an odd problem where traffic from the ESXi host to the SAN goes through the trunk port between the switches instead of going directly to the SAN. I called HP support, and they were stumped.
E.g.:
Each node is connected to each switch, but this is how the traffic flows:

ESX host --vmk1--> switch1 --> node1 (gateway node for LUN1)
ESX host --vmk2--> switch2 --trunk--> switch1 --> node1

When it should flow like this:

ESX host --vmk1--> switch1 --> node1 (gateway node for LUN1)
ESX host --vmk2--> switch2 --> node1 (via node1's NIC on switch2)
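One way to verify this from the host is to list the live iSCSI connections; each entry shows the local vmk IP and the target portal IP, so you can see which NIC each session is actually using:

# Per-connection local and target addresses for the software iSCSI adapter:
esxcli iscsi session connection list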
09-12-2012 10:23 AM
Re: MPIO paths
Could you just disallow the iSCSI VLAN on the trunk between the switches?
09-12-2012 10:52 AM
Solution
The described behaviour is exactly how IP over Ethernet works, and nothing can be done about it; it comes down to how Ethernet itself operates. An IP address is mapped to an Ethernet MAC address, and this is a 1-to-1 mapping. So when a P4500 node bonds two NICs into a logical ALB interface, it still announces itself to the network via one interface, and all incoming traffic reaches the node via that interface. The second interface in the ALB bond can be (and is) used for outgoing traffic only.
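You can see this on the switches themselves: the node's one announced MAC is learned on exactly one port, so both switches forward everything toward that port (Cisco-style output shown; the MAC, VLAN, and port names are made up for illustration):

switch1# show mac address-table address 0011.2233.4455
  Vlan  Mac Address     Type     Ports
  ----  --------------  -------  -----
  10    0011.2233.4455  DYNAMIC  Gi0/1   <- node1's active ALB NIC

switch2# show mac address-table address 0011.2233.4455
  Vlan  Mac Address     Type     Ports
  ----  --------------  -------  -----
  10    0011.2233.4455  DYNAMIC  Po1     <- reached only via the inter-switch trunk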
Gediminas
09-12-2012 02:18 PM
Re: MPIO paths
09-13-2012 02:16 PM
Re: MPIO paths
Ya know... I really missed the obvious there. Sorry about that.