StoreVirtual Storage

Re: iSCSI/MPIO/Load Balancing Question P4300

 
JoLa_3
Occasional Advisor

iSCSI/MPIO/Load Balancing Question P4300

Hi everybody,

we have the following system running:
(nearly same setup like the 'Scalable Rack HA' in http://h20195.www2.hp.com/v2/GetPDF.aspx/4AA0-0100EEW.pdf)
2x DL380 G6, each w/ 2 NC364T quad-port Gbit Ethernet NICs
2x P4300 w/ 10GbE option
2x HP ProCurve 2910al w/ 2-port CX4 modules,
divided across two sites.
At the moment the sites are connected with 1 Gbit fibre.
The LeftHands are connected to the 2910al with an Active/Passive bond consisting of one 10GbE CX4 port and one onboard Gbit port.
On the servers I installed the LeftHand MPIO DSM and configured six ports in the ProLiant Network Configuration Utility as a team using 802.3ad. On the 2910al I configured these six ports as a trunk group using LACP.
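For reference, the trunk on the 2910al was set up roughly like this (a sketch; the port range 1-6 and the trunk name trk1 are examples, your numbering will differ):

```
trunk 1-6 trk1 lacp
show lacp
show trunks
```

VLAN membership is then assigned to trk1 rather than to the individual member ports.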
The hosts are running Windows Server 2008 R2 Datacenter in a Hyper-V Failover Cluster configuration.
At the moment the CSV (Cluster Shared Volume) is configured as a Full-Provisioned 2-Way Replication Volume with Availability.
I configured another volume the same way for test purposes (because one should not work with the CSV manually).
If I now run tests against the test volume, throughput is limited to 1 Gbit, although the bond shows a 6 Gbit/s connection speed in Task Manager and the Network Configuration Utility.
I disabled replication to exclude the site-to-site connection as a limitation, but nothing changed.
Is there a way to configure additional 'virtual' IP addresses on the LeftHands (like eth0:0 etc. on a standard Linux distribution)
to use multipathing the right way? If I understand multipathing correctly, each link needs to be on a different subnet. But in the current setup I could not configure different subnets with only one link from the LeftHands to the switch and six from the server to the switch. I tried to configure six single links with different IP addresses in the SAME subnet on the servers, but saw no performance improvement at all.
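To illustrate what I mean by eth0:0-style aliases (a hypothetical Linux example only, not SAN/iQ syntax; interface name and addresses are made up):

```shell
# Add a second IP on the same NIC as an alias (iproute2 syntax)
ip addr add 10.0.2.21/24 dev eth0 label eth0:0
# Legacy equivalent with the older net-tools syntax
ifconfig eth0:0 10.0.2.21 netmask 255.255.255.0 up
```

That is the kind of per-link addressing I would like to reproduce on the LeftHand side.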
From a performance and failover standpoint, would the best solution be to buy 10GbE cards for the servers and create an Active/Passive bond like on the LeftHands?
Thanks in advance,
Joerg
5 REPLIES
Darren Hutton_1
Occasional Advisor

Re: iSCSI/MPIO/Load Balancing Question P4300

Table 2 in "Building high-performance, highly available IP storage networks with HP SAN/iQ® Software" states that it will only operate at 1 gigabit - but this might just be an omission of 10GbE.

Both ALB and LACP state that they are not supported with mixed ports, but there's no mention of mixed ports in Active/Passive mode.

Have you tried removing the bond on the P4300 and trying with just the 10GbE?
Andrew Steel
Advisor

Re: iSCSI/MPIO/Load Balancing Question P4300

Hi Joerg,

It sounds like you tried this ("I tried to configure six single links with different IP addresses in the SAME subnet on the servers, but saw no performance improvement at all") but just in case you missed a step:

Just a quick note on the server NIC config - as far as I know the MPIO DSM does not support NIC teaming. So the config should be individual NICs on the same subnet with MPIO installed. Then configure the iSCSI connections (use iscsicpl on Server Core 2008 R2):
1. Find the target and click Connect.
2. Select "Enable multi-path" and click Advanced.
3. Specify the Microsoft iSCSI initiator, the source IP and the destination IP.
4. Repeat this for each source IP (i.e. for each of your dedicated iSCSI NICs).
You should see iSCSI connections from each source IP appear on the SAN with "DSM" beside them.
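If you're on Server Core, the discovery and verification side of this can be done from the command line (a rough sketch; 10.0.1.50 is a made-up SAN VIP, and the per-source-IP login itself is easiest to do once in the iscsicpl GUI as described above):

```shell
REM Register the SAN portal (example VIP) and list the discovered targets
iscsicli QAddTargetPortal 10.0.1.50
iscsicli ListTargets
REM After logging in once per source NIC, verify the sessions...
iscsicli SessionList
REM ...and confirm MPIO has claimed the disk with multiple paths
mpclaim -s -d
```

If everything is right, mpclaim should show one disk with as many paths as you have iSCSI NICs.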

Hope that helps, though I'm trying the same thing with a 4 node P4300 cluster and as soon as I add the volume to CSV it goes offline - works fine on a 2 node P4300 cluster - still trying to work that one out.

Cheers
Andrew Steel
Advisor

Re: iSCSI/MPIO/Load Balancing Question P4300

Hi Joerg,

Also just found this in one of the Lefthand manuals (not sure if it is current though):
The LeftHand Networks DSM for MPIO installer lays down the necessary Microsoft® MPIO software in order to connect to the SAN via MPIO. The only host server configuration the Administrator needs to be concerned with is if they would like to enable the LeftHand Networks DSM to utilize multiple network cards (non-bonded/teamed) in the host server connected to the SAN. This is used for fault-tolerance only (active/passive) and does not provide two simultaneous I/O paths to the SAN (active/active). This is done by checking the "Enable multi-path" check-box when logging into the iSCSI volume.
Note: Enabling multiple SAN network cards in the host server is the ONLY scenario in which Enable multi-path should be selected at volume login, is only applicable with SAN/iQ 6.6 or later, and if using the LeftHand DSM.
Note: In a Microsoft® cluster environment, selecting Enable multi-path at volume login on the first Microsoft cluster node may prevent the second cluster node from ever seeing the volume.
JoLa_3
Occasional Advisor

Re: iSCSI/MPIO/Load Balancing Question P4300

Hi Andrew,

thanks for your support.
I'm sorry for my late answer.
In the meantime things have changed a little:
we're going to buy two more 2910al switches with CX4 modules and Intel EXPX9502CX4 cards for the Hyper-V host servers. I think performance won't be a problem then. We're also looking forward to the P4000 Series G2 (SAN/iQ 8.5) release on March 29th.
We had problems with Data Protection Manager 2010 RC and the Lefthand VSS Writer. The Lefthand VSS Writer made backups with DPM impossible.
Hope this will be fixed, too.
Did you solve your problem with the 4 node cluster?
Cheers
David Biddle2
New Member

Re: iSCSI/MPIO/Load Balancing Question P4300

How did you make out with purchasing those 10GbE network cards for the host clusters? I think that would have helped, because iSCSI supposedly only works with two physical NICs when using MPIO. It sounds like you were originally trying to use the teamed NICs for the iSCSI traffic.