Array Setup and Networking

ISCSI switchport aggregation with MPIO?

 
SOLVED
rhcjmo136
Occasional Contributor

ISCSI switchport aggregation with MPIO?

I have a question regarding an architecture set up by a vendor that I am taking over support for. The current environment is VMware 5.5, with two dedicated NICs per server for iSCSI traffic to a switch stack that connects to a CS300.


Currently, they have set up the VMware iSCSI adapters per multipathing recommendations (port-bound VMKs with Round Robin); however, I am a bit confused about the potential impact of the network configuration. Everything follows iSCSI best practice (dedicated switches, a non-routed iSCSI VLAN, jumbo frames, etc.), but the pair of stacked switches is aggregating the links to the NICs on the servers.
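For reference, here is a rough sketch of what I understand the Round Robin path policy to be doing (assumed path names, not VMware's actual PSP code): each port-bound VMK/NIC pair is an independent iSCSI path, and the host alternates I/Os across them, so both NICs carry traffic without any switch-side bonding.

```python
from itertools import cycle

# Rough sketch of Round Robin multipathing (assumed names; not VMware's
# actual Path Selection Policy code). Each port-bound VMK/NIC pair is an
# independent iSCSI path, and the initiator rotates I/Os across them.
paths = cycle(["vmk1/vmnic1", "vmk2/vmnic2"])

for io_number in range(6):
    # Each I/O goes down the next path in turn, so both NICs are used
    # even though the switch sees two ordinary, un-bonded ports.
    print(f"I/O {io_number} -> {next(paths)}")
```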


Host egress traffic, from my understanding, shouldn't be affected, but return traffic from the Nimble would have to be processed by both NICs at least to Layer 2, and then one of them would have to drop the traffic since it isn't the intended destination MAC for the incoming frames, correct? Since we are using software iSCSI via VMware (see page 5 of Nimble's VMware guide), LACP or other link-bonding protocols seem to me to add unnecessary overhead, in both traffic and processing, for ingress traffic on the host's CPU and NICs. Am I correct or mistaken?

Reference:

(attached image: iscsi.jpg)

4 REPLIES
pvalenta125
New Member

Re: ISCSI switchport aggregation with MPIO?

Josh,

I can't say for sure about traffic being dropped. With LACP you'll typically only get the throughput of one NIC per TCP stream, and depending on the hashing algorithm used, you'll probably only ever see single-NIC speeds with your setup.
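To illustrate the hashing point, here's a toy model (not any switch vendor's real algorithm, and the addresses are made up): the hash is computed per flow, so every frame of a given iSCSI TCP session lands on the same member link.

```python
# Toy model of LACP-style flow hashing (NOT any vendor's real algorithm;
# the IPs below are made up). The hash is computed per flow, so every
# frame of one iSCSI TCP session maps to the same LAG member link.

def lag_member(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
               num_links: int = 2) -> int:
    """Pick a LAG member link from a simple L3/L4 hash of the flow tuple."""
    key = (int(src_ip.split(".")[-1]) ^ int(dst_ip.split(".")[-1])
           ^ src_port ^ dst_port)
    return key % num_links

# One iSCSI session from a vmk to the array's data IP on port 3260:
session = ("10.10.10.11", "10.10.10.50", 51234, 3260)
for _ in range(3):
    # Same tuple, same hash, same link -- aggregation buys this flow nothing.
    print("session -> link", lag_member(*session))
```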

I have my environment set up with MPIO and no LACP, following Dell networking practices (using two Force10 switches that are VLT'd). Having LACP or bonded links adds extra config overhead in my view. I'd suggest removing the aggregation, giving NIC 1 and NIC 2 each their own IP, and configuring software iSCSI per page 5 like you pointed out.

Phil

rhcjmo136
Occasional Contributor

Re: ISCSI switchport aggregation with MPIO?

The links are bonded with HP's trunk protocol. VMK1 and VMK2 have their own IPs and are bound to both NICs, each with a different primary vmnic and the other as failover. They are not teamed on the VMware side. I'm not an expert on how VMware handles traffic at the NIC level, so I was hoping for some confirmation of my suspicion that nothing is gained by aggregating the links at the switch level to the hosts, and that it may in fact reduce performance.

(attached image: iscsi2.jpg)

pvalenta125
New Member

Re: ISCSI switchport aggregation with MPIO?

I would say there's nothing gained by aggregating switch to host.

chris24
Respected Contributor
Solution

Re: ISCSI switchport aggregation with MPIO?

Hello,

If both of the NICs are bound to the software iSCSI initiator, that means you are running a flat subnet across both of the VMKs. The interconnect / ISL between your two stacked switches will be a source of contention. Why? Switches have a finite buffer cache (depending on the switch, typically on the order of 9 MB shared between all ports), and if you are using LACP as the interconnect between your stacked switches, those ports will consume some of the total cache. You want to eliminate traffic across your ISL; ideally, when using a single subnet, one should use the bisect / even-odd option (Administration > Networking > Subnet) to prevent traffic from crossing the ISL.
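Roughly, the even/odd idea looks like this (a simplified sketch with made-up addresses; NimbleOS handles the actual connection placement): interfaces are paired by the parity of their last octet so each iSCSI connection stays on a single switch and never crosses the ISL.

```python
# Simplified sketch of the even/odd (bisect) idea, with made-up addresses:
# interfaces with even last octets sit on switch A, odd ones on switch B.
# Pairing even-with-even and odd-with-odd keeps each iSCSI connection on
# one switch, so no data traffic has to cross the ISL between the stack.

def switch_for(ip: str) -> str:
    """Assign an interface to a switch by the parity of its last octet."""
    return "switch-A" if int(ip.split(".")[-1]) % 2 == 0 else "switch-B"

host_vmks = ["10.10.10.10", "10.10.10.11"]   # vmk1 (even), vmk2 (odd)
array_ips = ["10.10.10.50", "10.10.10.51"]   # array data interfaces

for vmk in host_vmks:
    for target in array_ips:
        same_switch = switch_for(vmk) == switch_for(target)
        verdict = "use" if same_switch else "skip (would cross ISL)"
        print(f"{vmk} -> {target}: {verdict}")
```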

Many thanks,

Chris