joshik67
New Member

To vPC peer or NOT vPC peer Nexus for UCS/Nimble SmartStack

The Nimble KB doc says to vPC-peer the Nexus switches, but a Nimble SE I worked with recently recommended not vPC-peering them, to avoid iSCSI traffic crossing the peer link.

The scenario would be as follows:

Dual iSCSI subnets/VLANs (best practice according to the Nimble SE): one for FI-A and one for FI-B

Both of those VLANs would, of course, reside on both Nexus switches due to vPC requirements.

Nimble TG-1 (iSCSI-A) connected to Nexus 1, TG-2 (iSCSI-B) connected to Nexus 2 (not vPC'd)

FIs vPC'd to each Nexus

In my picture below, the yellow lines represent iSCSI-A traffic coming from FI-A (a rough config sketch of this layout follows the picture). Any thoughts?

[Attached diagram: nimb.jpg]
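For reference, here's a minimal NX-OS sketch of what I have in mind on Nexus 1 (VLAN IDs, interfaces, and keepalive addresses are placeholders; Nexus 2 would mirror it with the Nimble TG-2 ports in the iSCSI-B VLAN):

feature vpc
feature lacp

vlan 101
  name iSCSI-A
vlan 102
  name iSCSI-B

vpc domain 10
  peer-keepalive destination 192.168.1.2 source 192.168.1.1

! vPC peer link to Nexus 2
interface port-channel1
  switchport mode trunk
  vpc peer-link

! vPC member port-channel down to FI-A (FI-B gets a matching vPC)
interface port-channel11
  switchport mode trunk
  switchport trunk allowed vlan 101-102
  vpc 11

! Nimble TG-1 ports (iSCSI-A) land on Nexus 1 only: plain access ports, not a vPC
interface Ethernet1/10
  switchport mode access
  switchport access vlan 101
  no shutdown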

ageiger3171
Occasional Advisor

Re: To vPC peer or NOT vPC peer Nexus for UCS/Nimble SmartStack

It depends on where you want to have the Nimble connected. We opted to connect it to appliance ports on the FIs. For upstream communication to non-UCS hosts, we created appliance port uplink port-channels.

Going the appliance port route lets you make sure you have an A/B storage path config. If all of your storage consumers are on the UCS, it's even more efficient, since the iSCSI traffic never has to leave the fabric interconnects. The only downside would be burning FI port licenses.
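If it helps, creating an appliance port from the UCS Manager CLI is roughly this (going from memory of the UCSM CLI guide, so double-check it; slot/port numbers are placeholders, and the appliance VLANs, port-channel, and QoS class are configured separately):

UCS-A# scope eth-storage
UCS-A /eth-storage # scope fabric a
UCS-A /eth-storage/fabric # create interface 1 21
UCS-A /eth-storage/fabric/interface* # commit-buffer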

joshik67
New Member

Re: To vPC peer or NOT vPC peer Nexus for UCS/Nimble SmartStack

The Nimble is connected to the two Nexus switches. Our company best practice is not to attach storage to the FIs.

The Nimble KB doc shows both methods: FI appliance ports, or a 10G switch, which in their example is a Nexus vPC pair.

In the Nexus vPC pair example, they have the Nexus switches paired as a vPC, as I mentioned in my original post. That would still cause traffic to cross the vPC peer link.

ageiger3171
Occasional Advisor
Solution

Re: To vPC peer or NOT vPC peer Nexus for UCS/Nimble SmartStack

You could try setting up a second set of uplink port-channels from the FIs and using VLAN pinning to assign the storage traffic to those uplinks.
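On the Nexus side, that dedicated storage uplink would just be another port-channel carrying only the iSCSI VLANs, something like this (port-channel number, interface, and VLAN IDs are placeholders; the VLAN-to-uplink assignment on the FI side is done in UCS Manager):

! Nexus 1 - storage-only uplink port-channel to FI-A
interface port-channel21
  switchport mode trunk
  switchport trunk allowed vlan 101-102

interface Ethernet1/21
  switchport mode trunk
  switchport trunk allowed vlan 101-102
  channel-group 21 mode active
  no shutdown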

On our initial Nimble implementation we were using a single subnet (back in the 1.4 days), connecting to a Nexus 5K, and having the storage traffic ride over the FI uplinks with all the other traffic. We had a chance to do a ground-up redesign over the summer and went with appliance ports and two data VLANs in the new implementation. I now feel comfortable using jumbo frames in the fabric since I only have the FIs, hosts, and array to worry about.
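In case it's useful, jumbo frames on a Nexus 5K are enabled with a system-wide network-qos policy rather than a per-interface MTU; a minimal example (the matching MTU also has to be set in the UCS QoS system class and on the hosts and array):

policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9216
system qos
  service-policy type network-qos jumbo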