Array Setup and Networking

mikecarpendale125
New Member

iSCSI Set up on N9K with ACI - vPC

Has anyone set up iSCSI off an N9K ACI core switch? I'm having issues when trying to use vPCs.

I managed to get connectivity to the Nimble array without any vPCs on the N9Ks

I used Port 11 on the FI and port 43 on the N9K Leaf01 – as documented below

[Diagram: single connection]

As soon as you configure any vPCs, traffic stops

Here is how we have it configured. BTW, what I cannot see is how to set the Nimble virtual interfaces to LACP-Active mode.

And the virtual interface setup looks odd; I would expect the IP address to float across both adapters.

[Diagram: vPC setup]

Cheers

Mike

alex_goltz
Advisor

Re: iSCSI Set up on N9K with ACI - vPC

Hi Mike,

I'm curious if there is newer documentation out there telling people to use vPCs on interfaces going to the Nimble tg ports. Can you post the documentation you're using to do this? The reason I ask is that I think you should only be using vPC for VLANs carrying VM network traffic, not iSCSI VLANs. If you create a vPC domain between your 9Ks (unless things have changed), do NOT allow your two iSCSI VLANs to traverse that vPC. Use only a local port-channel (LACP) on each switch for its respective interfaces.
Can someone from Nimble weigh in on this?
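For standalone (non-ACI) 9Ks, the design described above might look roughly like the sketch below. This is just an illustration: interface, port-channel, and VLAN numbers are made up, it assumes the attached device actually supports LACP, and in an ACI fabric this would be driven through APIC access policies rather than the NX-OS CLI.

```
! Hypothetical standalone NX-OS sketch - numbers are assumptions
feature lacp

vlan 101
  name iSCSI-a

! Local port-channel on this switch only - kept out of the vPC domain
interface port-channel43
  switchport mode access
  switchport access vlan 101
  mtu 9216

interface Ethernet1/43
  switchport mode access
  switchport access vlan 101
  mtu 9216
  channel-group 43 mode active
```

The key point is that the iSCSI VLAN stays local to each switch and never rides the vPC, so each fabric (iSCSI-a, iSCSI-b) remains an independent path.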

mikecarpendale125
New Member

Re: iSCSI Set up on N9K with ACI - vPC

Hi Alex, thanks for the reply,

We looked into this a bit more yesterday - in short, no vPC to the Nimble array. I'll post a detailed reply shortly.

With respect to your comments - are you suggesting connectivity like this?

[Diagram: ACI port channel]

Cheers

Mike

mikecarpendale125
New Member

Re: iSCSI Set up on N9K with ACI - vPC

Yesterday we tinkered a bit more with the setup and also had a reply from Nimble Support.

I can see from the email below that you're able to attach the Nimble iSCSI array to port 43 on the N9K Leaf01, which is a supported design. However, when you said you wanted to configure the Nimble virtual interfaces to LACP-Active mode, this would not work, since Nimble does not support LACP. That's the reason the Nimble cannot be attached to Cisco ACI as a vPC-attached node.

We've removed the vPC on the Nimble side and managed to get traffic flowing on iSCSI-a and iSCSI-b.
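On the ESXi side, the two-fabric layout above usually pairs with iSCSI port binding: one vmkernel port per fabric, each bound to the software iSCSI adapter. A minimal sketch, assuming vmhba33 is the software iSCSI adapter and vmk2/vmk3 sit on iSCSI-a and iSCSI-b respectively (those names are assumptions, not from this setup):

```
# Bind one vmkernel port per iSCSI fabric to the software iSCSI adapter
esxcli iscsi networkportal add -A vmhba33 -n vmk2   # iSCSI-a
esxcli iscsi networkportal add -A vmhba33 -n vmk3   # iSCSI-b

# Confirm both bindings are present
esxcli iscsi networkportal list -A vmhba33
```

With both vmkernel ports bound, the host multipaths across the two fabrics without any port-channel on the array side.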

Essentially here is the config as it stands now

ACI - No contracts - working

or ACI with Contracts - will consider a move to this later

[Diagram: Connectivity]

[Diagram: Traffic Path]

I believe under normal circumstances traffic should flow as below, i.e. not traverse the vPC peer links (in this case the Spines). I'm in the process of trying to prove this.

Results

~ # vmkping -s 8972 -I vmk2 10.10.10.90

PING 10.10.10.90 (10.10.10.90): 8972 data bytes

8980 bytes from 10.10.10.90: icmp_seq=0 ttl=64 time=0.296 ms

8980 bytes from 10.10.10.90: icmp_seq=1 ttl=64 time=0.233 ms

8980 bytes from 10.10.10.90: icmp_seq=2 ttl=64 time=0.233 ms

~ # vmkping -s 8972 -I vmk2 10.10.10.91

PING 10.10.10.91 (10.10.10.91): 8972 data bytes

8980 bytes from 10.10.10.91: icmp_seq=0 ttl=64 time=0.229 ms

8980 bytes from 10.10.10.91: icmp_seq=1 ttl=64 time=0.241 ms

8980 bytes from 10.10.10.91: icmp_seq=2 ttl=64 time=0.265 ms

  

Traffic will/should traverse the vPC peer links (in this case the Spines) if there is an outage, as below.

Results

~ # vmkping -s 8972 -I vmk2 10.10.10.90

PING 10.10.10.90 (10.10.10.90): 8972 data bytes

8980 bytes from 10.10.10.90: icmp_seq=0 ttl=64 time=0.298 ms

8980 bytes from 10.10.10.90: icmp_seq=1 ttl=64 time=0.234 ms

8980 bytes from 10.10.10.90: icmp_seq=2 ttl=64 time=0.240 ms

~ # vmkping -s 8972 -I vmk2 10.10.10.91

PING 10.10.10.91 (10.10.10.91): 8972 data bytes

8980 bytes from 10.10.10.91: icmp_seq=0 ttl=64 time=0.272 ms

8980 bytes from 10.10.10.91: icmp_seq=1 ttl=64 time=0.244 ms

8980 bytes from 10.10.10.91: icmp_seq=2 ttl=64 time=0.225 ms

Looking at the results for 10.10.10.90 above, even if traffic did traverse the Spine, the added delay is not evident - hard to say what it might be under load.
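One more check that can help prove the path is clean end to end: repeat the jumbo-frame test with the don't-fragment bit set (-d on vmkping), so any hop with an MTU below the jumbo size fails loudly instead of silently fragmenting. For example (vmk and target IP as in the tests above):

```
# -d sets the DF bit; 8972 payload + 28 bytes of headers = 9000-byte frame
~ # vmkping -d -s 8972 -I vmk2 10.10.10.90
```

If this succeeds at 8972 bytes but the plain test showed odd latency, MTU can be ruled out as the cause.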

Keen to hear thoughts and ideas!

Cheers

Mike

alex_goltz
Advisor

Re: iSCSI Set up on N9K with ACI - vPC

Did Nimble support give you any help on this?  You are correct on the non-LACP.  I was wrong.

I think this discussion needs attention from the SmartStack group, if they want customers to have a flawless configuration.

Maybe you can help me understand your diagram, or the design you have. Why are you tagging two different VLANs on one Nimble interface?