BladeSystem - General

High Number of Discards on VirtualConnect Flex10 shared uplinks

SOLVED

Here is the setup.
C3000 enclosure with two Flex10 interconnect modules in Bay1 and Bay2.
X1 & X2 of Bay1 and X5 & X6 of Bay2 are all part of a Shared Uplink Set. X1 & X2 connect to switch1 as an LACP trunk (currently the standby set); X5 & X6 connect to switch2, also as an LACP trunk. Both switch1 and switch2 are ProCurve 6600s. All connections are 10G using direct attach cables. There is also a 10G link between the two switches (spanning tree is in use).
This has been deployed for several months and everything seems to be working fine. However, I'm evaluating SolarWinds Orion NPM and added these switches and Virtual Connect modules for monitoring; after 24 hours I'm seeing a ton of discards on the interfaces (see attached).

Again, everything seems to be working, but I'm trying to understand what might be going on.
4 REPLIES
Antonio Milanese
Trusted Contributor

Re: High Number of Discards on VirtualConnect Flex10 shared uplinks

Hello Margius,

Is flow control enabled on the 6600s?
Are the port personalities set to "auto" or to 10GbFD?

I suggest you verify the "real" statistics on the 6600 ports and on the Flex modules.

on 6600:

show lacp
show int x

where x are the uplinks to the Flex modules

on VCNET

show statistics enc0:1:x1
show statistics enc0:1:x2
show statistics enc0:2:x5
show statistics enc0:2:x6

Your discards might be caused by a flapping LACP channel, but it could be anything, so we need some more stats.

best regards,

Antonio

Re: High Number of Discards on VirtualConnect Flex10 shared uplinks

Flow control is not enabled, and I didn't make any changes to the port personality, so I believe it is still on auto.

Attached are the outputs from both switches and the vcnet running the commands suggested.

One point of clarification from my original post. X1 & X2 of Bay1 connect to Switch2 and X5 & X6 of Bay2 connect to Switch1.
Antonio Milanese
Trusted Contributor
Solution

Re: High Number of Discards on VirtualConnect Flex10 shared uplinks

Hello Margius,

Reading your logs, my guess is that this is behavior related to the way broadcast/multicast traffic is received and forwarded on the two sides of the LACP channel: the discards, mostly inbound, are on the VCnet side, and they seem to increase in sync with inbound mcast/bcast traffic. (By the way, do you have any multicast applications? There is a lot of mcast traffic on your switches. Are you using IGMP?)

ProCurve and VCnet both spread unicast traffic similarly, hashing on SA/DA MAC pairs plus L3 (IP src/dst) bits if present; VCnet, if I recall correctly, can use L4 (TCP/UDP) ports too. With mcast/bcast, in theory only one designated port should be used for forwarding, but the 6600 seems to be doing round robin, so VCnet discards the frames that arrive on the port that is not the designated one for that conversation.
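To make the idea above concrete, here is an illustrative sketch only: a toy model of hash-based egress-link selection on a 2-port LACP channel versus round-robin distribution. The real ProCurve/VCnet hash algorithms are not public; the XOR-of-low-bytes scheme below is an assumption used purely to show why round-robin on one side produces discards on the other.

```python
# Toy model (NOT the vendors' actual algorithm): hash-based link selection
# versus round-robin on a 2-port LACP channel.

def hash_select(src_mac: str, dst_mac: str, n_links: int = 2) -> int:
    """Pick an egress link deterministically from the SA/DA MAC pair."""
    src_low = int(src_mac.replace(":", ""), 16) & 0xFF
    dst_low = int(dst_mac.replace(":", ""), 16) & 0xFF
    return (src_low ^ dst_low) % n_links

def round_robin(frame_idx: int, n_links: int = 2) -> int:
    """Spread frames across links regardless of conversation."""
    return frame_idx % n_links

# A hashed conversation (same SA/DA pair) always lands on the same link:
designated = hash_select("00:1b:78:aa:00:01", "01:00:5e:7f:00:01")

# Round-robin spreads the same multicast stream over both links, so the
# receiving side sees half the frames arrive on a link it does not
# consider designated for that conversation, and counts them as discards.
discards = sum(1 for i in range(100) if round_robin(i) != designated)
print(designated, discards)  # -> 0 50
```

This matches the symptom in the thread: traffic still flows (the frames that land on the designated link are forwarded), but roughly half the mcast/bcast frames show up as input discards on the other member of the channel.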

This is just my guess, but the counters indicate neither oversubscription (high rx/tx buffer drops) nor media/port problems.
Enabling flow control is probably irrelevant here.

To confirm this, try clearing the statistics and observing whether the discards and the mcast counters grow in sync, or, if you can, shut down one interface per LACP channel.
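A quick way to run that check after clearing the counters: take a few samples of the input-discard and inbound-multicast counts (copied by hand from "show statistics" output or SNMP polls) and compare the deltas. A minimal sketch, with invented sample numbers:

```python
# Sketch of the "do discards track multicast?" check. The sample counter
# values below are invented for illustration.

def deltas(samples):
    """Per-interval growth between successive counter samples."""
    return [b - a for a, b in zip(samples, samples[1:])]

def tracks(discards, mcast, tolerance=0.2):
    """True if every discard delta is within `tolerance` of the mcast delta."""
    d, m = deltas(discards), deltas(mcast)
    return all(mi > 0 and abs(di - mi) / mi <= tolerance
               for di, mi in zip(d, m))

discard_samples = [0, 480, 950, 1440]   # invented ifInDiscards polls
mcast_samples   = [0, 500, 1000, 1500]  # invented inbound mcast polls
print(tracks(discard_samples, mcast_samples))  # -> True: discards follow mcast
```

If the deltas track each other like this, it supports the round-robin-multicast theory; if the discards keep growing while mcast is flat, something else is dropping frames.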

Hope this helps. Regards,

Antonio

Re: High Number of Discards on VirtualConnect Flex10 shared uplinks

Actually, multicast makes sense. There is a good amount of it on the network because of a multicast Network Load Balancing cluster of terminal servers.

Thanks for looking into this and giving me some additional insight.