HPE Aruba Networking & ProVision-based

Default switching policy (throughput) without QoS

 
HEKnet
Advisor

Default switching policy (throughput) without QoS

Hello,

we have a 5412zl. Let's assume a very simple setup with 5 downlinks and 1 uplink (the numbers are only examples). I would like to know how the switch behaves and distributes the available bandwidth. To this end, assume that all ports are running at maximum load and that no other bottlenecks exist. I know it is an academic question. For example, assume that Ethernet packets are equally sized and are artificially created (for example by the tool iperf).

 

Case 1: The uplink is 1GBit/s, and the 5 downlinks are 1GBit/s, too. I expect that the switch services all ingress buffers in some kind of round-robin policy and forwards packets fairly. Hence, I would expect a net rate of 200MBit/s for each downlink.
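For what it's worth, the "fair round-robin" expectation corresponds to a max-min fair allocation. Here is a minimal Python sketch (my own illustration, not anything the switch documents) of that allocation:

```python
def fair_share(uplink_mbit, downlink_caps):
    """Max-min fair allocation of uplink capacity among downlink ports.

    downlink_caps maps port name -> that port's own line rate (MBit/s).
    """
    shares = {}
    active = dict(downlink_caps)      # ports still competing for bandwidth
    remaining = float(uplink_mbit)
    while active:
        equal = remaining / len(active)
        # Ports whose own line rate is below the equal share are capped there.
        capped = {p: c for p, c in active.items() if c <= equal}
        if not capped:
            for p in active:
                shares[p] = equal
            break
        for p, c in capped.items():
            shares[p] = c
            remaining -= c
            del active[p]
    return shares

# Case 1: five 1GBit/s downlinks into a 1GBit/s uplink -> 200 MBit/s each.
print(fair_share(1000, {f"d{i}": 1000 for i in range(5)}))
```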

 

Case 2: The uplink is 1GBit/s again, but only 1 downlink is 1GBit/s; the other 4 are only 100MBit/s. What happens now?

 

Assumption 1: The 4 Fast Ethernet links get 100MBit/s net rate each, and the Gigabit Ethernet link gets the remaining 600MBit/s.

 

Assumption 2: The available total of 1000MBit/s is distributed according to a 10:1:1:1:1 ratio. This means the Gigabit interface gets about 714MBit/s and the 4 Fast Ethernet links get about 71MBit/s each.
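The two assumptions can be compared with a quick back-of-envelope sketch (pure arithmetic, hypothetical port names). The proportional split works out to roughly 714.3 and 71.4 MBit/s, the max-min split to 600 and 100:

```python
# Link speeds in MBit/s; "gig"/"fe*" are made-up labels for illustration.
caps = {"gig": 1000, "fe1": 100, "fe2": 100, "fe3": 100, "fe4": 100}
uplink = 1000

# Assumption 2: split in proportion to link speed (10:1:1:1:1).
total = sum(caps.values())
prop = {p: round(uplink * c / total, 1) for p, c in caps.items()}
print(prop)   # gig -> 714.3, each FE -> 71.4

# Assumption 1: max-min fairness. The equal share would be 200 MBit/s, but the
# FE ports can only source 100 MBit/s, so they are capped at their line rate
# and the unused 4 x 100 MBit/s goes to the gigabit port.
fair = {p: min(c, 100) for p, c in caps.items() if p != "gig"}
fair["gig"] = uplink - sum(fair.values())   # 600 MBit/s
print(fair)
```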

 

What is correct? Thank you, Matthias

2 REPLIES
Vince-Whirlwind
Honored Contributor

Re: Default switching policy (throughput) without QoS

The port receiving the incoming traffic from your uplink doesn't care which port the traffic will be forwarded out of, so the destination is irrelevant to the packet drops that occur when your 1Gb uplink itself becomes congested with incoming traffic.

Apachez-
Trusted Contributor

Re: Default switching policy (throughput) without QoS

Sounds odd.

Assuming your uplink is 1x1Gbps and your downlinks are 5x1Gbps, in case of full upload from all 5 downlink clients, each client's throughput will drop to roughly 20% (about 200Mbps each), since 5Gbps of offered load has to squeeze into 1Gbps.

This is because the switch looks at the destination MAC to know which port the frame should be output on.

TCP will automagically adjust to this, while UDP traffic will experience the drops more dramatically.

Incoming traffic (assuming your uplink is towards the internet) will most likely not experience any drops, because the 1Gbps of incoming traffic suddenly has 5x1Gbps of downlink capacity to be sent to.

Also note that a small microburst will at first be absorbed by the port buffers available on your switch (so no drops will occur).
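To put a rough number on that, here is a back-of-envelope sketch (the buffer size is an assumed example, not a 5412zl spec) of how long a port buffer can absorb such a burst before drops start:

```python
# Assumed figures for illustration only -- check your switch's datasheet.
buffer_bytes = 2 * 1024 * 1024        # 2 MiB of buffer available to the port
ingress_bps  = 5 * 1_000_000_000      # five downlinks bursting at 1 Gbit/s
egress_bps   = 1_000_000_000          # one 1 Gbit/s uplink draining the buffer

# The buffer fills at the rate by which ingress exceeds egress.
overload_bytes_per_s = (ingress_bps - egress_bps) / 8
absorb_ms = buffer_bytes / overload_bytes_per_s * 1000
print(f"buffer absorbs ~{absorb_ms:.1f} ms of burst")   # ~4.2 ms
```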

This page has a summary of port buffer sizes (and if they are dedicated or shared) among common vendors and models: http://people.ucsc.edu/~warner/buffer.html

http://www.h3c.com/portal/products___solutions/technology/lan/troubleshooting/200812/623016_57_0.htm is also a good page regarding what you can look for in various counters and what these values most likely mean:

* all: All fault types
* bad-driver: Too many undersized/giant packets
* bad-transceiver: Excessive jabbering
* bad-cable: Excessive CRC/alignment errors
* too-long-cable: Excessive late collisions
* over-bandwidth: High collision or drop rate
* broadcast-storm: Excessive broadcasts
* duplex-mismatch-HDx: Duplex mismatch. Reconfigure to Full Duplex
* duplex-mismatch-FDx: Duplex mismatch. Reconfigure port to Auto
* link-flap: Rapid detection of link faults and recoveries
* loss-of-link: Link loss detected. (Sensitivity not applicable)

That is, if you have a counter increasing for CRC/alignment errors, that's most likely due to a bad cable.

Also note that some vendors and models (I don't know if this is true for the 5412zl you are asking about) will have various issues with peak performance once you enable QoS (even if the traffic itself doesn't have to get reordered when passing through your device).