Community Home > Networking > Switching and Routing > HPE Aruba Networking & ProVision-based > Default switching policy (throughput) without QoS
07-31-2014 05:50 AM
Default switching policy (throughput) without QoS
Hello,
we have a 5412zl. Let's assume a very simple setup with 5 downlinks and 1 uplink (the numbers are only examples). I would like to know how the switch behaves and distributes the available bandwidth. To this end, assume that all ports are running at maximum load and that no other bottlenecks exist. I know it is an academic question. For example, assume that the Ethernet packets are equally sized and artificially generated (for example by the tool iperf).
Case 1: The uplink is 1 GBit/s and the 5 downlinks are 1 GBit/s, too. I expect that the switch services all ingress buffers in some kind of round-robin policy and forwards packets fairly. Hence, I would expect a net rate of 200 MBit/s for each downlink.
Case 2: The uplink is 1 GBit/s again, but only 1 downlink is 1 GBit/s; the other 4 are only 100 MBit/s. What happens now?
Assumption 1: The 4 Fast Ethernet links get a net rate of 100 MBit/s each, and the Gigabit Ethernet link gets the remaining 600 MBit/s.
Assumption 2: The available total of 1000 MBit/s is distributed according to a 10:1:1:1:1 ratio. This means the Gigabit interface gets about 714 MBit/s and the 4 Fast Ethernet links get about 71 MBit/s each.
Which is correct? Thank you, Matthias
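The two assumptions can be written down as small numerical models (a toy sketch of the two candidate sharing policies described above, not anything documented by HPE):

```python
def capped_fair_share(total, caps):
    """Assumption 1: max-min fair sharing -- ports whose link speed is
    at or below the equal share keep their full link speed; the leftover
    bandwidth is split among the faster ports."""
    shares = [0.0] * len(caps)
    remaining = list(range(len(caps)))
    budget = float(total)
    while remaining:
        equal = budget / len(remaining)
        capped = [i for i in remaining if caps[i] <= equal]
        if not capped:
            # No port is limited by its link speed: split evenly and stop.
            for i in remaining:
                shares[i] = equal
            break
        for i in capped:
            shares[i] = float(caps[i])
            budget -= caps[i]
            remaining.remove(i)
    return shares

def proportional_share(total, caps):
    """Assumption 2: bandwidth split in proportion to link speed (10:1:1:1:1)."""
    weight = sum(caps)
    return [total * c / weight for c in caps]

caps = [1000, 100, 100, 100, 100]  # Case 2: one GigE and four FastE downlinks
print(capped_fair_share(1000, caps))   # [600.0, 100.0, 100.0, 100.0, 100.0]
print(proportional_share(1000, caps))  # GigE ~714.3, each FastE ~71.4
```

For Case 1 (five equal 1 GBit/s downlinks) both models agree and give 200 MBit/s per port; the two assumptions only diverge when the downlink speeds differ.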
07-31-2014 04:32 PM
Re: Default switching policy (throughput) without QoS
The switch doesn't care which egress port the incoming traffic on your uplink will be forwarded to, so the destination is irrelevant: packets are dropped simply because your 1 Gb uplink is congested with more incoming traffic than it can carry.
08-02-2014 08:27 AM
Re: Default switching policy (throughput) without QoS
Assuming your uplink is 1x1 Gbps and your downlinks are 5x1 Gbps, in case of full upload from all 5 downlink clients, each client will only get about 200 Mbps through the uplink, i.e. roughly 80% of its offered traffic is dropped.
This is because the switch looks at the destination MAC to know which port a frame should be output on, and all five flows contend for the same 1 Gbps egress port.
TCP will automagically adjust to this, while UDP will experience the drops more dramatically.
Incoming traffic (assuming your uplink is towards the internet) will most likely not experience any drops, because the 1 Gbps of incoming traffic suddenly has 5x1 Gbps of downlink capacity to be sent to.
Also note that a small microburst will at first be absorbed by the port buffers available on your switch (so no drops will occur).
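The microburst point can be illustrated with a toy model (my own sketch; the byte counts are made up and are not the 5412zl's actual buffer sizes):

```python
def burst_drops(burst_bytes, drain_bytes, buffer_bytes):
    """Bytes dropped when `burst_bytes` arrive for a port that can only
    transmit `drain_bytes` during the burst; the excess is queued in the
    port buffer, and drops only start once that buffer is exhausted."""
    excess = max(0, burst_bytes - drain_bytes)
    return max(0, excess - buffer_bytes)

# A 512 KB burst into a port that drains 128 KB during the burst,
# with a 1 MB buffer: the buffer absorbs the excess, so no drops.
print(burst_drops(512_000, 128_000, 1_000_000))    # 0
# A sustained 2 MB burst overflows the same buffer and starts dropping.
print(burst_drops(2_000_000, 128_000, 1_000_000))  # 872000
```

Whether the buffer is dedicated per port or shared across ports changes `buffer_bytes` in practice, which is why the per-model buffer table linked below is worth checking.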
This page has a summary of port buffer sizes (and if they are dedicated or shared) among common vendors and models: http://people.ucsc.edu/~warner/buffer.html
http://www.h3c.com/portal/products___solutions/technology/lan/troubleshooting/200812/623016_57_0.htm is also a good page on what to look for in various counters and what those values most likely mean:
* all: All fault types
* bad-driver: Too many undersized/giant packets
* bad-transceiver: Excessive jabbering
* bad-cable: Excessive CRC/alignment errors
* too-long-cable: Excessive late collisions
* over-bandwidth: High collision or drop rate
* broadcast-storm: Excessive broadcasts
* duplex-mismatch-HDx: Duplex mismatch. Reconfigure to Full Duplex
* duplex-mismatch-FDx: Duplex mismatch. Reconfigure port to Auto
* link-flap: Rapid detection of link faults and recoveries
* loss-of-link: Link loss detected. (Sensitivity not applicable)
That is, if you see a counter increasing for CRC/alignment errors, that's most likely due to a bad cable.
Also note that some vendors and models (I don't know if this is true for the 5412zl you are asking about) can have various issues with peak performance once you enable QoS (even if the traffic itself doesn't have to be reordered when passing through your device).