BladeSystem Virtual Connect

FlowControl discussion

 
chuckk281
Trusted Contributor

FlowControl discussion

Joey and Richard had a good "FlowControl" discussion that I thought you might like.

 

**************

Richard: Not bad overall.  I would pick a few nits, though.

 

> From Joey: 

 

> Traditionally, Ethernet was designed with a completely shared

> collision domain in mind . . and retransmission was the solution.

> In the days of repeaters, an Ethernet end point (MAC address) would

> attempt to transmit to another end point, but if another end point was

> attempting to transmit on the same 10BaseT/10Base5/10Base2 conduit at

> the same time, then an electrical collision would occur that created

> garbage . . and each would wait a random quantum before trying again. 

> Ultimately, the onus was on upper level protocols to accommodate the

> delay and/or retransmit if necessary . . .

 

Richard:  I'm not sure I would say that a collision created garbage.  I would put it as "the collision would be noticed by both end points.  They would both back-off a random length of time before trying again.  If an end point collided enough times in a row for the same packet it would give up, and rely on higher level protocols to retransmit as necessary."
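Richard's corrected description (notice the collision, back off a random length of time, give up after repeated collisions and leave recovery to higher layers) can be sketched in a few lines. This is an illustrative Python model of 802.3 truncated binary exponential backoff, not any real MAC implementation; the `collides` callback is hypothetical.

```python
import random

SLOT_TIME_US = 51.2   # slot time for 10 Mb/s Ethernet (512 bit times)
MAX_ATTEMPTS = 16     # after 16 collisions in a row the MAC gives up
BACKOFF_LIMIT = 10    # the backoff exponent is capped at 10

def backoff_delay_us(attempt):
    """Random delay after the Nth collision: k slot times, k in [0, 2^min(N,10))."""
    k = random.randint(0, 2 ** min(attempt, BACKOFF_LIMIT) - 1)
    return k * SLOT_TIME_US

def transmit(collides):
    """collides(attempt) -> True if that attempt collides (hypothetical callback)."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        if not collides(attempt):
            return True                    # frame went out on the wire
        _ = backoff_delay_us(attempt)      # a real MAC would wait this long
    return False                           # excessive collisions: give up and
                                           # rely on upper layers to retransmit
```

The cap at 16 attempts is what makes higher-layer retransmission (e.g. TCP's) the backstop, exactly as Richard describes.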

 

Joey: >When isolating collision domains became pervasive with switches, the

> probability that an end station could be overwhelmed with incoming

> Ethernet packets increased substantially.

 

Richard: Um, no.  It is really no more likely for an *end station* to be overwhelmed with switches in place than it was when everything was still "pure Ethernet" or CSMA/CD.  What is now possible with switches is for the egress side of a given switch port to become overwhelmed because two or more other switch ports, and the stations and/or switches connected to them, could send traffic at a rate which summed to more than the egress rate of that switch port.
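Richard's egress-congestion point is simple arithmetic; the port rates below are made-up numbers purely for illustration.

```python
# Two 10 Gb/s ingress ports both bursting toward the same 10 Gb/s egress port.
INGRESS_GBPS = [10.0, 10.0]   # hypothetical offered load per ingress port
EGRESS_GBPS = 10.0            # capacity of the congested egress port

offered = sum(INGRESS_GBPS)                # 20 Gb/s offered...
oversubscription = offered / EGRESS_GBPS   # ...into 10 Gb/s: 2x oversubscribed.
                                           # The switch must buffer, drop, or
                                           # assert flow control on the senders.
```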

 

Now, that doesn't mean that an end station might not wish to assert flow control, only that it isn't any more likely for the end station to become overwhelmed than it was before.

 

Joey: > Packets would be simply dropped and, again, the upper level protocols

> were burdened with retransmission and/or packet order correction.  The

> farther you move up the network stack, however, the more expensive it

> becomes from a latency perspective (at least from the standpoint of

> the application).  So Ethernet PAUSE frames ("flow

> control") were adopted as a means for the receiving end point to

> inform the transmitting end point that its buffers were full . . .

 

Richard:  And heaven forbid the IEEE ever say you have to use higher layers if they could come up with a way to do the same things down at Layer 2 and below :) All this pause stuff is, I presume, required for FCoE to function, since the IEEE has to recreate in Ethernet what Fibre Channel provides.
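For concreteness, here is a minimal sketch of what an 802.3x PAUSE frame looks like on the wire: destination is the reserved MAC-control multicast address, the EtherType is 0x8808, the opcode is 0x0001, and the pause time is a 16-bit count of 512-bit-time quanta. The source MAC below is an arbitrary example address.

```python
import struct

PAUSE_DST = bytes.fromhex("0180c2000001")  # reserved MAC-control multicast
MAC_CONTROL_ETHERTYPE = 0x8808
PAUSE_OPCODE = 0x0001

def build_pause_frame(src_mac: bytes, pause_quanta: int) -> bytes:
    """Build an 802.3x PAUSE frame; pause_quanta is in units of 512 bit times."""
    header = PAUSE_DST + src_mac + struct.pack("!H", MAC_CONTROL_ETHERTYPE)
    payload = struct.pack("!HH", PAUSE_OPCODE, pause_quanta)
    frame = header + payload
    # Pad to the 60-byte minimum (the 4-byte FCS is appended by hardware).
    return frame + b"\x00" * (60 - len(frame))
```

A receiver whose buffers are filling sends this with a nonzero quanta value; sending it again with `pause_quanta=0` tells the transmitter to resume.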

 

Joey:

 

> Thus, turning this feature OFF is not necessarily detrimental per the

> accommodating design of TCP/IP . . but network latency could be

> introduced from the perspective of the application if an end point

> buffers ever get full.  Of course, a big assumption is that the

> application/OS is using layer 3/4/5 protocols that can handle out of

> order packets, trigger retransmissions, etc. . . . like TCP :^)

 

Richard: Though if the pause frames were telling transmitters to pause "too long" it could actually speed things up to disable it.  It really depends quite heavily on the nature of the traffic being carried as to whether pauses or packet drops are worse for overall throughput.

 

**********

 

Any comments or questions?

1 REPLY
chuckk281
Trusted Contributor

Re: FlowControl discussion

Input from Mark:

 

*************

 

Adding to what others have said…

First of all, by default, VC does not enable flow control on uplinks and stacking links, only on server downlinks.  The CLI can change flow control to enabled or disabled on all ports.  There is no option to control RX/TX separately.

 

But remember that what enabling flow control means depends on the technology.  10G uplinks typically run at a fixed speed with no autonegotiation at all, so by default flow control is simply enabled on 10G uplinks.  1G does support autoneg, so if enabled via the CLI, VC enables *advertisement* of flow control on 1G uplinks.  Similarly for downlinks: 10G KR technology supports autonegotiation, so when we say "by default, VC enables flow control on downlinks", it is once again VC enabling *advertisement* of flow control on the downlinks.  Whether flow control actually ends up enabled depends on the outcome of the autonegotiation.

 

So in the CLI –

“Auto” is how VC is by default: only the downlinks advertise Tx/Rx flow control; uplinks/stacking links are off.

“On” is no change for the downlinks; they still advertise Tx/Rx. The uplinks, depending on the speed, may be “on” Tx/Rx or “advertise” Tx/Rx.

“Off” is no Tx/Rx anywhere.
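Mark's three CLI modes condense into a small lookup. The mode and port labels and the result strings below are informal summaries of his table, not actual VC CLI syntax or output; stacking-link behavior under “On” is assumed to match the uplinks, since the post only spells it out for “Auto”.

```python
def vc_flow_control(mode: str, port: str) -> str:
    """Summarize VC flow-control behavior per port type for each CLI mode.
    Labels are informal, not VC CLI syntax."""
    negotiated = "Tx/Rx on (fixed 10G) or advertised (1G autoneg)"
    table = {
        ("auto", "downlink"): "advertise Tx/Rx",
        ("auto", "uplink"):   "off",
        ("auto", "stacking"): "off",
        ("on",   "downlink"): "advertise Tx/Rx",
        ("on",   "uplink"):   negotiated,
        ("on",   "stacking"): negotiated,   # assumed same as uplinks
        ("off",  "downlink"): "off",
        ("off",  "uplink"):   "off",
        ("off",  "stacking"): "off",
    }
    return table[(mode, port)]
```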

 

******************