Switches, Hubs, and Modems

Poor Gigabit Performance on 4108gl

I'm using a 4108gl switch with 6 modules currently:
Card A - J4863A - 100/1000BaseTX
Card B - J4863A - 100/1000BaseTX
Card C - J4863A - 100/1000BaseTX
Card D - J4863A - 100/1000BaseTX
Card G - J4862B - 10/100BaseTX
Card H - J4862B - 10/100BaseTX

The gigabit modules are my focus here. Connected to the Gb modules are a few file servers and several desktop machines. The endpoints are running a combination of Win2k Pro, Win2K Server, Mac OS X, and Mac OS X Server. All of the desktop machines are running Mac OS X.

Users have been complaining for a while about sporadic poor performance. So, over the last couple weeks, I've thoroughly monitored the network from end to end. The servers and the client machines don't appear to be the cause of any bottlenecks so I've started focusing on the switch. My testing has consisted of mostly file copying from one device to another. This is a typical daily activity that users have noted as being slow. Throughout this testing I've been monitoring port counters and also have installed Procurve Manager.

For example's sake, I'll concentrate on modules A & B. Here's the port map:
A1 - Win2K Server 1
A2 - Win2K Server 2
A3 - Win2K Pro 1
A4 - Win2K Pro 2
A5 - Win2K Pro 3
A6 - Win2K Pro 4
B1 - Win2K Server 3
B2 - Win2K Server 4
B3 - OS X 1
B4 - OS X 2
B5 - Win2K Pro 5
B6 - Win2K Server 5


Major performance degradation is reproducible when 2 nodes send to a single node at the same time. For example, if A5 & A3 are both copying a large file (1000MB in my testing) to A1, the transfer rate for each copy drops to about 8.5MB/s. Compare that to a copy from just one port to another: an A5 -> A1 copy transfers at about 35MB/s. Granted, the many-to-one scenario should be slower by its nature; however, I believe it's excessively slow. Monitoring the port counters for each of the above copies, I see this:

A5,A3 -> A1: A5 has 1780 Drops Rx, A3 has 1807 Drops Rx, A1 has 0 Drops Rx
A5 -> A1: both ports have 0 Drops Rx

Note that the above example uses intra-module transfers. When doing inter-module transfers, I don't get the same results. For example:

A5,A6 -> B2: 0 Drops Rx on all ports, transfer speed is around 14MB/s

Note that there are no drops when doing an inter-module transfer. Even at 14MB/s, I believe that the switch (or something) is still limiting bandwidth. When 2 machines are sending to the same port at the same time, and all are on Gb, I believe the transfer speed should be greater than 14MB/s.
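For context, here's a back-of-envelope calculation (theoretical line rates only, not measurements from this network) of what two gigabit senders sharing one gigabit egress port should be able to do:

```python
# Back-of-envelope check: two gigabit senders into one gigabit receiver.
# Figures below are theoretical line-rate ceilings, not measurements.

GIG_BITS = 1_000_000_000          # 1000BASE-T line rate, bits/s

def mb_per_s(bits_per_s):
    """Convert bits/s to MB/s (1 MB = 10**6 bytes, matching the ~35 MB/s figures above)."""
    return bits_per_s / 8 / 1_000_000

one_to_one = mb_per_s(GIG_BITS)       # ceiling for a single A5 -> A1 copy
per_sender = mb_per_s(GIG_BITS) / 2   # fair share when A5 and A3 both send to A1

print(f"1:1 ceiling: {one_to_one:.0f} MB/s")     # full gigabit line rate
print(f"2:1 fair share: {per_sender:.1f} MB/s")  # each sender's half of the egress port
```

Even a perfectly fair 2:1 split leaves ~62 MB/s per sender available, so the observed 8.5MB/s (intra-module) and 14MB/s (inter-module) are an order of magnitude below what oversubscription alone would explain.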

Now consider this... the Win2K machine on port A4 has an Intel PRO/1000 MT adapter in it. (The other Win2K machines all have Intel PRO/1000 XT adapters.) Using Intel's ProSet config software, I can go in and tweak the NIC settings. Both the XT & MT adapters have identical configurable settings except one: the MT has a setting that's not available on the XT called "Interrupt Moderation Rate". I've found that this setting can make a big difference in both Drops Rx counts and transfer speed. For example, if I do the same intra-module transfer as above, but use A4 as one of the two sending ports, I get the following:

A5,A4 -> A1: 0 drops on all ports, both copies progress simultaneously at about 23MB/s

Even though only one of the sending devices has a configurable Interrupt Moderation Rate, that seems to be enough to speed things up. What I don't know, then, is whether the Interrupt Moderation Rate is a fixed value on the XT adapters and the Apple adapters. BTW, the OS X machines experience the same intra-module problems as noted above. In fact, they are worse, in that simultaneous intra-module transfers to the same port often cause one of the sending machines to completely drop the connection and fail the copy.

Or is the switch responsible for causing the drops and slow transfer speeds? This switch was running SW v7.53 when I began monitoring and testing. I've since updated to v7.70. I've also implemented the qos-passthrough-mode feature (all tests above were done with this latest config). Through all of my monitoring with PCM, none of the performance counters has reached a level anywhere near its warning threshold.

I'll talk to Intel too, but I wanted to post here and get feedback from anyone who may have some insight into this.

Thanks
12 REPLIES
Les Ligetfalvy
Esteemed Contributor

Re: Poor Gigabit Performance on 4108gl

Sounds to me like the switch is not doing so well as a storage device. Whenever there is a mix of port speeds, the switch turns into a store-and-forward device and relies on internal buffers, which I suspect are in short supply. If you read the release notes ftp://ftp.hp.com/pub/networking/software/Release-Notes-G-07-70-4100gl-59903067-0411.pdf on page 7, there is mention of tweaking QoS.
Les Ligetfalvy
Esteemed Contributor

Re: Poor Gigabit Performance on 4108gl

Sorry, I spoke too soon. I just re-read your post and see you have the QOS pass-through enabled.
André Beck
Honored Contributor

Re: Poor Gigabit Performance on 4108gl

Hi,

just two thoughts:

1) To test throughput, use throughput testing programs first. Testing with file transfers depends on other factors, like disk performance and the quality of the protocols used (which in the case of Windows SMB is especially adverse to any reliable testing). You should test with pure TCP streams that are sourced and sunk on the CPU. Have a look at NetIO, for instance.
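The idea of a CPU-sourced/CPU-sunk test can be sketched in a few lines of Python sockets (a minimal NetIO-style illustration, not a substitute for the real tool; host and port below are placeholders):

```python
# Minimal NetIO-style TCP throughput test: data is generated and discarded
# in memory, so disks and SMB are out of the picture. Run receiver() on one
# host and sender() on another, pointed at the receiver's address.

import socket
import time

CHUNK = 64 * 1024            # per-send buffer
TOTAL = 100 * 1024 * 1024    # bytes to send per test run

def receiver(port=5001):
    """Accept one connection, discard everything received, return MB/s."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", port))
    srv.listen(1)
    conn, _ = srv.accept()
    received = 0
    start = time.time()
    while True:
        data = conn.recv(CHUNK)
        if not data:            # sender closed the connection
            break
        received += len(data)
    elapsed = time.time() - start
    conn.close()
    srv.close()
    return received / elapsed / 1e6

def sender(host, port=5001):
    """Stream TOTAL bytes of zeros to the receiver as fast as possible."""
    sock = socket.create_connection((host, port))
    payload = b"\0" * CHUNK
    sent = 0
    while sent < TOTAL:
        sock.sendall(payload)
        sent += len(payload)
    sock.close()
```

Running two senders against one receiver from different client machines would reproduce the many-to-one scenario without any file-system or SMB/AFP overhead in the measurement.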

2) If interrupt mitigation indeed seems to help you gain throughput, this is a clear sign of the end systems being a limiting factor. This feature does nothing but reduce the actual rate of NIC interrupts seen by the CPU to less than one per received frame. The switch is (almost) completely out of the equation here, except for maybe adverse effects of faster feedback from the host (which should not make a difference due to full duplex operation).

BTW, do you have 802.3 flow control active? It could help prevent input queue overruns to some extent.

Re: Poor Gigabit Performance on 4108gl

Since last posting I've made some good progress. I implemented a utility called iperf to measure network bandwidth. What I'm finding is that the biggest factor affecting throughput is TCP window size.

My testing has basically involved two scenarios: 1) two Win2K clients simultaneously transferring to a Win2K server, and 2) two OS X clients simultaneously transferring to an OS X server.

I've determined that the Win2K machines should use a TCP window size of 17520 bytes. With that setting, I get acceptable speeds and no drops reported on the switch. Above that, I get dismal speeds and tons of drops on the switch. This is likely related to this tech note: http://www.intel.com/support/network/sb/cs-000037.htm

By adjusting the registry keys
TcpWindowSize
GlobalMaxTcpWindowSize

I'm able to force the OS to use this desired value.
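To get a rough sense of why the window size dominates here, there are two effects: the window caps steady-state throughput at window/RTT, and a full window can arrive at the switch as a line-rate burst that must be buffered. The RTT below is an assumed LAN value, not a measurement from this network:

```python
# Two window-size effects, sketched numerically.
# RTT is an assumed LAN round-trip time, NOT measured on this network.

RTT = 0.0005  # assumed 0.5 ms round-trip time, seconds

def ceiling_mb_s(window_bytes, rtt=RTT):
    """Steady-state TCP ceiling: at most one full window per round trip."""
    return window_bytes / rtt / 1e6

for window in (17520, 65536):
    print(f"{window:>6} B window -> {ceiling_mb_s(window):7.1f} MB/s ceiling")

# With two senders each bursting a full 64 KB window at line rate into one
# egress port, the port drains at half the combined arrival rate, so the
# switch must queue up to half the arriving bytes in the worst case:
burst_to_buffer = 2 * 65536 / 2  # bytes queued
print(f"worst-case queue for 2 x 64 KB bursts: {burst_to_buffer / 1024:.0f} KB")
```

With the assumed 0.5ms RTT, a 17520-byte window caps out near 35MB/s, which happens to line up with the single-stream speeds reported above; that is suggestive rather than proof, but it is consistent with the small window throttling senders enough that bursts fit in the switch's per-port buffers.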

So the Win2K machines seem to be taken care of. The OS X machines, however, are suffering from a similar problem. Using iperf, which allows you to transfer data using specific TCP window sizes, I've determined that 17520 is also a good value for the OS X machines. iperf reports ~38MB/s when doing simultaneous transfers, and zero drops are reported on the switch. When using the OS X default of 64K, iperf reports ~6MB/s and I get >600 drops reported on one of the sending machines' switch ports. Similar to the way Win2K allows editing the registry to force a specific TCP window size, OS X has TCP parameters that can be altered as well (which work similarly to other UNIX OSes).

net.inet.tcp.sendspace
net.inet.tcp.recvspace

are the two params affecting TCP window size. I've tried changing them to 17520; however, simultaneous Finder copies still crawl along at 6MB/s and I get tons of drops reported on the switch.

Does anyone have in-depth knowledge of these TCP params in a UNIX OS? Shouldn't they take effect immediately after altering them? It's as if the OS is not using the new values and is sticking to a 64K window size.

Also, although I can clearly alter performance by making changes on the client & server, could this problem still be related to the way the switch operates? I.e., are the lower TCP window sizes just masking an issue with the switch?

Regarding dropped frames on the switch, how can I go about determining what is causing those drops to occur?

Thanks for help on any of the above.

Re: Poor Gigabit Performance on 4108gl

Oh, and also, I have not tried turning on Flow Control at this point. Just out of curiosity, is it more common for a gigabit network to have flow control turned on or turned off?
Justin Capablanca
Occasional Visitor

Re: Poor Gigabit Performance on 4108gl

Hi Chris, you're not alone. We too run a similar network with a mixture of Wintel / OS X servers and clients, but mainly OS X client desktops. Just yesterday I moved all clients off the 4108 and onto a trusty 4000M to get things going again. Tests indicated that a client set to auto, plugged into a 10/100/1000 switch port (4108) also set to auto, would transfer at about 7Mb/s.
About a month ago I reported this issue to HP support, and I too was asked to apply the QoS pass-through fix. It worked for a little while, but now it's back. Are your Wintel clients complaining about things slowing down, or is it just the OS X side of things?
Found another interesting thing yesterday. The uplink between our 4108 and a 5308 was being flagged by PCM as utilising a lot of network bandwidth. I checked the config and both ends were set to auto, yet upon running a network consistency check, the report stated that the 4108 end of the link was running at 1000Mb/s while the 5308xl end was running at 10Mb/s. Weird... Both are running the latest firmware. I've got an open case with HP and will call in the morning.

I'll let you know how I go.

Re: Poor Gigabit Performance on 4108gl

It's been a couple months since I worked on this issue. Unfortunately, I haven't found a solution to the problem. After all of my testing/troubleshooting, which was pretty extensive, my best guess is that this switch's internal architecture isn't designed to handle many-to-one gigabit transfers very well. I too opened a call with HP, but that didn't produce any answers. They don't have any OS X machines to test with, so it's nearly impossible to resolve any OS X related issues through their tech support. As stated earlier, adjusting the TCP window size definitely improves performance. One thing I found out, though: the AFP protocol on OS X Server overrides any system-level TCP tweaks that are applied. As a result, it's impossible to change the TCP window size when client/server connections are via AFP. So I still have dismal performance when multiple OS X clients are sending to my OS X server.
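The AFP behavior is at least consistent with how sockets generally work: an application can set its own per-socket buffer sizes with setsockopt, and those override the system-wide sysctl defaults for that connection. A minimal illustration (the 17520 value mirrors the window size above; this is a sketch of the mechanism, not the actual AFP implementation):

```python
# Per-socket buffer sizes trump system-wide TCP defaults. An application
# (presumably what the AFP server is doing) can pin its own value.

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# What the system default would give this socket
# (on OS X, derived from net.inet.tcp.sendspace):
default_snd = sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)

# The application overrides it for this socket only:
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 17520)
tuned_snd = sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)

print(f"default SO_SNDBUF: {default_snd}, after setsockopt: {tuned_snd}")
sock.close()
```

If AFP pins its socket buffers this way, no amount of sysctl tuning will reach those connections, which matches the behavior described above. (Note that some kernels report back a value larger than the one requested, e.g. Linux doubles it for bookkeeping.)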

I discussed this issue on a couple of forums with several people who know their stuff, and the general consensus was that the switch was the culprit. One guy had a similar setup with a different Gb switch (a Dell switch, I believe) and was not having this issue.

My next step is to try to get my hands on a different switch and run the same tests that I've been doing. So far I haven't had a chance to do that, though.
Justin Capablanca
Occasional Visitor

Re: Poor Gigabit Performance on 4108gl

Hi Chris. Just a quick question. Are you plugging any of your 10/100 devices into your J4863A - 100/1000BaseTX cards?

I've been told that with the 4108, the guts are essentially a different beast. In your case, with both the 10/100 cards and the 10/100/1000 cards in the same switch, do the following (if you haven't already):

Do not plug any 10/100 devices into a 10/100/1000Mb card; it will interfere with all devices plugged into that card. Plug all 10/100 devices into your J4862B - 10/100BaseTX cards, and your 1000Mb clients only into your J4863A - 100/1000BaseTX cards.

In our case, we borrowed a 10/100 card and segregated the nodes in the prescribed way = better results.

J.

Re: Poor Gigabit Performance on 4108gl

10/100 devices were not a factor in any of my testing mentioned above. As for the issue you mention, I'm quite familiar with it. In fact, I was one of the first (if not the first) people to submit a trouble ticket to HP about it. Here's a thread that details that issue:

http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=680906

Re: Poor Gigabit Performance on 4108gl

Does the 4108 do store-and-forward packet processing only, or is it capable of cut-through processing? If it only does store-and-forward, is it feasible that that's the root of the problem? Would a cut-through device make a difference here, perhaps? Finally, do the new models (5406zl, 4208vl) do cut-through?

I'm looking at new switches so I'm reviving this thread. I want to make sure I don't have the same performance issues if I do replace the 4108.
Matt Hobbs
Honored Contributor

Re: Poor Gigabit Performance on 4108gl

As far as I know, all ProCurve switches are store and forward.

This is needed to check CRCs on packets, and is also required for multilayer operation (layer 2 & 3).

There's a good article about switching methods here:

http://www.ciscopress.com/articles/article.asp?p=357103&seqNum=4&rl=1

Re: Poor Gigabit Performance on 4108gl

Has HP abandoned Fast Path technology on the newer switches? In researching the 4100, it was apparent that HP was touting their Fast Path technology along with that model. I can't seem to find any references to Fast Path in regard to their newer chassis switches. Are they still using a similar architecture, perhaps, but just not giving it a fancy marketing name?