Switches, Hubs, and Modems

Procurve 1800s

 

Just wondering if anyone has ever experienced this situation:

We have six 1800 switches, with 2 ports trunked together between each switch. Three ML350 servers hang off one switch, all three with Gb cards. Approximately two months ago, users started complaining that the network was "slow" for all server resources. Not all users were complaining, only certain ones. Investigating, the only users with problems were those running at 100 Mb (link utilization of 1-2% max). Not all 100 Mb users were experiencing trouble, but no Gb clients experienced problems at all. After trying about ten different fixes (disabling SMB signing, etc.), we started isolating the switch configuration. We moved one problem PC to the same switch as the servers, and everything worked perfectly. After playing with various configurations, we finally set fixed trunks between the switches and enabled flow control on both ends of the trunk.

My main question is "why did this happen?" If flow control is critical, why do I not need to enable flow control on every 100 Mb port? Will flow control cause any adverse effects for my Gb users?

Thanks for all your help!
18 REPLIES
Honored Contributor

Re: Procurve 1800s

Was it setting the static trunk or the flow control that seemed to make the real difference?

Since it seems to be okay for 100Mbit clients connected to the same switch as the servers, it doesn't seem like a switch buffer limitation which is where flow control would usually come in useful.

The other thing I would try is just using a single port between the switches to see if it's in any way related to the trunking.

Re: Procurve 1800s

It seemed to be the flow control that really helped. I tried connecting the switches with only one port (configured as a single-port trunk, as one port of a two-port trunk, or as a plain port not set as a trunk) - no dice. Same results in every case until I enabled flow control.
Honored Contributor

Re: Procurve 1800s

I would also be concerned about having to enable flow control between the two switches. With flow control enabled, whichever device on the link runs out of buffers sends PAUSE frames telling the device at the other end to stop transmitting.

Maybe the 10/100 side of the switch has flow control enabled automatically on its internal path (I'm really just hypothesising here). So when the gigabit-connected server on one switch sends traffic through to the other switch (also connected at gigabit), the receiving switch has to buffer it down to 10/100, and that's the point where it runs out of buffers and sends a PAUSE frame back through the uplink.

The problem with this is that it will pause all traffic across that link, possibly degrading the performance of the gigabit clients as well. It may be imperceptible, though, so I guess the trade-off is up to you.
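The head-of-line effect described above can be sketched with a toy simulation. This is purely my own illustration - the buffer size, rates, and half-and-half traffic split are made-up assumptions, not measured 1800 behaviour - but it shows how pausing the whole uplink to protect one slow port starves the gigabit flows too:

```python
# Toy discrete-time sketch of 802.3x PAUSE on a shared uplink.
# Once frames queued for the 100 Mb port fill the buffer, the receiving
# switch pauses the *entire* uplink, stalling gigabit traffic as well.

def simulate(ticks, uplink_rate, slow_drain, buffer_limit):
    buffered = 0        # frames queued for the 100 Mb port
    paused = 0          # ticks the whole uplink spent paused
    fast_delivered = 0  # frames that reached gigabit clients
    for _ in range(ticks):
        if buffered >= buffer_limit:
            paused += 1                         # PAUSE: nothing crosses the uplink
        else:
            buffered += uplink_rate // 2        # half the uplink feeds the slow port
            fast_delivered += uplink_rate // 2  # the rest goes to gigabit clients
        buffered = max(0, buffered - slow_drain)  # 100 Mb port drains slowly
    return paused, fast_delivered

paused, fast = simulate(ticks=100, uplink_rate=10, slow_drain=1, buffer_limit=20)
# With no pauses the gigabit clients would have received 500 frames;
# the PAUSE cycles cost them most of that capacity.
```

The numbers are arbitrary; the point is only that the PAUSE punishes every flow crossing the trunk, not just the one headed for the slow port.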

What I don't understand at the moment is why this only affects 100 Mbit clients on the other switch and not those on the same switch as the servers.

What if you just enable flow-control on the 10/100 ports instead, does that make any difference?

What sort of performance degradation were you seeing? Can you give a before and after in Mbit/s? What type of performance testing were you running? I'm guessing file sharing, primarily TCP based.

I'm not sure if the 1800 supports it, but what I would do is try to snmpwalk the switches - in particular, RFC2665.mib (the EtherLike-MIB) contains the 802.3x information, so you can see which ports are pausing traffic inbound and outbound. I've never had a need to do this myself, but it's where I'd start. If you need a MIB browser, I'd recommend the free version of the iReasoning MIB browser, and you can find the MIB file here: http://www.hp.com/rnd/software/MIBs.htm
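If a MIB browser is overkill, the same counters can be pulled with net-snmp's snmpwalk and parsed by hand. This is just a sketch: the switch address and "public" community string below are placeholders, and the OIDs are the standard dot3InPauseFrames/dot3OutPauseFrames columns from RFC 2665:

```python
# Walk the dot3PauseTable counters with net-snmp, e.g.:
#   snmpwalk -v2c -c public -On <switch-ip> .1.3.6.1.2.1.10.7.10.1.4
# (...10.1.3 = dot3InPauseFrames, ...10.1.4 = dot3OutPauseFrames;
# the trailing number on each OID is the port's ifIndex.)

def parse_pause_counters(walk_output):
    """Map ifIndex -> PAUSE-frame count from numeric snmpwalk output lines."""
    counters = {}
    for line in walk_output.splitlines():
        oid, _, value = line.partition(" = ")
        if "Counter32:" in value:
            counters[int(oid.rsplit(".", 1)[1])] = int(value.split(":")[1])
    return counters

# Sample output for two ports; port 49 would be the one emitting PAUSE frames.
sample = """.1.3.6.1.2.1.10.7.10.1.4.25 = Counter32: 0
.1.3.6.1.2.1.10.7.10.1.4.49 = Counter32: 1024"""
pausing = {port for port, n in parse_pause_counters(sample).items() if n > 0}
```

Walking the counters twice a minute apart also tells you whether pauses are still happening or are just historical.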

Re: Procurve 1800s

Matt,

Thanks for all the responses, and I'm glad that someone else is as baffled as I.

Your concern about flow control is exactly what I was wondering - I'm potentially seriously degrading the link between the two switches, and in turn, all the clients connected to the secondary switch, not just the single client requesting 1 particular data stream.

I attempted to enable flow control only on the ports running at 100 Mbit - no dice, same slow throughput. For performance testing, I simply zipped the i386 directory of a Windows XP disc - it's about a 700 MB file. With degraded performance, Windows estimated between 93 and 125 minutes to copy it, or roughly 1 Mbit/s (0.125 MB/s). Hardly "performance." With flow control enabled on the trunks, I'm seeing ~37 Mbit/s (4.7 MB/s). Not spectacular, but WAAAAY faster than without flow control.
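As a quick sanity check on those figures (assuming decimal megabytes), the arithmetic works out:

```python
# Sanity-check the quoted numbers: 700 MB in ~93 minutes vs. 4.7 MB/s.

def mbit_per_s(megabytes, seconds):
    """Average throughput in Mbit/s, assuming decimal megabytes."""
    return megabytes * 8 / seconds

slow = mbit_per_s(700, 93 * 60)  # degraded copy: ~1.0 Mbit/s
fast = mbit_per_s(4.7, 1)        # with flow control: ~37.6 Mbit/s
```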

I will download the Mib browser and see if I can look into the 1800 a little closer.

Thanks again for all your help!
Honored Contributor

Re: Procurve 1800s

Alan,

This seems easy enough to reproduce. If you can upload a basic network map of your setup, I'll see if I can reproduce it next week.

It's quite possible that this is 'expected' behaviour due to the most likely very small buffers on these switches. The 2800 and 4100's 16-port gigabit module also suffered from this type of issue but HP were able to provide a 'qos-passthrough-mode' command which optimised the buffers for 100-1000 transfers. Due to the basic capabilities of the 1800 I'd be surprised if this type of feature could be implemented.

Having said that, that's only if it is a buffering issue, of which I'm not entirely convinced at this point, given the strange detail that it only happens on the switch that does not have the servers on it.

These 1800s would be considered optimised for gigabit, so if you needed an excuse to upgrade the 100 Mbit machines, this would be a perfect opportunity.


Re: Procurve 1800s

Matt,

I've attached a very crude Visio diagram of our network. Let me know if there are any additional questions or if I am unclear on some points. The small buffers may indeed be an issue. I guess it's a decent reason to upgrade the 100 Mb holdouts. I am a little disappointed in the performance, but perhaps you will uncover something I cannot. I did look into the iReasoning MIB browser; however, HP does not appear to publish a MIB for the 1800 series switches.
Honored Contributor

Re: Procurve 1800s

For the purpose of checking flow-control, all you should need to load is the RFC2665.mib. I'll let you know how I go next week.
Honored Contributor

Re: Procurve 1800s

Hi Alan,

I ran a few quick tests today and was unable to reproduce it. For my tests I was using FTP and iperf to test the performance. I was unable to test with SMB as the machines were from different domains and it didn't seem to like that.

I also needed to use a 100 Mbit switch connected to the second 1800, as I only had gigabit clients.

With FTP and iperf though, I was getting 100Mbit performance consistently between the gigabit and 100Mbit device.
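If anyone wants to repeat the test without installing iperf, a minimal iperf-style TCP throughput probe is easy to script. This is a sketch, not a substitute for iperf: the host and port are placeholders, it assumes Python on both machines, and it measures application-level goodput only:

```python
# Minimal TCP throughput probe: run serve_once() on the receiver
# (e.g. the 100 Mbit client), then measure("<receiver-ip>") from the
# gigabit machine. Host/port are placeholder assumptions.
import socket
import time

CHUNK = 64 * 1024  # bytes per send/recv call

def serve_once(host="0.0.0.0", port=5001):
    """Accept one connection and discard everything sent to it."""
    with socket.socket() as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            while conn.recv(CHUNK):
                pass

def measure(host, port=5001, total_bytes=50 * 1024 * 1024):
    """Send total_bytes and return the average rate in Mbit/s."""
    payload = b"\x00" * CHUNK
    sent = 0
    start = time.monotonic()
    with socket.create_connection((host, port)) as sock:
        while sent < total_bytes:
            sock.sendall(payload)
            sent += len(payload)
    return sent * 8 / (time.monotonic() - start) / 1e6
```

Comparing the reported rate with and without flow control on the trunk should show the same before/after gap Alan saw with the file copy.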

I was only using one link between the 1800's, and have not logged into the web interface to check the configurations.

If I get some more time later this week I'll see if I can set it up with a 2 port trunk and also I'll make sure to get SMB file sharing working too.

Honored Contributor

Re: Procurve 1800s

Hi Alan,

I tried it again today, this time with 2x 1800's only, a 100Mbit device and a gigabit server, and I still couldn't reproduce this. (Both single link and 2 port trunk between the switches - no flow control).

If you can reproduce this easily and provide some more detail on exactly how you're testing I'll run a few more tests here.

Matt