05-31-2007 01:54 PM
Procurve 1800s
We have six 1800 switches, with 2 ports trunked together between each switch. Three ML350 servers hang off one switch, all three with Gb cards. Approximately two months ago, users started complaining that the network was "slow" for all server resources. Not all users were complaining, only certain ones. Investigating, the only users with problems were those running at 100 Mb (link utilization of 1% - 2% max). Not all 100 Mb users were experiencing trouble, but no Gb clients experienced problems at all. After trying about 10 different fixes (disabling SMB signing, etc.) we started isolating the switch configuration. We moved one problem PC to the same switch as the servers, and everything worked perfectly. After playing with various configurations, we finally set fixed trunks between the switches and enabled flow control on both ends of each trunk.
My main question is "why did this happen?" If flow control is critical, why do I not need to enable flow control on every 100 Mb port? Will flow control cause any adverse effects for my Gb users?
Thanks for all your help!
06-01-2007 05:18 AM
Re: Procurve 1800s
Since it seems to be okay for 100Mbit clients connected to the same switch as the servers, it doesn't look like a switch buffer limitation, which is where flow control would usually come in useful.
The other thing I would try is just using a single port between the switches to see if it's in any way related to the trunking.
06-01-2007 05:41 AM
Re: Procurve 1800s
06-01-2007 08:53 AM
Re: Procurve 1800s
Maybe the 10/100 side of the switch has flow control enabled automatically internally (I'm really just hypothesising here), so when the gigabit-connected server on one switch sends traffic through to the other switch (also connected at gigabit), the receiving switch has to step it down to 10/100; that's the point where it runs out of buffers and sends a pause frame back through the uplink.
The problem with this is that it will pause all traffic across that link, possibly degrading the performance of the gigabit clients too. It may be imperceptible, though, so I guess the trade-off is up to you.
What I don't understand at the moment is why this only affects 100Mbit clients on the other switch and not on the same switch as the servers.
What if you just enable flow-control on the 10/100 ports instead, does that make any difference?
What sort of performance degradation were you seeing? Can you give a before and after in Mbit/s? What type of performance testing were you running? I'm guessing file sharing, and primarily TCP based.
I'm not sure if the 1800 supports it, but what I would do is try to snmpwalk the switches - in particular, RFC2665.mib contains the 802.3x information, so you can see which ports are pausing traffic inbound and outbound. I've never had a need to do this myself, but it's where I'd start. If you need a MIB browser I'd recommend the free version of the iReasoning MIB browser, and you can find the mib file here: http://www.hp.com/rnd/software/MIBs.htm
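As a sketch of what that walk would look like: the OIDs below are the pause columns of dot3PauseTable from EtherLike-MIB (RFC 2665); the switch address and community string are placeholders, and this just prints the net-snmp command lines rather than querying a live switch:

```python
# EtherLike-MIB (RFC 2665): the 802.3x pause objects live in dot3PauseTable.
DOT3_PAUSE_TABLE = "1.3.6.1.2.1.10.7.10.1"
COLUMNS = {
    2: "dot3PauseOperMode",    # is pausing actually negotiated on the port?
    3: "dot3InPauseFrames",    # PAUSE frames received on the port
    4: "dot3OutPauseFrames",   # PAUSE frames sent by the port
}

def walk_commands(switch_ip, community="public"):
    """Build the snmpwalk command lines to dump each pause column."""
    return [
        f"snmpwalk -v2c -c {community} {switch_ip} {DOT3_PAUSE_TABLE}.{col}  # {name}"
        for col, name in sorted(COLUMNS.items())
    ]

# Placeholder management address -- substitute your own:
for cmd in walk_commands("192.168.1.2"):
    print(cmd)
```

Nonzero and climbing `dot3OutPauseFrames` on the trunk ports would confirm that the receiving switch really is pausing the uplink.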
06-01-2007 10:08 AM
Re: Procurve 1800s
Thanks for all the responses, and I'm glad that someone else is as baffled as I.
Your concern about flow control is exactly what I was wondering - I'm potentially seriously degrading the link between the two switches, and in turn, all the clients connected to the secondary switch, not just the single client requesting 1 particular data stream.
I attempted to enable flow control only on the ports running at 100 Mbit - no dice. Same results with slow throughput. For performance testing, I simply zipped the i386 directory of a Windows XP disc; it's about a 700 MB file. With degraded performance, Windows was estimating between 93 and 125 minutes to copy, or roughly 1 Mbit/sec (0.125 MByte/sec). Hardly "performance." With flow control enabled on the trunks, I'm seeing ~37 Mbit/sec or 4.7 MByte/sec. Not spectacular, but WAAAAY faster than without flow control.
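Those figures are internally consistent; converting the quoted copy-time estimates for the 700 MB file into throughput:

```python
def mbit_per_s(megabytes, minutes):
    """Average throughput in Mbit/s for a transfer of the given size and duration."""
    return megabytes * 8 / (minutes * 60)

# 700 MB zip of the i386 directory, Windows estimating 93-125 minutes:
print(round(mbit_per_s(700, 125), 2), "to", round(mbit_per_s(700, 93), 2), "Mbit/s")  # 0.75 to 1.0
# With flow control on the trunks, 4.7 MByte/sec sustained:
print(round(4.7 * 8, 1), "Mbit/s")  # 37.6
```

So the fix took the link from around 1% of the 100 Mbit line rate to a bit under 40%.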
I will download the Mib browser and see if I can look into the 1800 a little closer.
Thanks again for all your help!
06-01-2007 03:09 PM
Re: Procurve 1800s
This seems easy enough to reproduce, if you can upload a basic network map of your setup there I'll see if I can try and reproduce it next week.
It's quite possible that this is 'expected' behaviour due to the most likely very small buffers on these switches. The 2800 and the 4100's 16-port gigabit module also suffered from this type of issue, but HP were able to provide a 'qos-passthrough-mode' command which optimised the buffers for 100-to-1000 transfers. Given the basic capabilities of the 1800, I'd be surprised if this type of feature could be implemented.
Having said that, that only applies if it is a buffering issue, of which I'm not entirely convinced at this point - given the strange problem description that it only happens on the switch that does not have the servers on it.
These 1800s would be considered optimised for gigabit, so if you needed an excuse to upgrade the 100Mbit machines, this would be a perfect opportunity.
06-01-2007 04:05 PM
Re: Procurve 1800s
I've attached a very crude Visio diagram of our network. Let me know if there are any additional questions or if I am unclear on some points. The small buffers may indeed be an issue. I guess it's a decent reason to upgrade the 100 Mb holdouts. I am a little disappointed in the performance, but perhaps you will uncover something I cannot. I did look into the iReasoning MIB browser; however, it does not appear that HP publishes a MIB for the 1800 series switches.
06-01-2007 04:27 PM
Re: Procurve 1800s
06-04-2007 02:01 PM
Re: Procurve 1800s
I ran a few quick tests today and was unable to reproduce it. For my tests I was using FTP and iperf to test the performance. I was unable to test with SMB as the machines were from different domains and it didn't seem to like that.
I also needed to use a 100Mbit switch connected to the second 1800, as I only had gigabit clients.
With FTP and iperf though, I was getting 100Mbit performance consistently between the gigabit and 100Mbit device.
I was only using one link between the 1800's, and have not logged into the web interface to check the configurations.
If I get some more time later this week I'll see if I can set it up with a 2 port trunk and also I'll make sure to get SMB file sharing working too.
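For reference, here is a minimal single-host sketch of the kind of memory-to-memory TCP test iperf runs. It pushes data through a loopback socket and reports the rate; to exercise the trunk in this thread's setup you would of course put sender and receiver on separate machines on different switches:

```python
import socket
import threading
import time

def tcp_throughput_probe(total_mb=64, chunk=65536):
    """Push total_mb of zeros through a loopback TCP socket; return Mbit/s."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))          # ephemeral port
    srv.listen(1)
    port = srv.getsockname()[1]

    def sink():
        # Receiver: drain the socket until the sender closes it.
        conn, _ = srv.accept()
        while conn.recv(chunk):
            pass
        conn.close()

    t = threading.Thread(target=sink)
    t.start()

    payload = b"\x00" * chunk
    sent = 0
    cli = socket.create_connection(("127.0.0.1", port))
    start = time.monotonic()
    while sent < total_mb * 1024 * 1024:
        cli.sendall(payload)
        sent += len(payload)
    cli.close()
    t.join()
    srv.close()
    elapsed = time.monotonic() - start
    return sent * 8 / elapsed / 1e6     # Mbit/s

if __name__ == "__main__":
    print(f"{tcp_throughput_probe():.0f} Mbit/s over loopback")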
06-05-2007 08:58 AM
Re: Procurve 1800s
I tried it again today, this time with 2x 1800's only, a 100Mbit device and a gigabit server, and I still couldn't reproduce this. (Both single link and 2 port trunk between the switches - no flow control).
If you can reproduce this easily and provide some more detail on exactly how you're testing I'll run a few more tests here.
Matt