Switches, Hubs, and Modems

Procurve 4104 and Proliant Adapter Teaming and Spanning Tree

 


Hello,

we have an installation with several ProCurve 4104 switches and may be running into problems with the spanning tree configuration.

Specifically, we have 6 ProCurve 4104 switches located in different rooms. We decided to implement a ring topology, where each switch is connected to two others by a two-port LACP link. This results in a ring, so we have to activate RSTP on all switches.
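For reference, a ring like this is usually built from two LACP trunks per switch plus RSTP. A minimal sketch of the per-switch config (port numbers and trunk names are placeholders; exact command syntax may vary with firmware revision, so check the CLI help on your units):

(config)# trunk 21-22 trk1 lacp
(config)# trunk 23-24 trk2 lacp
(config)# spanning-tree
(config)# spanning-tree protocol-version rstp

Here trk1 and trk2 would be the links to the two neighbouring switches in the ring; RSTP then blocks one link somewhere in the ring to break the loop.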

In the different rooms we also have ProLiant DL585 and DL385 servers which use the built-in adapter teaming in TLB mode (Transmit Load Balancing). Each team consists of 2 NICs, and every NIC is connected to a different switch. The switches are configured so that all ProLiant ports are in the same VLAN / broadcast domain.

Mostly all of this works well: spanning tree operates OK, the teaming is working, all seems good. However, one server application on the ProLiant has trouble with lost TCP connections.

Questions:
Is there any special configuration needed on the ProCurves when used with ProLiant TLB teaming across 2 switches?
Is there any precise documentation on that?
Does anybody have experience with that?

Any help is welcome...

Kind Regards....

Benedikt
4 REPLIES
Matt Hobbs
Honored Contributor

Re: Procurve 4104 and Proliant Adapter Teaming and Spanning Tree

There's nothing special that needs to be configured on the switches for TLB.

What's the application that's having trouble? If you simply disconnect one NIC out of the team, does the application stop having problems?
cenk sasmaztin
Honored Contributor

Re: Procurve 4104 and Proliant Adapter Teaming and Spanning Tree

hi Benedikt
you could try using a plain HP trunk (non-protocol trunking) for the links between the switches, and remove the NIC teaming config on the server, then try again:

(config)#trunk 25-26 trk1 trunk




Also, please post the output of 'show tech' from all switches.

good luck
cenk

Re: Procurve 4104 and Proliant Adapter Teaming and Spanning Tree

Hi,

thanks for the answers. I will try to answer the questions:

The application is a newsroom application which is used to distribute incoming news agency feeds to the users and allows them to create features and articles. Today we have up to 500 users connected to the system simultaneously.

The clients communicate with the server using TCP connections to one specific port. Sometimes the main service disconnects several users in the same second (sometimes 3, sometimes over 50 users).

The software vendor reports that the application disconnects must be caused by TCP disconnects, but we don't have any more hints than that, so we have to dig deep...
The switches and the servers communicate over Gigabit Ethernet links, while the clients are connected through the customer's main network via 100 Mbit uplinks.

We have the same configuration (servers, clients, application) for the same customer at a different site. The only difference is that those servers are directly connected to Cisco switches which operate at Gigabit.

We removed one of the NICs on Friday, and it behaves a little better now, but there is no breakthrough.

We also updated the switch firmware to the latest revision G.07.107 last night and installed a network analyzer to check the traffic between the clients and the server.

I hope this answers all questions....;-)

One question:
We use RSTP for spanning tree and LACP for trunking between the switches. Is that the best choice?

Kind Regards

Benedikt
Matt Hobbs
Honored Contributor

Re: Procurve 4104 and Proliant Adapter Teaming and Spanning Tree

RSTP and LACP are fine. One thing to quickly check would be the stability of your spanning-tree with 'show span' - if you're seeing very frequent topology changes you'll need to address that.
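As an example of what to look for (assuming the usual 'show spanning-tree' output of this series, which includes a topology change counter and a time-since-last-change field):

(config)# show spanning-tree

If the topology change count keeps climbing while the network is otherwise stable, a port somewhere is flapping and repeatedly triggering reconvergence, which would fit the symptom of many users dropping at the same second.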

A couple of other things you could try at the moment would be to experiment with flow control: try enabling it on the server and on the server's switch port, and likewise on the client machines.
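On the ProCurve side, flow control is enabled per port, along these lines (the port number is a placeholder for the server's port; verify the syntax against your firmware's CLI reference):

(config)# interface 25 flow-control

On the server side, flow control is set in the NIC driver / teaming utility, and it only takes effect if both ends of the link negotiate it.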

You will often see these types of issues when going from Gigabit servers to 100 Mbit clients due to limited buffers in the switches. If you're using the 20-port Gigabit module, you can also try the 'qos-passthrough-mode' command. Very worthwhile trying.
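That command is entered from the config context; on this series it trades QoS queues for larger effective port buffering (available options, and whether a reboot is needed for it to take effect, depend on the firmware revision, so check the CLI help first):

(config)# qos-passthrough-mode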