Switches, Hubs, and Modems

Re: Better load balancing on trunk lines

 
Willman Lee
Occasional Advisor

Re: Better load balancing on trunk lines

Thanks for the sample configs Dan, I'll look at them and see if I can adapt them for our two core HP switches.
Willman Lee
Occasional Advisor

Re: Better load balancing on trunk lines

Thanks for that document link. I read it and noticed something in it:

================
Trunking in a Layer 3 Environment
Traditional trunking uses MAC (Layer 2) addresses to determine which link in the trunk a
particular traffic flow travels over to avoid the problem of out-of-sequence packets. In a Layer 3
environment between two routing switches this would cause all packets to flow over only one
link because the source and destination MAC addresses for all packets would be the same: the
MAC address of the two connected routing switches.
To avoid this situation the ProCurve Switch 5300 Series uses the source and destination IP
addresses to determine which link a particular packet flow uses. This will provide a good overall
distribution of traffic across the different links in the trunk.
================

That seems to be what is happening on our two core switches (using the switch MACs as the SA/DA). Is there a configuration setting I'm not seeing that is causing the switches to use MAC addresses instead of IP addresses for distribution? Both switches are layer 3 and acting as routers.
Matt Hobbs
Honored Contributor

Re: Better load balancing on trunk lines

Nope, you're not missing anything; IP-based distribution happens automatically when you have a point-to-point L3 routed link between two switches. It has to work that way, since only two MAC addresses are involved between the two routers.
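To illustrate Matt's point (with a made-up hash function, not HP's actual algorithm), here's a small Python sketch of why a MAC-based hash collapses to one link on a routed point-to-point connection while an IP-based hash still spreads flows:

```python
# Hypothetical trunk hash: XOR the low bits of two addresses.
# This is NOT HP's documented algorithm, just an illustration.
def pick_link(src: int, dst: int, num_links: int) -> int:
    """Map an address pair onto one member link of the trunk."""
    return (src ^ dst) % num_links

NUM_LINKS = 4

# Layer 2 hashing on a routed point-to-point link: every frame carries
# the same two router MACs, so every flow lands on the same member link.
router_a_mac, router_b_mac = 0x1A, 0x2B
links_l2 = {pick_link(router_a_mac, router_b_mac, NUM_LINKS)}

# Layer 3 hashing: each flow has its own end-host IP pair, so the
# flows spread across the trunk members.
flows = [(src, dst) for src in range(1, 21) for dst in range(50, 55)]
links_l3 = {pick_link(s, d, NUM_LINKS) for s, d in flows}

print(len(links_l2), len(links_l3))  # one link used vs. several
```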
Teknisk Drift_1
Occasional Advisor

Re: Better load balancing on trunk lines

hmm.. tricky one, this..

It was said here earlier that the distribution of traffic is random. That isn't quite right if you read the manual (you can also see it in the manual quote you included in your last post). Rather, the switch calculates, from the SA/DA pair, which port to send traffic through. HP uses an XOR of the last three bits of each address on other equipment; I'd guess it's something along those lines here too.

This means that the imbalance in traffic /could/ be down to the distribution of the IP addresses that are communicating. In that case, your only options are to add more links to the trunk or to change which IPs are in use.

You have 800 sources and 30 destinations, but that doesn't mean you have 800x30 SA/DA pairs, because traffic isn't between random hosts. This can result in an uneven load distribution. (Use PCM or another tool to see who is responsible for the majority of the load; there could be clues to better balancing of traffic/addresses there.)
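A quick sketch of that point in Python (assuming an XOR-of-the-low-bits hash, which is a guess on my part, not something from the 5300 manual): if the SA/DA pairs that actually talk are correlated, the hash can pile everything onto one link.

```python
from collections import Counter

def link_index(sa_last_octet: int, da_last_octet: int, num_links: int) -> int:
    """Hypothetical hash: XOR of the last three bits of each address."""
    return ((sa_last_octet & 0b111) ^ (da_last_octet & 0b111)) % num_links

# 800 sources and 30 destinations, but not 800x30 pairs: here each
# source talks to exactly one destination (a contrived pattern).
active_pairs = [(s, s % 30) for s in range(800)]
load = Counter(link_index(s, d, 2) for s, d in active_pairs)

# Because d = s % 30 preserves the source's parity, the XOR's low bit
# is always 0: every single flow hashes to link 0.
print(load)  # Counter({0: 800})
```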


Secondly: what is "a lot" of errors? What percentage of packets fail?
(That is: are you sure you have a problem?)

But, something else strikes me here:

1. Certain HP equipment doesn't distribute all traffic to all ports
2. When you added/removed a link, the distribution didn't change; you just moved a whole bunch of traffic from one port to the next.

Could this be because multicast is always forwarded on the same port?

The ProCurve 6400/5300 manual says that non-unicast traffic is spread evenly. But for our GbE2s the manual says: "Multicast, broadcast, and unknown unicast will be forwarded on the lowest port number in the trunk by default".

Could it be that the ProCurve manual is wrong? I'd run this by ProCurve Support.
Matt Hobbs
Honored Contributor

Re: Better load balancing on trunk lines

That's a great point. Multicast/broadcast traffic will only utilise one link. Going by the traffic screenshots attached earlier, this looks to be what's happening.

The only way to load balance that better would be via the MSTP method, certain VLANs utilising certain links.
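A rough sketch of what that MSTP approach could look like between two ProCurve cores. The port numbers, VLAN split, and path-cost values here are all hypothetical, and the exact command syntax should be checked against the Advanced Traffic Management Guide for your software version:

```
; Two separate trunks between the cores instead of one big one
trunk a1-a2 trk1 trunk
trunk a3-a4 trk2 trunk

; MSTP: map half the VLANs to instance 1, the rest to instance 2
spanning-tree
spanning-tree config-name "core-mst"
spanning-tree config-revision 1
spanning-tree instance 1 vlan 101 102 103 104 105 106 107 108
spanning-tree instance 2 vlan 109 110 111 112 113 114 115 116 117 200

; On one core, make trk1 cheaper for instance 1 and trk2 cheaper for
; instance 2, so each instance forwards its VLANs over its own trunk
spanning-tree instance 1 trk1 path-cost 100000
spanning-tree instance 1 trk2 path-cost 200000
spanning-tree instance 2 trk1 path-cost 200000
spanning-tree instance 2 trk2 path-cost 100000
```

The idea is that instance 1's VLANs forward over Trk1 and instance 2's over Trk2, with each trunk acting as the blocked backup path for the other instance.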

Willman Lee
Occasional Advisor

Re: Better load balancing on trunk lines

Great replies. Finally I have an answer as to why we have so much traffic on the one port. We do have a ton of multicast traffic on the network, and that would explain the problems we are having.

In regards to the first two paragraphs in Teknisk's reply, all the multicast traffic originates from IP addresses 10.10.101-117.x (VLANs 101-117) and is directed to the 12 video display units at 10.10.20.x (VLAN 200). We have used PCM to look at the traffic and it's fairly even across the VLANs (there are 4 or 5 that carry a bit more than the others, but not excessively). Could the traffic imbalance be due to the fact that all destination addresses for the multicast traffic are on the 10.10.20.x devices? I have already tried adding more links without success. Changing IPs is not an option, as the video display units have to be in the same VLAN for the video system to work properly.

A lot of errors is about 1000 every 5-10 minutes on our secondary core switch. I know that HP's acceptable level of errors is zero, but in real-life situations about 5-10 over the course of a week is more realistic. We definitely know we have a problem, as we can see the dropped packets on the video streams. The way our video system works (according to the manufacturers) is that when a multicast stream is called from a video display, the first packet is like an initialization packet. If that packet gets dropped, the video display units are not smart enough to know that it was dropped, and all that follows is black video. Partly this is a design fault that they are addressing, but at the same time their engineers say that most good networks should have near-zero dropped packets, so black video is normally very rare.

I have Dan's sample configs on directing VLAN traffic to specific ports between switches, but I'm finding them hard to understand as they are between an HP and a Cisco switch. I have no experience with Cisco equipment, so if anyone has a sample config showing how to direct VLAN traffic to different ports on ProCurve switches, it would be most appreciated.

Thanks.
Matt Hobbs
Honored Contributor

Re: Better load balancing on trunk lines

For those drops, I still highly recommend you use the 4-port gigabit module only for your switch-to-switch links. The 16-port module is oversubscribed and has to share more buffer space with other ports.

Now that it appears the broadcast/multicast traffic lands on the first port of the trunk, if you could move that module to slot A and at least use one of its ports as part of the trunk, you should be able to get around those drops.
Teknisk Drift_1
Occasional Advisor

Re: Better load balancing on trunk lines

>>would finally explain why we are having our problems.

Excellent. But since the manual says it should distribute multicast evenly, I'd check it out with HP. Maybe there's a fix...

>>Could the traffic imbalance be due to the fact that all destination addresses for multicast traffic is on the 10.10.20.x devices?

Well, theoretically, I guess. But that would require most of your DAs to have the same last three bits in the address, and even then you'd have to be unlucky with the distribution of the SAs.
I'm more inclined toward the multicast-on-one-port explanation.


>if anyone has a sample config on how to
>configure VLAN traffic directed to
>different ports on Procurve switches

Uh... no expert on this, but I think multiple trunks plus MSTP is your option there. How Dan's config works I have no idea; where is STP in there?

Anders :)