Switches, Hubs, and Modems
number of pps for storm control


Does anyone have guidelines on what is an acceptable number of broadcast/multicast packets per second on a Gbit interface?  I did some wiresharking this morning on a trunk port (a test switch with a trunk hosting several different VLANs, no other ports online) and measured over a 20-minute period.  I saw about 222 pps of traffic, which in our environment seems fairly normal (to me anyway).  I see the usual suspects like SSDP and LLMNR, which we should think about eliminating on our Windows systems.  What do others out there regard as expected?
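For anyone wanting to repeat the measurement without a full capture: you can get the same average rate from two readings of the interface's broadcast (or multicast) packet counter. A minimal sketch, assuming you can sample the counter at the start and end of the window (via SNMP, `show interface`, or similar); the function name and numbers are mine:

```python
def packets_per_second(count_start, count_end, interval_seconds):
    """Average packet rate from two readings of an interface counter.

    count_start / count_end: broadcast (or multicast) packet counter
    sampled at the start and end of the measurement window.
    """
    if interval_seconds <= 0:
        raise ValueError("interval must be positive")
    return (count_end - count_start) / interval_seconds

# Example: 266,400 broadcast packets seen over a 20-minute window
rate = packets_per_second(1_000_000, 1_266_400, 20 * 60)
print(f"{rate:.0f} pps")  # 222 pps
```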

I'm also in the process of configuring storm-constrain on our switches.  What isn't clear to me: if the upper limit of the constraint is reached and we have defined the block action, I assume the switch then blocks only broadcast (or multicast, if configured) traffic for a limited time and lets other traffic through (I would at least hope it does)?
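For reference, here is roughly the kind of configuration I'm testing with. The syntax is Comware-style and the thresholds are placeholder values I picked for the lab, so treat it as a sketch rather than a recommendation:

```
interface GigabitEthernet1/0/1
 storm-constrain broadcast pps 500 200    (upper/lower thresholds, placeholders)
 storm-constrain control block            (block the offending traffic type,
                                           rather than shutting the port down)
 storm-constrain enable log
 storm-constrain enable trap
```

The log/trap lines are there so you actually find out when the threshold trips instead of silently losing broadcasts.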

Honored Contributor

Re: number of pps for storm control

It may vary depending on the switch vendor/model, but on the whole, storm control should only block broadcast packets over and above the threshold you've set.

I can't recall the exact pps figure, but I have previously looked at broadcast traffic on a broadcast segment that had something like 1,000-odd devices on it, and saw a rate of about 6 Mb/s.  Depending on the size of the packets (there was no voice), that works out to roughly 1,000 pps.
(I think the bandwidth is much more important than the pps.)
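The bandwidth-to-pps conversion is simple arithmetic; a quick sketch, where the 750-byte average frame size is just an assumption that makes my numbers line up:

```python
def bandwidth_to_pps(megabits_per_second, avg_frame_bytes):
    """Convert an observed bit rate to an approximate packet rate."""
    bytes_per_second = megabits_per_second * 1_000_000 / 8
    return bytes_per_second / avg_frame_bytes

# 6 Mb/s of broadcast at an average frame size of 750 bytes:
print(round(bandwidth_to_pps(6, 750)))  # 1000
```

With small frames (say 100 bytes) the same 6 Mb/s would be ~7,500 pps, which is why pps alone is a poor yardstick.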

In that context, 200 pps seems about right for a subnet the way I would design them (/24), if it was reasonably full.

I think your priorities should be, in order:
 - make sure STP is configured properly
 - make sure loop protect is configured
 - configure IGMP to reduce multicast traffic
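On ProCurve/ArubaOS-Switch gear those three items look roughly like the following; the port list and VLAN number are placeholders, and the exact commands vary by platform, so check your own documentation:

```
spanning-tree
loop-protect 1-24
vlan 10
   ip igmp
   exit
```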

My biggest broadcast issues (assuming the above is done) are usually:
 - dodgy servers misconfigured with multiple interfaces, giving ARP responses that don't match the MAC address the server actually uses to originate traffic.
(Fixed by disabling one of the interfaces. The server guys eventually notice; just tell them you disabled it as a result of an incident, that their server is badly configured, and that they should submit a change request to get the port re-enabled. That forces them to either fix it or go away.)
 - rarely, faulty hardware.