
Multicast and Blade systems

 
Linus Hedström
Occasional Visitor

Hi,

We have run into some strange issues while running multicast tests with two setups:

1. C7000 enclosure, BL685c blades, GbE2c interconnect switches, Red Hat 5.2; one of the servers has mrouted installed as an IGMP routing daemon, since as I understand it the GbE2c switches can't act as one. (See the config sketch after this list.)

2. Same as above, but with Virtual Connect modules instead of the GbE2c switches, connected to Extreme switches which act as IGMP routers.
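
For reference, a minimal sketch of an mrouted.conf for such a setup (the address below is a placeholder, not our real subnet):

    # /etc/mrouted.conf - minimal sketch; 10.0.0.1 is a placeholder address.
    # By default mrouted runs DVMRP/IGMP on every multicast-capable
    # interface, so an empty file is often enough. phyint lines are only
    # needed to tune or exclude individual interfaces:
    phyint 10.0.0.1 disable    # e.g. keep mrouted off a management NIC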


Generally, we have noticed that the servers sending UDP traffic have a very high CPU load, but the receivers seem to be fine.

The strangest issue, and our real problem, showed up when we ran tests on configuration 1 above: during multicast tests we cannot get a total throughput higher than 100 Mbit/s within the enclosure.

We first sent multicast from server A to server B. Both server A and the listener showed a transfer rate of 100 Mbit/s.

We then sent multicast from server C to server D at the same time. The result was that both server A and server C dropped to a transfer rate of 50 Mbit/s.
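
For reference, the tests are essentially equivalent to the following sender/receiver pair (a minimal sketch, not our exact tool; the multicast group, port, payload size, and duration are placeholders):

    # mcast_test.py - run "python mcast_test.py send" on one blade and
    # "python mcast_test.py recv" on another; each prints its own rate.
    import socket, struct, sys, time

    GROUP, PORT = "239.1.1.1", 5000    # placeholder multicast group/port
    PAYLOAD = b"x" * 1400              # one datagram, just under typical MTU
    DURATION = 10.0                    # seconds per run

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    nbytes, start = 0, time.time()

    if sys.argv[1] == "send":
        # TTL 1 keeps the traffic inside the local L2 domain
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
        while time.time() - start < DURATION:
            nbytes += sock.sendto(PAYLOAD, (GROUP, PORT))
    else:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(("", PORT))
        # join the group so the switch/IGMP router forwards it to us
        mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
        sock.settimeout(2.0)    # don't block forever if the sender stops
        try:
            while time.time() - start < DURATION:
                nbytes += len(sock.recv(2048))
        except socket.timeout:
            pass

    print("%.1f Mbit/s" % (nbytes * 8 / (time.time() - start) / 1e6))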

Is there some limitation within the enclosure/switches that is capping this for us?
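
For completeness, a sketch of how the negotiated link speed can be checked, to rule out a NIC auto-negotiating down to 100 Mbit (eth0 is an assumed interface name, not necessarily ours):

    # check_speed.py - print negotiated speed/duplex via ethtool (may need
    # root); "eth0" is an assumption, substitute the blade's real interface.
    import subprocess

    out = subprocess.Popen(["ethtool", "eth0"],
                           stdout=subprocess.PIPE).communicate()[0]
    for line in out.decode("ascii", "ignore").splitlines():
        if "Speed" in line or "Duplex" in line:
            print(line.strip())    # e.g. "Speed: 1000Mb/s" on a GbE link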