11-13-2008 03:21 AM
Multicast and Blade systems
Hi,
We have experienced some strange issues when we run some multicast tests with two setups:
1. C7000 enclosure, BL685c blades, GB2ec interconnect switches, Red Hat 5.2; one of the servers has mrouted installed as an IGMP routing daemon. (I have understood that the GB2ec switches can't act as one.)
2. Same as above, but with Virtual Connect modules instead of the GB2ec switches, connected to Extreme switches which act as IGMP routers.
Generally we have noticed that the servers that send UDP traffic have a very high CPU load, but the receivers seem to be okay.
The strangest issue, which is our real problem, happened when we tried some tests on the configuration 1 above.
When doing multicast tests, we can't get more than 100 Mbit/s in total within the enclosure.
We first tried to send multicast from server A to server B.
The result was that server A had a transfer rate of 100 Mbit/s, and the listener received at the same rate.
We then tried to send multicast from server C to server D at the same time.
The result was that server A and server C each dropped to a transfer rate of 50 Mbit/s.
It seems like there is some limitation within the enclosure/switches that is capping this for us?
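For reference, the tests boil down to a UDP multicast sender on one blade and a listener on another. Below is a minimal sketch in Python of that kind of sender/receiver pair; the group address 239.1.1.1, port 5000, payload size and run duration are illustrative values, not the exact tool or settings we used.

# Minimal UDP multicast throughput sketch (illustrative values, not the exact test tool used).
# Run "python mcast_test.py recv" on the listening blade first, then
# "python mcast_test.py send" on the sending blade.
import socket
import struct
import sys
import time

GROUP = "239.1.1.1"    # illustrative multicast group
PORT = 5000            # illustrative port
PAYLOAD = b"x" * 1400  # roughly MTU-sized UDP payload
DURATION = 10          # seconds per test run

def send():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # keep the TTL small so the traffic stays within the local multicast domain
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
    sent, start = 0, time.time()
    while time.time() - start < DURATION:
        sock.sendto(PAYLOAD, (GROUP, PORT))
        sent += len(PAYLOAD)
    print("sent %.1f Mbit/s" % (sent * 8 / (time.time() - start) / 1e6))

def recv():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    # join the group on all interfaces; this is what triggers the IGMP membership report
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    sock.settimeout(5)
    received, start = 0, time.time()
    try:
        while time.time() - start < DURATION:
            data, _ = sock.recvfrom(2048)
            received += len(data)
    except socket.timeout:
        pass
    print("received %.1f Mbit/s" % (received * 8 / (time.time() - start) / 1e6))

if __name__ == "__main__":
    recv() if len(sys.argv) > 1 and sys.argv[1] == "recv" else send()

Each side prints its own approximate rate in Mbit/s when the run finishes, which is how we compare the sender and listener numbers quoted above.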