Performance Degradation due to Inter-BladeServer Bandwidth Sharing
04-22-2010 06:15 AM
The problem is this: the bridging mechanism on the Flex-10 broadcasts all traffic to every port, and this causes the performance degradation. Is there any way to control the traffic? Should we add more interconnect modules, mezzanine cards, or something else?
I apologize if this seems silly, but I'm a newbie with blade servers!
Solved! Go to Solution.
04-23-2010 09:36 AM
Re: Performance Degradation due to Inter-BladeServer Bandwidth Sharing
You can find me from Twitter @JKytsi
04-24-2010 09:35 PM
Re: Performance Degradation due to Inter-BladeServer Bandwidth Sharing
We use HP ProLiant BL460c G6 servers, with no additional mezzanine cards.
04-25-2010 10:46 PM
Re: Performance Degradation due to Inter-BladeServer Bandwidth Sharing
You can find me from Twitter @JKytsi
04-25-2010 11:27 PM
Re: Performance Degradation due to Inter-BladeServer Bandwidth Sharing
- INPUT-Net: uplink Bay1:X3, LAG ID 25, 10G — LOM:1-a on the server in bay 7 receives the 10G traffic
- PROCESS-Net: LOM:1-a on the server in bay 7 and LOM:2-a on the other servers — carries the 10G traffic from the bay 7 server to all servers
- Management-Net: uplink Bay1:X2, LAG ID 27, 100Mb — LOM:1-b on each server, used for management purposes
- OUTPUT-Net: uplink Bay1:X2, 9.9Gb — LOM:1-b on each server except the server in bay 7

The other servers are in bays 1, 2, 3, 4, 5, and 6. The HP VC Flex-10 Enet Module is in Bay 1.
I also have enclosed the Server Port Information as a txt file.
04-27-2010 04:50 AM
Re: Performance Degradation due to Inter-BladeServer Bandwidth Sharing
What does the blade in Bay 7 do? Does it have bridging turned on? Your description seems to indicate so.
04-27-2010 06:28 AM
Re: Performance Degradation due to Inter-BladeServer Bandwidth Sharing
We observe that the lom:1-a port on each server receives all of the traffic instead of only the packets addressed to it. We suspect this is due to the bridged nature of the VC modules, but it causes terrible performance degradation. What should we do to maintain performance?
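One quick way to narrow this down is to classify the destination MAC addresses of the frames arriving on lom:1-a: the low bit of the first octet (the I/G bit) separates individual (unicast) addresses from group (multicast) addresses, and all-ones is broadcast. A minimal sketch in Python (the MAC strings below are made-up examples, not addresses from this enclosure):

```python
def classify_dst_mac(mac: str) -> str:
    """Classify a destination MAC as broadcast, multicast, or unicast."""
    octets = [int(part, 16) for part in mac.split(":")]
    if all(o == 0xFF for o in octets):
        return "broadcast"
    if octets[0] & 0x01:   # I/G bit set -> group (multicast) address
        return "multicast"
    return "unicast"       # individual address; if these are flooded,
                           # the bridge has not learned the MAC yet

# Example destination MACs as they might appear in a capture:
print(classify_dst_mac("ff:ff:ff:ff:ff:ff"))  # broadcast
print(classify_dst_mac("01:00:5e:00:00:fb"))  # multicast
print(classify_dst_mac("00:1b:78:aa:bb:cc"))  # unicast
```

If the flooded frames turn out to be unicast with real server-NIC destination addresses, the issue is unknown-unicast flooding rather than genuine broadcast traffic.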
04-27-2010 10:45 AM
Re: Performance Degradation due to Inter-BladeServer Bandwidth Sharing
Can you verify that the frames being received on all ports really carry unicast destination MAC addresses? Is the destination a special application-type MAC address, or is it really the MAC address of the server NICs?
VC operates like any bridge: it forwards unicast traffic to known, learned MAC addresses, and it floods unicast traffic destined for MAC addresses it has not yet learned out of all ports.
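The learn-then-forward behavior described here is standard transparent bridging, and a toy model makes the flooding condition concrete. This is an illustrative sketch of a generic learning bridge, not the actual VC firmware; the port and MAC names are placeholders:

```python
class LearningBridge:
    """Toy 802.1D-style bridge: learn source MACs, flood unknown unicast."""

    def __init__(self, ports):
        self.ports = set(ports)
        self.mac_table = {}  # MAC address -> port it was learned on

    def handle_frame(self, in_port, src_mac, dst_mac):
        """Return the set of ports the frame is sent out of."""
        self.mac_table[src_mac] = in_port  # learn the sender's location
        out_port = self.mac_table.get(dst_mac)
        if out_port is None or dst_mac == "ff:ff:ff:ff:ff:ff":
            return self.ports - {in_port}  # flood: every port but ingress
        return {out_port} if out_port != in_port else set()

bridge = LearningBridge(["bay1", "bay2", "bay7", "uplink"])
# Before bay2's MAC is learned, a frame for it floods to all other ports:
print(sorted(bridge.handle_frame("bay7", "mac-7", "mac-2")))  # ['bay1', 'bay2', 'uplink']
# Once bay2 has transmitted anything, the same frame is forwarded to one port:
bridge.handle_frame("bay2", "mac-2", "mac-7")
print(sorted(bridge.handle_frame("bay7", "mac-7", "mac-2")))  # ['bay2']
```

In other words, sustained flooding of unicast frames means the destination MACs never appear as *source* addresses on any bridge port, so the table entries are never created (or keep aging out).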
04-27-2010 11:16 AM
Re: Performance Degradation due to Inter-BladeServer Bandwidth Sharing
Let me guess: the constant flooding occurs because we are using the wrong MAC addresses? If yes, then another question: would it work to initially send some dummy packets from each MAC address to speed up the VC's learning?
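Priming a bridge's table this way is a real technique: since a bridge learns from *source* addresses, any frame sent from a NIC teaches the bridge where that MAC lives, and operating systems do something similar with gratuitous ARP when an interface comes up. As a sketch of what such a priming frame looks like on the wire, here is the byte layout of a gratuitous ARP request built by hand (pure construction, nothing is sent; the MAC and IP below are placeholders):

```python
import struct

def gratuitous_arp(src_mac: bytes, ip: bytes) -> bytes:
    """Build a gratuitous ARP request frame (broadcast dst, sender IP == target IP)."""
    bcast = b"\xff" * 6
    eth = bcast + src_mac + struct.pack("!H", 0x0806)  # dst, src, EtherType ARP
    arp = struct.pack("!HHBBH", 1, 0x0800, 6, 4, 1)    # htype=Ethernet, ptype=IPv4,
                                                       # hlen=6, plen=4, op=request
    arp += src_mac + ip + bcast + ip                   # sender and target MAC/IP pairs
    return eth + arp

frame = gratuitous_arp(b"\x00\x1b\x78\xaa\xbb\xcc", bytes([10, 0, 0, 7]))
print(len(frame))  # 14-byte Ethernet header + 28-byte ARP payload = 42
```

Sending one such frame per NIC at startup (or simply generating any outbound traffic from each interface) populates the bridge table before bulk traffic begins, though entries still age out if the NIC stays silent.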
04-27-2010 11:51 AM
Solution