BladeSystem - General
Performance Degradation due to Inter-BladeServer Bandwidth Sharing

 
SOLVED
Regular Advisor

Performance Degradation due to Inter-BladeServer Bandwidth Sharing

We are using a c3000 enclosure with 8 device bays and two interconnect modules (bay 1 and bay 2). We have defined 3 networks: Input, Process, and Output. The Input network uses the uplink in bay 2 to receive the incoming 10Gb traffic and dispatches it through its port on bay 1 (2-a) to the Process network. 7 blades on the Process network each receive the 10Gb bandwidth and send their results to the Output network (1-a, bay 1).
The problem is this: the bridging mechanism on the Flex-10 broadcasts the whole traffic to all ports, and this causes the performance degradation! Is there any way to control the traffic? Should we add interconnect modules or mezzanine cards, or something else?
I do apologize if this seems silly, but I'm a newbie with blade servers!
15 REPLIES
Honored Contributor

Re: Performance Degradation due to Inter-BladeServer Bandwidth Sharing

You have Flex-10 in interconnect bays 1 and 2? What blades and what mezzanine cards do you have?
Remember to give Kudos to answers! (click the KUDOS star)

You can find me from Twitter @JKytsi
Regular Advisor

Re: Performance Degradation due to Inter-BladeServer Bandwidth Sharing

Thanks for your reply! I have to correct my previous statement: we have just one interconnect module, in bay 1. I do apologize!
We use HP ProLiant BL460c G6 servers, with no additional mezzanine cards.
Honored Contributor

Re: Performance Degradation due to Inter-BladeServer Bandwidth Sharing

How are your VC networks configured ?
Remember to give Kudos to answers! (click the KUDOS star)

You can find me from Twitter @JKytsi
Regular Advisor

Re: Performance Degradation due to Inter-BladeServer Bandwidth Sharing

--------------------------------------
INPUT-Net: uplink: Bay1:X3, LAG ID: 25, 10Gb (LOM:1-a on the server in bay 7) receives the 10Gb traffic

PROCESS-Net: (LOM:1-a on the server in bay 7 and LOM:2-a on the other servers) carries the 10Gb traffic from the server in bay 7 to all servers

Management-Net: uplink: Bay1:X2, LAG ID: 27, 100Mb; LOM:1-b (on each server) is used for management purposes

OUTPUT-Net: uplink: Bay1:X2, 9.9Gb (LOM:1-b on each server except the server in bay 7)
----------------------------
The other servers are in bays 1, 2, 3, 4, 5, and 6.

HP VC Flex-10 Enet Module: Bay 1
------

I have also attached the Server Port Information as a txt file.
Honored Contributor

Re: Performance Degradation due to Inter-BladeServer Bandwidth Sharing

From what you have described, I don't think there is a bridging mechanism in VC that is causing this.

What does the blade in Bay 7 do? Does it have bridging turned on? Your description seems to indicate so.
Regular Advisor

Re: Performance Degradation due to Inter-BladeServer Bandwidth Sharing

The blade server in bay 7 receives the 10Gb network traffic on its LOM:1-a [Input-Network]. An application on that server runs an algorithm (it performs an exclusive OR on the source and destination IPs of each packet and targets one of the servers in bays 1-6 by its MAC address) and sends the packet to the appropriate server through its LOM:2-a (10Gb) [Process-Network]. Each server receives its packets on its LOM:1-a port (10Gb), performs the processing, and exports the result from its LOM:1-b (9.9Gb) [Output-Network].

We observe that every LOM:1-a port on each server receives the whole traffic instead of just the packets targeted at it. We guess this happens because of the bridged nature of the VC modules, but it causes terrible performance degradation! What should we do to keep up the performance?
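As an illustration, the XOR dispatch step described above might look like this (the MAC table and addresses below are placeholders, not the real blade MACs; the real values would come from each blade's LOM:1-a):

```python
import ipaddress

# Hypothetical table mapping hash buckets to the process blades' MACs
# (placeholders; the real MACs are read from each blade, e.g. via ifconfig).
PROCESS_BLADE_MACS = [
    "02:00:00:00:00:01",  # blade in bay 1
    "02:00:00:00:00:02",  # blade in bay 2
    "02:00:00:00:00:03",  # blade in bay 3
    "02:00:00:00:00:04",  # blade in bay 4
    "02:00:00:00:00:05",  # blade in bay 5
    "02:00:00:00:00:06",  # blade in bay 6
]

def target_mac(src_ip: str, dst_ip: str) -> str:
    """XOR the source and destination IPs and pick one of the six blades."""
    h = int(ipaddress.ip_address(src_ip)) ^ int(ipaddress.ip_address(dst_ip))
    return PROCESS_BLADE_MACS[h % len(PROCESS_BLADE_MACS)]
```

The same source/destination pair always maps to the same blade, so each flow stays on one server.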
Honored Contributor

Re: Performance Degradation due to Inter-BladeServer Bandwidth Sharing

VC should unicast-forward the traffic to the correct blade server port, assuming the server NIC's MAC address is the destination MAC in the frames.

Can you verify that the destination MACs of the frames being received on all ports are indeed unicast addresses? Are they a special application-type MAC address, or really the MAC addresses of the server NICs?

VC will operate like any bridge: it will unicast-forward traffic to known, learned MAC addresses and will flood unicast traffic to MAC addresses that it has not learned.
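The learn-and-flood behaviour described here can be modelled with a toy forwarding table (a simplified sketch of generic bridge behaviour, not VC's actual implementation):

```python
class LearningBridge:
    """Toy model of bridge behaviour: learn source MACs, unicast-forward
    frames to known destinations, flood unknown ones to every other port."""

    def __init__(self, ports):
        self.ports = set(ports)
        self.mac_table = {}  # MAC -> port it was learned on

    def handle_frame(self, src_mac, dst_mac, in_port):
        self.mac_table[src_mac] = in_port      # learn the sender's port
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]   # known: unicast forward
        return sorted(self.ports - {in_port})  # unknown: flood everywhere else
```

Until a blade's MAC has been seen as a *source* address, every frame destined to it goes to all ports, which matches the symptom being described.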
Regular Advisor

Re: Performance Degradation due to Inter-BladeServer Bandwidth Sharing

We set the destination MAC address for each blade according to the MAC address that the "ifconfig" command shows on that blade server. Should we use other MAC addresses? Do the LOMs have MAC addresses different from those local MAC addresses? How should we find the right MAC addresses then?
Let me guess: the constant flooding occurs because we indicate the wrong MAC addresses? If YES, then I have another question: would it make sense to initially send some dummy packets from each MAC address to speed up the VC's learning?
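For reference, a "dummy packet" that primes MAC learning is typically a gratuitous ARP sent by each blade, so the bridge learns that blade's source MAC early. A sketch of building such a frame (byte construction only, nothing is sent; the MAC and IP are placeholders):

```python
import struct

def gratuitous_arp(mac: str, ip: str) -> bytes:
    """Build a gratuitous ARP request: a broadcast frame whose sender and
    target IP are both our own, announcing mac/ip on every bridge port."""
    mac_b = bytes(int(b, 16) for b in mac.split(":"))
    ip_b = bytes(int(o) for o in ip.split("."))
    eth = b"\xff" * 6 + mac_b + struct.pack("!H", 0x0806)  # broadcast dst, EtherType ARP
    arp = struct.pack("!HHBBH", 1, 0x0800, 6, 4, 1)        # Ethernet/IPv4, opcode: request
    arp += mac_b + ip_b + b"\x00" * 6 + ip_b               # sender, then target (self)
    return eth + arp
```

Note that flooding will resume whenever a learned MAC ages out of the table, so priming only helps if the blades also transmit regularly.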
Honored Contributor
Solution

Re: Performance Degradation due to Inter-BladeServer Bandwidth Sharing

Ifconfig will show the MAC address being used by the LOM. That will be either the one burned into the LOM, or the one provided to it by VC if you have decided to use VC-assigned MAC addresses.
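As a quick cross-check on Linux, the in-use MAC can also be pulled from `ip link show` output; a small sketch (the sample output in the usage below is illustrative):

```python
import re

def mac_from_ip_link(output: str) -> str:
    """Extract the MAC (the address after 'link/ether') from the output of
    `ip link show <iface>`; this is the address the LOM is actually using,
    whether burned-in or VC-assigned."""
    m = re.search(r"link/ether\s+([0-9a-f:]{17})", output)
    if not m:
        raise ValueError("no link/ether line found")
    return m.group(1)
```

Whatever address this (or ifconfig) reports is the one the bridge will learn as the blade's source MAC.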
there is no rest for the wicked yet the virtuous have no pillows