BladeSystem - General


Occasional Advisor

Flex 10 Virtual Connect load balancing issue

Hi All,

I have c7000 Enclosure with 5x BL490c G6 and 2x VC Flex-10 Enet.

There is one 10GbE CX4 link from each VC module back to the Cisco core switch. The issue I have is that you can't create an active/active LAG group across the Virtual Connect modules, so I have set up each link in its own shared uplink set as a dot1q trunk, with Virtual Connect tagging the VLANs in auto mode. Essentially each link is a mirror of the other, with the same trunk and VLAN tags.

The server profile uses the "Multiple VLANs" option on the Flex NICs (for example, LOM:1a > VC1-VLAN76-A / LOM:2a > VC2-VLAN76-B, and so on) so that VC trunks the VLANs down to the blade, which is running ESX, and the vSwitch can read the tags.

The issue I've got is that load balancing in ESX won't work correctly without IP hash, and I don't have any LACP/LAG because there is only a single link on each VC module; I don't think VC supports IP hash anyway...?

Has anyone set this up with single links on each VC so that you get load balancing? I have managed to get fault tolerance working on the vSwitch, but normal traffic just goes down one VC link instead of sharing the load.

Attached is a diagram showing what I have set up.
Honored Contributor

Re: Flex 10 Virtual Connect load balancing issue

Hi Nick.

I don't have any Flex-10 VC modules, so I may be misunderstanding their capabilities; however, looking at your diagram, I only have two comments.

You identify your Shared_Uplink_Set as being Bay n, Port X1, which looks correct if I follow the red line back to the module, and all of your VLANs look to be correctly assigned. What I don't understand is that the connection to the Cisco switches (yellow line) is through a different, unidentified 10G port, which is not part of the uplink set.

The second question is about your "multiple VLANs" statement (and I apologize if this is obvious to a Flex-10 user): I am not sure whether the onboard NICs are LOM1 & LOM2, with ports a, b, c, etc., or LOMa & LOMb with ports 1, 2, 3, etc.

I am very interested in this post because we are considering investing in Flex-10s for future blade enclosure deployments.

Occasional Advisor

Re: Flex 10 Virtual Connect load balancing issue

Hi Dave

The red line is just a label for the VLANs the shared uplink set is tagging on each of the orange CX4 links.

The red line doesn't depict any physical cabling; I should have made that clearer, the diagram was very rough this morning :P

It's a really basic setup: one 10GbE CX4 link from one Cisco module to the 10GbE X1 port on the Virtual Connect module for side A (left),
and exactly the same for side B (right).

The issue is that when you have VC modules in bays 1 and 2 on the back of the c7000 enclosure, they are TWO separate switches and you cannot aggregate across them. If you try with Virtual Connect, one of the links becomes failover (standby) and is not used; traffic from the standby side crosses the VC backplane and exits via whichever side holds the primary link in the shared uplink set. I would need two links on each Virtual Connect module, but they only come with one CX4 port each, and the rest of the 10GbE ports are SFP+ fibre, so you'd need to run fibre from the core to the chassis.

Essentially my management don't want to do this; they want to get use out of both 10GbE links, and it's a big waste to use one 10GbE link as pure failover/standby.

So what I have done is create a shared uplink set containing only one port on the right VC, and the same for the left VC.
The same VLANs are tagged on each side, and I create a server profile on the blade to reflect this.

Example: physically, the blade in bay 9 has two Flex NICs (they all do, with the two VC modules and the BL490c G6). That gives 8 LOMs (LAN on motherboard): four on the 1st Flex NIC, which goes to the right-side VC, and four on the 2nd Flex NIC, which goes to the left-side VC.

Now in the server profile you have these NICs and you can send the tagged VLANs down them. If you use the "Multiple VLANs" option it essentially passes the dot1q trunk to the blade, like so:

VC1-VLAN76-A 'multiple vlan' >> LOM:1-a => Bay 1

VC2-VLAN76-B 'multiple vlan' >> LOM:2-a => Bay 2

and so on. You can specify multiple tagged VLANs that are defined in the shared uplink sets, but they can also be single VLANs; VLAN76 above is my management VLAN, hence only the one VLAN, but it gives us both 10GbE links. Basically what we're doing is creating dot1q mirrors on each side with single 10GbE CX4 links.
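The mirrored setup above can be modelled in a few lines, to make the invariant explicit: both shared uplink sets must tag exactly the same VLANs, or a failover to the other side would strand some traffic. The names and structure here are mine, purely illustrative, not any VC or HP API:

```python
# Hypothetical model of the two mirrored shared uplink sets:
# one CX4 uplink per VC module, identical VLAN tags on each side.
uplink_sets = {
    "VC1-Bay1-X1": {"vlans": {76}, "loms": ["LOM:1-a"]},
    "VC2-Bay2-X1": {"vlans": {76}, "loms": ["LOM:2-a"]},
}

def is_mirrored(sets):
    """True when every shared uplink set tags the same VLAN set,
    i.e. either side can carry any VM's traffic after a failover."""
    vlan_sets = [frozenset(s["vlans"]) for s in sets.values()]
    return len(set(vlan_sets)) == 1
```

A quick sanity check like this is worth running whenever a VLAN is added to one side, since nothing in the setup enforces the mirror automatically.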

Now with ESX and the vSwitch I've got this set up and working with fault tolerance; you have to use beacon probing on your vSwitch or vSwitches, since link-status failover detection doesn't work in this setup.
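On the beacon-probing point: link status only watches the blade-to-VC link, so a dead VC-to-Cisco uplink would go unnoticed, while beacons sent out each uplink and expected back on the others catch a failure anywhere on the path. Note beacon probing is most reliable with three or more uplinks; with exactly two, neither side can tell which path broke. A minimal sketch of the decision logic (the function name and data shape are mine, not VMware's):

```python
def failed_uplinks(beacons_received):
    """Given, per uplink, the set of peer uplinks whose beacons it
    heard, flag uplinks that hear nobody as failed.

    Illustrative model of vSwitch beacon probing: each uplink
    broadcasts a probe frame; an uplink that receives no probes
    from any peer is assumed broken somewhere upstream, even if
    its local link light is still on.
    """
    return {up for up, heard in beacons_received.items() if not heard}

# vmnic1's upstream path is broken: it hears no beacons (and its
# peers no longer hear it), even though its local link is still up.
state = {"vmnic0": {"vmnic2"}, "vmnic1": set(), "vmnic2": {"vmnic0"}}
```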

I am trying to see if the default load balancing based on originating virtual port ID in ESX is working; you can't use IP hash because that requires LAG/LACP or EtherChannel.

The reason I posted was to try to find other ways to do load balancing / LAG with single links on each VC. It sucks that the chassis doesn't see the two modules as one switch!!
New Member

Re: Flex 10 Virtual Connect load balancing issue

Hi, I am new to this forum and do not know how to directly contact either party that responded to this post. I also have a c7000 with BL490c G6 servers and will be using ESX 3.5 Update 4. I would like to talk about network configs but need lots of guidance; can I contact either of you directly? My info is
Honored Contributor

Re: Flex 10 Virtual Connect load balancing issue


The IP hash load balancing method in ESX is not compatible with VC. IP hash requires that all the vmnics in the vSwitch be connected to the same external switch, and that external switch would need to be configured for static 802.3ad port trunking (or equivalent).
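To see why IP hash needs all uplinks on one logical switch: the uplink choice is a pure function of the source/destination address pair, so a given MAC can show up on any member link and the upstream side must treat them as one channel. A rough Python sketch of the idea, using an XOR of the two addresses modulo the uplink count; this approximates the commonly described behaviour and is not VMware's exact formula:

```python
import ipaddress

def ip_hash_uplink(src_ip: str, dst_ip: str, num_uplinks: int) -> int:
    """Pick an uplink index from the src/dst IP pair.

    Illustrative only: approximates how ESX's "Route based on
    IP hash" policy spreads flows across uplinks.
    """
    src = int(ipaddress.ip_address(src_ip))
    dst = int(ipaddress.ip_address(dst_ip))
    return (src ^ dst) % num_uplinks

# Different destinations land on different uplinks, which is why
# all uplinks must terminate on ONE (logical) switch configured
# for static 802.3ad: any member port may carry any flow.
flows = [("10.0.0.5", "10.0.0.%d" % h) for h in range(1, 5)]
picks = [ip_hash_uplink(s, d, 2) for s, d in flows]
```

With the two separate VC modules in this thread, the two uplinks terminate on different switches, so a flow hashed to the "wrong" link would present the server's MAC on an unexpected switch, which is exactly what the poster is warning against.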

Route based on originating virtual port ID will work fine in a VC environment.
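It may help to spell out why virtual port ID works where IP hash doesn't: each VM's vNIC is pinned to a single uplink when it connects, so no flow is ever split across the two VC modules and each MAC only ever appears on one upstream switch. A hedged sketch; the round-robin assignment below is my assumption for illustration, ESX's actual placement logic is internal:

```python
def assign_uplinks(port_ids, num_uplinks):
    """Pin each virtual switch port to one uplink, round-robin style.

    Sketch of ESX's "Route based on originating virtual port ID":
    balancing happens per-VM rather than per-packet, so with enough
    VMs both VC uplinks carry traffic, yet any one VM's MAC only
    ever appears on a single upstream link.
    """
    return {pid: i % num_uplinks for i, pid in enumerate(sorted(port_ids))}

# Four VM ports spread across the two single-link VC uplinks.
mapping = assign_uplinks([10, 11, 12, 13], 2)
```

The trade-off is granularity: one busy VM can still saturate its pinned 10GbE link, but across a typical mix of VMs both CX4 links see traffic, which answers the original poster's goal of not leaving one link idle.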