BladeSystem Virtual Connect

VLANs for Internal-blade comms only

Occasional Advisor

VLANs for Internal-blade comms only



I am building a Hyper-V cluster that uses a converged network design. I have 2 x Cisco 3020 switches in bays 1 and 2; these will be used for our VM network traffic. In bays 5 and 6 are Virtual Connect modules for the private converged network traffic (live migration, cluster, storage). The live migration, cluster, and storage traffic are isolated from each other by VLANs. Each blade uses a mezzanine 2 dual-port 10 Gb card that is teamed and then split out as vNICs (virtual network adapters) by the hypervisor. The Virtual Connect modules have uplinks to upstream iSCSI switches. The live migration and cluster VLANs do not need to leave the c7000 chassis. Initially I set up Shared Uplink Sets with all of the VLANs defined on them and assigned them to the server profiles.
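For context, the converged design described above can be sketched in PowerShell on the Hyper-V hosts. This is a minimal illustration only: the team name, switch name, adapter names, and VLAN IDs are all assumptions, not values from this thread.

```powershell
# Hypothetical sketch of the converged design (all names/VLAN IDs are assumptions).

# Team the two 10 Gb mezzanine ports presented by Virtual Connect.
New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC3","NIC4" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# Bind a Hyper-V virtual switch to the team.
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" `
    -AllowManagementOS $false

# Carve out host vNICs for each traffic class and tag their VLANs.
foreach ($net in @{ LiveMigration = 20; Cluster = 30; iSCSI = 40 }.GetEnumerator()) {
    Add-VMNetworkAdapter -ManagementOS -Name $net.Key -SwitchName "ConvergedSwitch"
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName $net.Key `
        -Access -VlanId $net.Value
}
```

With this layout, each traffic class gets its own tagged host vNIC on the shared team, which is why the VLAN definitions inside Virtual Connect must line up with the tags the hypervisor applies.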


The problem is that my live migration and cluster vNICs cannot communicate across blade servers. Pings get no reply in either direction, but communication with the iSCSI network works correctly.


What is the best way to define the VLANs for the live migration and cluster networks within Virtual Connect?
Is the only option to use Shared Uplink Sets, or is there a better method recommended for internal blade-only communications?



Occasional Advisor

Re: VLANs for Internal-blade comms only

Just a quick update that I have now resolved this issue. It turns out I had to disable virtual machine queues (VMQ) on the physical Flex-10 NICs; this was causing the communication problem.
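For anyone hitting the same symptom, the VMQ fix above can be applied from PowerShell. A minimal sketch, assuming the Flex-10 ports can be matched by name (the "*Flex*" filter is an assumption; check Get-NetAdapter output on your hosts first):

```powershell
# Hypothetical example: disable VMQ on the physical Flex-10 ports.
# The "*Flex*" name filter is an assumption; verify with Get-NetAdapter first.
Get-NetAdapter -Name "*Flex*" -Physical | Disable-NetAdapterVmq
```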