BladeSystem - General

Overcoming Virtual Connect 320 VLAN limits in mapped mode?

 
chuckk281
Trusted Contributor


Arnold was looking for help:

 

*****************

 

Hi,

 

We’re working on an architecture to provide IaaS to end-users in a public cloud model.

We want to build this on a combination of Matrix, CSA and AP4S.

 

To guarantee security between customers' VMs (the multi-tenancy aspect), we need to use a unique VLAN ID per customer to host their VMs and separate them from the others.

Now, from what I understand, Virtual Connect should be in mapped mode to support VLANs. But the limit today is 320 unique VLANs per Virtual Connect pair.

 

If we want to host hundreds (>320) of VLANs in a single enclosure with a pair of Virtual Connects, how do we solve this?

I know for SFR we used Cisco Nexus 1000v and switches to do this, but do we have an alternative solution?

 

I’m not a networking expert, so forgive me if I’m talking nonsense here. :)

 

Hartelijke groet / Kind Regards,

 

******************

 

First Denis responded:

 

******************

 

Hello Arnold,

 

I am not a specialist in Virtual Connect, but from what I have read in the Virtual Connect Cookbook, VLAN tunneling mode should allow you to support more than 320 VLANs.

With VLAN tunneling, the server needs to interpret the VLAN tag itself.
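To see what "interpreting the VLAN tag" means for the host, here is a minimal Python sketch (not VC-specific, just the 802.1Q wire format) that builds and parses the 4-byte tag. The 12-bit VLAN ID field allows up to 4094 usable VLANs, which is why tunneling the tags through VC sidesteps the 320-entry mapping limit; the VLAN IDs below are made-up examples.

```python
import struct

TPID = 0x8100  # 802.1Q Tag Protocol Identifier

def add_vlan_tag(ethertype: int, vlan_id: int, pcp: int = 0) -> bytes:
    """Build the 4-byte 802.1Q tag plus the original EtherType."""
    tci = (pcp << 13) | (vlan_id & 0x0FFF)  # 3-bit priority + 12-bit VLAN ID
    return struct.pack("!HHH", TPID, tci, ethertype)

def parse_vlan_id(tag: bytes) -> int:
    """Extract the 12-bit VLAN ID -- what the host's vSwitch must do in tunnel mode."""
    tpid, tci, _ethertype = struct.unpack("!HHH", tag)
    assert tpid == TPID, "not an 802.1Q-tagged frame"
    return tci & 0x0FFF

# Hypothetical customer VLAN 512, carried inside an IPv4 frame
tag = add_vlan_tag(0x0800, vlan_id=512)
print(parse_vlan_id(tag))  # 512
```

In mapped mode VC itself reads this field to map server-side VLANs to uplink VLANs (hence the table limit); in tunnel mode the tag passes through VC untouched and the hypervisor's virtual switch does the parsing.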

 

http://intranet.hp.com/sites/VirtualConnect/Pages/index.aspx

 

The cookbook (see pages 10, 11 and 12).

 

http://h20000.www2.hp.com/bc/docs/support/SupportManual/c02616817/c02616817.pdf

 

I’ll let the Virtual Connect experts elaborate further.

 

***********************

 

Then Vincent joined the conversation:

 

*******************

 

The only alternative today would be to configure Virtual Connect in tunnel mode.

It may or may not be a problem in your use case: the main undesirable consequence of tunnel mode is that if a server sends a broadcast frame on a VLAN, VC will forward it to all servers connected to the tunnel network, regardless of their VLAN attachments. The frames won’t actually reach the VMs that don’t have an interface in that VLAN, but they will occupy some of the bandwidth of the physical hosts.

Now if your enclosure contains only virtualization hosts that are all configured the same network-wise, i.e. connected to all the VLANs on the premise that VMs from all customers might run on any of the hosts, the problem above would occur even in mapped mode or with a traditional switch, so it probably wouldn’t be considered a problem. If customer VMs are segregated between different hosts with different VLAN attachments, then your customer probably wouldn’t like tunnel mode.
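Vincent's point about broadcast flooding can be made concrete with a toy Python model (an illustration only, with invented host names, not a VC configuration): in tunnel mode VC forwards a broadcast to every host on the tunnel network and filtering happens on the host, while in mapped mode VC only forwards it to hosts attached to that VLAN.

```python
def deliver_broadcast(mode: str, src_vlan: int, hosts: dict) -> set:
    """Return the hosts whose downlink carries a broadcast sent on src_vlan.

    hosts maps a host name to the set of VLAN IDs its VMs use.
    """
    if mode == "tunnel":
        # VC floods the frame to every host on the tunnel network;
        # each host's vSwitch later drops frames for VLANs it doesn't use,
        # but the frame has already consumed downlink bandwidth.
        return set(hosts)
    if mode == "mapped":
        # VC itself filters per VLAN: only attached hosts receive the frame.
        return {h for h, vlans in hosts.items() if src_vlan in vlans}
    raise ValueError(f"unknown mode: {mode}")

# Hypothetical enclosure: esx2 hosts no VMs on VLAN 100
hosts = {"esx1": {100, 200}, "esx2": {300}, "esx3": {100}}
print(sorted(deliver_broadcast("tunnel", 100, hosts)))  # ['esx1', 'esx2', 'esx3']
print(sorted(deliver_broadcast("mapped", 100, hosts)))  # ['esx1', 'esx3']
```

This shows the trade-off Vincent describes: in tunnel mode esx2 receives the VLAN 100 broadcast even though none of its VMs will see it, whereas if all hosts were attached to all VLANs anyway, both modes would flood identically and tunnel mode costs nothing extra.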

 

**********************

 

Other comments or suggestions?