BladeSystem - General

Virtual Connect VLAN Tunnel - Failover (Firmware 3.75)

 
VIckzz
Occasional Contributor

Virtual Connect VLAN Tunnel - Failover (Firmware 3.75)

Hi,

 

We are trying to configure ESXi 5.1 hybrid networking with both a vDS and a vSS, using VLAN Tunnel mode because of the limit on the number of VLANs in a Shared Uplink Set (SUS).

 

We are currently running 4 uplink ports (2 per bay) connected to a Cisco Nexus 5000 Series access switch.

 

Tunnel-A (Bay 1: X1, X2)

Tunnel-B (Bay 2: X1, X2)

 

ESXi Server Profile:

[Screenshot: VLAN Tunnel server profile]

 

REASON - I cannot get 4 uplinks any other way than the above, because by default in VLAN Tunnel mode you only get 2 ports on the blade, which is really annoying when you are segregating traffic between Management and VMs.

 

Now the challenge: if I take out Bay 1, all traffic stops and the server connections drop, whereas it should keep working because Bay 2 is still connected to the network.

 

The same happens if I take out Bay 2.

 

It seems failover is not taking place.

 

This is necessary because I need at least 4 NICs on each server for the vSS and the vDS. I also cannot go beyond 4 uplinks due to a shortage of backend ports.
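
For reference, the host networking layout I am aiming for is roughly the following (the vmnic numbering is only an illustration of how ESXi enumerates the FlexNICs):

    vSS - Management       - vmnic0 (profile port 1, Bay 1), vmnic1 (profile port 2, Bay 2)
    vDS - VM port groups   - vmnic2 (profile port 3, Bay 1), vmnic3 (profile port 4, Bay 2)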

I am sure something is wrong but not sure what.


Can anyone help???

3 REPLIES
Thomas Martin
Trusted Contributor

Re: Virtual Connect VLAN Tunnel - Failover (Firmware 3.75)

Hello,

 

There is a little error in your profile for Ports 3 and 4: you should not cross-connect your network uplinks. If Tunnel_A goes through Bay 1 and you connect it to Port 4, which is wired to Bay 2, then you lose your network when you pull out Bay 2. At that moment Port 4 has no connection to the Tunnel_A uplinks, because pulling a module separates the cross-connect ports between the Flex modules. The traffic has to flow from Port 4 to the Flex module in Bay 2, from there over the cross-connect ports to the Flex module in Bay 1, and only then out through the external ports X1 or X2. Because that path depends on both modules, no failover can take place.
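
To make the path concrete, here is roughly how it looks (assuming, as in your profile, that odd profile ports are wired to the Bay 1 module and even ports to the Bay 2 module):

    Cross-connected (your current Port 4):
        Port 4 -> Flex module Bay 2 -> cross-connect -> Flex module Bay 1 -> Tunnel_A uplinks (Bay 1, X1/X2)
        This path needs both modules, so it dies if either Bay 1 or Bay 2 is pulled.

    North/south mapped:
        Port 3 -> Flex module Bay 1 -> Tunnel_A uplinks (Bay 1, X1/X2)
        Port 4 -> Flex module Bay 2 -> Tunnel_B uplinks (Bay 2, X1/X2)
        Each port only depends on the module it is wired to.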

 

Thomas

VIckzz
Occasional Contributor

Re: Virtual Connect VLAN Tunnel - Failover (Firmware 3.75)

Thanks Thomas. I know that, but I did it intentionally because Virtual Connect does not let me map the same VLAN tunnel (Tunnel-A) onto NIC #3 (LOM1b).

 

I need 4 NICs to be able to configure ESXi with both a vSS and a vDS.

 

any idea?

 

I have gone through hundreds of links and forums, but all I have learned is the following:

 

* You cannot have more than 2 NICs on a blade server profile with VLAN Tunnel mode (Tunnel1 - Bay 1 - X1, X2) (Tunnel2 - Bay 2 - X1, X2).

* HP has no clear documentation around this.

 

 

Casper42
Respected Contributor

Re: Virtual Connect VLAN Tunnel - Failover (Firmware 3.75)

Option 1.

Grab some 1Gb transceivers and put them into unused ports on the VC modules.

Configure a new Mapped mode SUS (You can mix Mapped and Tunneled in 3.30 or higher) to use these uplinks.

Configure a normal Ethernet Network (MGMT-VLANXX) in the SUS

Use that for ports 1 and 2 in your profile, and then use your tunnels for ports 3 and 4, but with the direction reversed as mentioned before.

I say to use an SUS here because you then can add other VLANs if you want down the road.

Also note that in 3.30 for Flex-10 and newer modules (basically not the 1/10) you can go into Ethernet Settings \ Advanced Settings (left nav bar, Domain Settings section) and enable Enhanced VLAN mode. This raises the limits to 1000 VLANs per VC domain (really 500 if you do A and B side Active/Active) and 162 (I think; the ? on that section of the screen will confirm) VLANs per port (all 4 FlexNICs) on the server profile. Tunneled networks still count as only 1 VLAN to VC.
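
If I am reading your setup right, the profile for Option 1 would end up something like this (the network name is just an example, and I am assuming the usual mapping of odd profile ports to Bay 1 and even ports to Bay 2):

    Port 1 (Bay 1) = MGMT-VLANXX  (mapped SUS, 1Gb uplink on the Bay 1 module)
    Port 2 (Bay 2) = MGMT-VLANXX  (mapped SUS, 1Gb uplink on the Bay 2 module)
    Port 3 (Bay 1) = Tunnel_A     (uplinks on Bay 1, X1/X2)
    Port 4 (Bay 2) = Tunnel_B     (uplinks on Bay 2, X1/X2)

Every port then depends only on the module it is physically wired to, so a single module failure still leaves one Mgmt NIC and one VM NIC up.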

 

 

Option 2, flip your profile around.

1 = Tunnel B

2 = Tunnel A

3 = Tunnel A

4 = Tunnel B

You will still lose connectivity to one set of vmnics entirely during a single VC failure/reboot, but at least now it's only the Mgmt interface and not all your VMs.

Since the VMs use ports 3 and 4, and those are properly north/south mapped (Tunnel A = port 3 = Bay 1 = uplinks on Bay 1), at least half of your VM uplinks will survive a single VC outage/reboot (e.g. a firmware update). Losing the Mgmt interface for 2-3 minutes during a VC reboot isn't horrible.
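
Roughly what Option 2 gives you during a single module outage (again assuming odd profile ports sit on Bay 1 and even ports on Bay 2):

    Bay 1 removed/rebooted:
        Port 1 (Bay 1, Tunnel B) - down, its own module is gone
        Port 2 (Bay 2, Tunnel A) - down, its uplinks (Bay 1, X1/X2) are gone
        Port 3 (Bay 1, Tunnel A) - down, its own module is gone
        Port 4 (Bay 2, Tunnel B) - stays up, NIC and uplinks are both on Bay 2
    Bay 2 removed/rebooted: the mirror image, only Port 3 stays up.

So Mgmt drops either way, but the VMs always keep one working uplink.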

 

 

Option 3, just use 2 FlexNICs with Mgmt, VMs, vMotion all in the same vSwitch and use NetIOC in the vDS to avoid congestion issues.
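
A rough sketch of what that could look like (the share values are only examples, not a recommendation for your environment):

    Profile: Port 1 (Bay 1) = Tunnel_A, Port 2 (Bay 2) = Tunnel_B -> the two vDS uplinks
    vDS: port groups for Mgmt, vMotion and the VM VLANs, all using those two uplinks
    Network I/O Control enabled on the vDS, e.g.:
        Management traffic       - shares: normal
        vMotion traffic          - shares: normal
        Virtual machine traffic  - shares: high

NetIOC then arbitrates bandwidth between the traffic types on the shared uplinks instead of carving it up with separate FlexNICs.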