LAN Routing

HPE 1950 Switch LAG issue with ESXi 5.0

 
SOLVED
azkerm
Advisor

HPE 1950 Switch LAG issue with ESXi 5.0

Hi There,

I recently bought a dual-port network interface card, as suggested on the HPE community, and installed it in the server. I can see the card on my ESXi system and have configured it on a standard virtual switch (vSwitch). I connected both interfaces to the switch and changed the NIC teaming load-balancing policy to "Route based on IP hash".

Meanwhile, I bought two HP 1950 OfficeConnect switches for the network and configured switch ports 5 - 6 for aggregation in Dynamic mode, as suggested by the HP documentation. As per both the HP and VMware documentation, I made sure the ports and the LAG interface are set to trunk. However, when I tested it from a VM, it doesn't work, and I'm not sure why.

One thing I should point out: the existing ESXi server's LAN interface is connected to a separate switch with no VLANs at all. So I looped an interface from the Cisco to HP switch port 8 with VLAN 40, defined the VLANs (40, 50, etc.) on the switch, and assigned the LAG and its corresponding ports to trunk mode. Then I went back to ESXi and created an interface tagging the VLAN. However, when I check the switch, it shows port 6 with nothing, as seen below.

[attached screenshot: switch.PNG]

My switch configuration on the ports.

vlan 1 
# 
vlan 10 
description MGT 
# 
vlan 20 
description WAN1
# 
vlan 30 
description WAN2
# 
vlan 40 
description LAN
# 
vlan 50 
description GUEST 
# 
interface Bridge-Aggregation1 
description Bridge-Aggregation1 Interface (LAN) 
port link-type trunk 
port trunk permit vlan 1 40 50 
link-aggregation mode dynamic 
# 
interface NULL0 
# 
interface Vlan-interface1 
ip address dhcp-alloc 
# 
interface Vlan-interface10 
ip address 192.168.99.10 255.255.255.0 
# 
interface Vlan-interface40 
ip address 10.1.3.20 255.255.0.0 
# 
interface GigabitEthernet1/0/1 
# 
interface GigabitEthernet1/0/2 
description GigabitEthernet1/0/2 Interface (WAN) 
port link-type hybrid 
undo port hybrid vlan 1 
port hybrid vlan 30 tagged 
port hybrid vlan 20 untagged 
port hybrid pvid vlan 20 
# 
interface GigabitEthernet1/0/3 
description GigabitEthernet1/0/3 Interface (WAN2) 
port access vlan 30 
# 
interface GigabitEthernet1/0/4 
description GigabitEthernet1/0/4 Interface (WAN1) 
port link-type hybrid 
undo port hybrid vlan 1 
port hybrid vlan 20 tagged 
port hybrid pvid vlan 20 
# 
interface GigabitEthernet1/0/5 
description GigabitEthernet1/0/5 Interface (LAN-MEM1) 
port link-type trunk 
port trunk permit vlan 1 40 50 
port link-aggregation group 1 
# 
interface GigabitEthernet1/0/6 
description GigabitEthernet1/0/6 Interface (LAN-MEM2) 
port link-type trunk 
port trunk permit vlan 1 40 50 
port link-aggregation group 1 
# 
interface GigabitEthernet1/0/7 
description GigabitEthernet1/0/7 Interface (ASKER) 
port link-type hybrid 
undo port hybrid vlan 1 
port hybrid vlan 10 20 30 50 tagged 
port hybrid vlan 40 untagged 
port hybrid pvid vlan 40 
# 
interface GigabitEthernet1/0/8 
port access vlan 40 
# 

If this is confusing, do let me know and I shall post a logical diagram of what I did to make it clearer.
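For reference, the LAG member state shown in the screenshot can also be checked from the switch CLI, which may reveal why port 6 isn't joining the aggregation (commands assume the Comware-style CLI that the 1950 exposes in limited form — verify against the 1950 manual):

```
display link-aggregation verbose Bridge-Aggregation 1
display link-aggregation member-port GigabitEthernet 1/0/6
```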

5 REPLIES
parnassus
Honored Contributor
Solution

Re: HPE 1950 Switch LAG issue with ESXi 5.0

Hello!

I stopped reading when I saw the statement:


azkerm wrote:

[...] Connected both interfaces to the switch, modifying the NIC teaming load balancing to "Route based on IP hash". [...] configured switch ports 5 - 6 for aggregation with Dynamic mode as suggested by the HP documentation. [...]

The key point here is that you shouldn't use Dynamic (LACP) as the port trunk type.

When you have VMware ESXi 5.x with a vSphere Standard Switch (VSS) and you configure NIC teaming with the load-balancing algorithm "Route based on IP hash" instead of the default one ("Route based on originating virtual port ID"), then you are required to configure static port trunking (so NO LACP) on the corresponding connected switch.
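On the switch side, turning Bridge-Aggregation1 into a static LAG would look roughly like this (a sketch based on the config posted above; on Comware, static is the default aggregation mode, so removing the dynamic setting should be enough — verify against the 1950 manual):

```
system-view
interface Bridge-Aggregation1
 undo link-aggregation mode    # revert from dynamic (LACP) back to static
```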

Try it.

Port trunking with LACP (Dynamic) works, as far as I know, only if you have a vSphere Distributed Switch (VDS), or if you are using the latest VMware ESXi 6, if I recall correctly.

Now I remember... I already wrote about that on your older post, exactly here.

Doesn't matter, two times is better than one!

azkerm
Advisor

Re: HPE 1950 Switch LAG issue with ESXi 5.0

Hi There, 

Thank you for the reply. Let me try this at office tomorrow and report back.

I currently have ESXi Enterprise Plus licensing, so I am eligible to create a VDS. However, certain posts advise making sure you can roll back if there's a failure, which I can't easily do as I'm running the ESXi host in production. I do understand things can go wrong at times; however, I do take backups of the VMs using ghettoVCB.

That apart, what I would like to know is: can I use teaming with static trunking + VSS? Or is it better to reconfigure the interfaces through a VDS while changing the switch LAG to dynamic?

parnassus
Honored Contributor

Re: HPE 1950 Switch LAG issue with ESXi 5.0

It depends on which type of hypervisor deployment you're designing.

If you are sticking with just a single ESXi host, the VSS probably fits the bill, and you can easily do NIC teaming with a static trunk.
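For the VSS side of that, the IP-hash policy can also be set from the ESXi shell (the vSwitch name `vSwitch1` here is an assumption — substitute your own):

```
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch1 --load-balancing=iphash
```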

If you have multiple ESXi hosts (or plan to have them in the near future), go with a VDS (in that case a VDS makes more sense) and do NIC teaming using the permitted Dynamic (LACP) trunk.

azkerm
Advisor

Re: HPE 1950 Switch LAG issue with ESXi 5.0

Yes! You are correct. I'm sticking with a single ESXi host, and I wanted good data access speeds, hence the teaming. So I guess I'll stick with static for the moment and see how it performs.

If I come across any issues, I shall review it then.

parnassus
Honored Contributor

Re: HPE 1950 Switch LAG issue with ESXi 5.0

Exactly!

AFAIK, regarding performance: remember that the more concurrent client hosts are accessing the ESXi host, the more useful NIC teaming with the "Route based on IP hash" load-balancing algorithm becomes, because the way ESXi distributes its outgoing traffic back to the clients comes into play heavily (the hashes will vary considerably, and the traffic streams will be distributed well across all physical aggregated links).

If you have, let's say, very few client hosts, that benefit will be slight.
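To see why more clients spread better, the uplink selection VMware describes for "Route based on IP hash" can be sketched roughly like this (a simplified model: XOR of the source and destination IPv4 addresses, modulo the number of active uplinks; the function name and example addresses are illustrative only):

```python
import ipaddress

def iphash_uplink(src_ip: str, dst_ip: str, n_uplinks: int) -> int:
    """Pick an uplink index from a src/dst IPv4 pair (simplified model)."""
    src = int(ipaddress.IPv4Address(src_ip))
    dst = int(ipaddress.IPv4Address(dst_ip))
    # XOR the two addresses, then take the result modulo the uplink count
    return (src ^ dst) % n_uplinks

# Two neighbouring clients talking to the same server can land on
# different members of a 2-link LAG:
print(iphash_uplink("10.1.3.20", "10.1.3.50", 2))  # → 0
print(iphash_uplink("10.1.3.21", "10.1.3.50", 2))  # → 1
```

A single client/server pair always hashes to the same uplink, which is why a lone heavy client never exceeds one link's bandwidth.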