
Configuring HP 1950 switch for NIC Teaming on Dell VMware vSphere ESXi

 
ryukifaiz
Occasional Visitor

Configuring HP 1950 switch for NIC Teaming on Dell VMware vSphere ESXi

Hi, my network has 2 IRF stacks: the 1st IRF has two core switches and the other IRF has 4 access switches; all of these switches are the 1950 model. Between the 2 IRFs, there are 2 links from core switch 1 connected to ports 47 and 48 of access switch 1, and 2 other links from core switch 1 connected to ports 47 and 48 of access switch 3. There are also 2 links from core switch 2 connected to ports 47 and 48 of access switch 2, and 2 other links from core switch 2 connected to ports 47 and 48 of access switch 4. All these links between the 2 IRFs are grouped into Bridge-Aggregation group 1 with a dynamic link-aggregation mode. My questions are:

1. If I have a server running VMware vSphere ESXi, and I want to group two ports on the server for NIC teaming, how should I configure the corresponding switch ports connected to the server?

2. Following on from Q1, if I were to use LACP, should the two ports be configured in Bridge-Aggregation group 1, or should I configure Bridge-Aggregation group 2 for the 2 ports?

3. Following on from Q2, if I have a 2nd ESXi server whose 2 NICs are to be teamed, should the corresponding switch ports be configured in Bridge-Aggregation group 1 or 2, or should I configure Bridge-Aggregation group 3 for those 2 ports?

4. If the switch end is using LACP for load balancing, what other things do I need to configure on the ESXi server, e.g. on a virtual distributed switch (vDS)?

5. Following on from Q1, can I configure the corresponding switch ports in a static LAG without LACP, and mix that with my existing network configuration (2 IRFs already connected together with LACP)?

Thanks.

jim

Ivan_B
HPE Pro

Re: Configuring HP 1950 switch for NIC Teaming on Dell VMware vSphere ESXi

Hi Jim!

About BAGGs there is one key rule - a BAGG can connect two devices. By device I mean a server, a standalone switch or an IRF stack (as IRF masks the physical switches and represents the whole stack as one device). For example, if you have BAGG1 which consists of 4 physical ports, all those ports should connect your IRF stack to one and only one ESXi server.
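
To picture that rule, here is a minimal Comware-style sketch (all interface and group numbers are hypothetical) of a separate dynamic BAGG connecting the stack to a single ESXi server, with one member port on each IRF slot:

system-view
interface Bridge-Aggregation10
link-aggregation mode dynamic
#
interface GigabitEthernet1/0/10
port link-aggregation group 10
#
interface GigabitEthernet2/0/10
port link-aggregation group 10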

Now to your questions:

1. ESXi supports different teaming configurations. One of them is so-called switch-independent teaming. This one does not require any configuration on the switch side except proper VLANs. There are other modes as well, especially on the Distributed vSwitch, which require either a static BAGG or a dynamic (LACP) BAGG on the switch side. As you can see, it is better to check the requirements from the ESXi side first and then adjust your IRF stack/-s to them. But from our side everything is possible - no BAGG, static BAGG, dynamic BAGG, whatever...
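
For example, if the host uses switch-independent teaming, the switch side carries no BAGG at all - just matching VLANs on each individual port. A sketch, assuming hypothetical ports and VLANs 1, 5 and 10:

interface GigabitEthernet1/0/10
port link-type trunk
port trunk permit vlan 1 5 10
#
interface GigabitEthernet2/0/10
port link-type trunk
port trunk permit vlan 1 5 10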

2. One BAGG per IRF<->host connection, as stated at the beginning of my message.

3. Same - one BAGG per IRF<->host connection, i.e. each ESXi server should be connected to the IRF stack using its own separate BAGG.

4. LACP is not used for load balancing. It has a totally different task - establishing and maintaining an aggregation. Regarding "what other things do I need to configure on the ESXi server, virtual distributed switch (vDS)", I think that question should be addressed to VMware. Basically you do not need any additional configuration to make load sharing work - it is enabled by default, but I hope you have correct expectations for it. I see you don't have much experience with link aggregations, so let me emphasize that they do not balance traffic equally and per packet, as many people think. Also, there is a rule in IRF called 'local first', which means that no matter what load-sharing algorithm you use for a BAGG, if traffic enters the stack on one IRF Slot, it will leave the IRF using a port on the same Slot. While on CLI-managed Comware switches this can be changed, the 1950 may not have a way to alter this behavior. But it has pros as well - this approach minimizes load on the IRF ports, which is why it was chosen as the default.
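
For reference, full Comware switches let you check the current hash criteria with a display command; whether the 1950's limited CLI exposes it is an assumption you would need to verify:

display link-aggregation load-sharing mode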

5. Not sure what you want to mix, but an IRF stack can have numerous static and dynamic BAGGs; just follow the rule I stated at the beginning - one BAGG per host connection. So you can have a BAGG (static or dynamic) between IRF stacks, a BAGG (static or dynamic) to one ESXi server (physical ports on the IRF side can and should be distributed among several IRF Slots), a BAGG (static or dynamic) to another ESXi server, and many other BAGGs to other PCs, servers, storage arrays, etc.

 

I am an HPE employee


ryukifaiz
Occasional Visitor

Re: Configuring HP 1950 switch for NIC Teaming on Dell VMware vSphere ESXi

Hi Ivan,

Thanks very much for your inputs. My mistake - LACP is for redundancy and increased bandwidth between two hosts. So can I say:

1. If I were to use one dynamic BAGG (LACP) to one of my servers and another LACP group to another server, will there be any connectivity between the different LACP groups in the end, or does the connectivity between them depend on the VLANs allowed over the trunk ports tied to each individual LACP group? The reason I am asking is that currently all hosts and servers on the network sit on the same default VLAN 1, but I have catered for trunk-allowed VLANs in the current LACP between the IRFs for future growth, so of course there will be connectivity - but will there be any connectivity between hosts and servers in different VLANs in the future, if they are segregated into their respective VLANs?

2. Since I have two core switches in 1 IRF, can I create another LACP group with 1 port on core switch 1 and 1 port on core switch 2, linked to 2 ports configured as a NIC team on a server? And another LACP group in similar fashion to another server?

Thanks.

jim

 

Ivan_B
HPE Pro

Re: Configuring HP 1950 switch for NIC Teaming on Dell VMware vSphere ESXi

1. A BAGG represents a bundle of physical ports as one logical Ethernet port. Sorry if it seems that I am going back to very basics, but this is another very important concept, as a couple of consequences follow from it. First, after configuring a BAGG and bundling physical ports into it, we manage that logical port like any other - we can make it access, so it carries only one untagged VLAN, or we can make it a trunk that carries multiple tagged VLANs and one untagged (PVID). We can add VLANs and remove them - you can do pretty much everything that you would do with an ordinary Ethernet port. Another important fact is that STP considers a BAGG a single port, so you do not have to worry about any blocked links, because those bundled ports are no longer considered redundant links. The third important fact is that MAC learning also considers a BAGG a single Ethernet interface. If you check the MAC address table, you will see that MACs learned on BAGGs have their 'outgoing interface' set to the BAGG, not a physical port (BAGG member).

Therefore, if you need one server connected over BAGG1 to have a connection to another server that is connected to the stack over BAGG2, you configure both BAGGs in the same way you would configure single ports - you assign the proper VLAN/-s to both BAGGs, and both servers will have a connection, thanks to the third fact I listed above - the way the MAC address table treats BAGGs. If the HPE guides are not clear enough to explain the idea of link aggregation, I highly recommend you google 'port-channels/etherchannels' in Cisco terminology, because there are TONS of training materials referring to Cisco gear for all kinds of audiences, and our BAGGs are nothing more than Layer 2 Port-Channels, if I need to speak Cisco language :-)
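
If you want to see this for yourself, the usual Comware command for the MAC table is shown below (its availability on the 1950's limited CLI is an assumption to verify). Entries learned over an aggregation should list the Bridge-Aggregation interface, not a physical member port, as the outgoing interface:

display mac-address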

2. Exactly! That is the recommended way to configure link aggregations in an IRF environment - each server connected by multiple physical ports bundled into one BAGG, with member ports on EACH IRF Slot. Each server in its own BAGG, of course. Thus you will achieve resiliency - if IRF Slot 1 goes down, half of the physical member ports of those BAGGs will remain up, because they are connected to Slot 2, which is still up and running. This scenario, together with single management, is a cornerstone of IRF and one of the main reasons people want to use it.

Hope this helps!

 

 

I am an HPE employee


ryukifaiz
Occasional Visitor

Re: Configuring HP 1950 switch for NIC Teaming on Dell VMware vSphere ESXi

Hi Ivan,

Thanks again for your inputs.

So if I were to create another LACP group (tied to ports Gi 1/0/48 and Gi 2/0/48) to a server, with traffic from VLANs 1, 5 and 10 allowed - am I right with the following configs?

interface Bridge-Aggregation2
port link-type trunk
port trunk permit vlan 1 5 10 
link-aggregation mode dynamic

interface GigabitEthernet1/0/48
port link-type trunk
port trunk permit vlan 1 5 10 
port link-aggregation group 2

interface GigabitEthernet2/0/48
port link-type trunk
port trunk permit vlan 1 5 10 
port link-aggregation group 2

And the configuration difference for a BAGG without LACP is just that there is no need to specify the link-aggregation mode?

interface Bridge-Aggregation2
port link-type trunk
port trunk permit vlan 1 5 10 

Thanks.

jimmy

Ivan_B
HPE Pro

Re: Configuring HP 1950 switch for NIC Teaming on Dell VMware vSphere ESXi

Yes, that is absolutely correct. And the only difference between a static (or so-called 'no protocol') aggregation and an LACP (dynamic) aggregation is that 'link-aggregation mode dynamic' line.

Just one hint in order to avoid unexpected issues. Sometimes such issues happen if you don't follow the order of configuration. They always disappear after a reboot and never appear again, because the commands in the 'saved-configuration' file are always applied in the correct order during the boot phase. But if you want to be 100% sure your BAGG will function correctly as soon as you configure it, the order is:

system-view
interface Bridge-Aggregation2
link-aggregation mode dynamic
#
interface GigabitEthernet1/0/48
port link-aggregation group 2
#
interface GigabitEthernet2/0/48
port link-aggregation group 2
#
interface Bridge-Aggregation2
port link-type trunk
port trunk permit vlan 1 5 10 

The logic is pretty simple. First, create the new BAGG and decide whether it will be dynamic or static. Then go to each physical member and make it a member of the BAGG. Then return to the BAGG and start configuring the link type and VLANs. At this step, all configuration changes you make in the BAGG's context will be reflected on the physical member ports.

 

I am an HPE employee


ryukifaiz
Occasional Visitor

Re: Configuring HP 1950 switch for NIC Teaming on Dell VMware vSphere ESXi

Hi Ivan,

Thanks again for your valuable inputs.

Let's say I want to achieve active-active NIC teaming and a policy for high availability and performance, where the two ports on the server end are set to active. With my dynamic LACP setup, what will be the expected negotiation result - it will be active-active, right? All the ports in that LACP group will send and receive traffic to/from all the ports in the NIC team?

https://docs.vmware.com/en/VMware-Validated-Design/5.0/com.vmware.vvd.sddc-design.doc/GUID-9C5CF99F-D3BC-4F77-B634-54BAE07A99A1.html

Thanks.

jim

Ivan_B
HPE Pro

Re: Configuring HP 1950 switch for NIC Teaming on Dell VMware vSphere ESXi

Unfortunately, I am not a VMware expert, and I don't see any technical details in that article that could help me determine what kind of teaming is used in that Active-Active teaming, but I don't like one sentence there - "Create a single virtual switch with teamed NICs across separate physical switches." 'Separate physical switches' does not sound like a link aggregation; it seems to be the switch-independent teaming I mentioned in one of my previous messages. Keep in mind that from the ESXi point of view your IRF stack is one physical device. So if you see something in the VMware docs mentioning different physical switches - it is not LACP, nor a static BAGG, for sure.

You need to find a VMware guide that describes how to set up an LACP aggregation between a vDS and a switch, no matter which vendor that switch is. Here is what I've found regarding LACP config on vSphere with a vDS - https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.networking.doc/GUID-34A96848-5930-4417-9BEB-CEF487C6F8B6.html. If you configure the vDS to negotiate teaming using LACP, then the dynamic BAGG on your IRF stack will gladly mark all the BAGG's ports 'Selected', which means they will all be eligible for traffic forwarding, i.e. from the IRF stack's perspective all the ports will be 'Active'.
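
On the switch side you can then confirm the negotiation result; assuming the 1950 exposes the standard Comware display command, all member ports of BAGG 2 should appear as Selected:

display link-aggregation verbose Bridge-Aggregation 2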

 

 

I am an HPE employee


Ivan_B
HPE Pro

Re: Configuring HP 1950 switch for NIC Teaming on Dell VMware vSphere ESXi

Maybe this document will be useful in your "LACP on VMware" journey - https://www.arubanetworks.com/techdocs/ArubaOS_86_Web_Help/Content/install-guide/virt-appl/appendix/nic-team-vswi.htm , section 'Creating a Distributed vSwitch Using vCenter with LACP Configuration'. As soon as you configure LACP on the vDS's uplink ports, they should negotiate with your IRF's dynamic BAGG to use all the physical ports that are members of the BAGG.

 

 

I am an HPE employee


parnassus
Honored Contributor

Re: Configuring HP 1950 switch for NIC Teaming on Dell VMware vSphere ESXi

If I'm not mistaken - and no matter whether you're dealing with a single standalone physical switch or with a logical entity like an IRF made of two or more physical switches clustered together - IF you need to deploy a link aggregation with LACP, you MUST use a virtual Distributed Switch (vDS) on the ESXi side (this requires a special license)... once the vDS is deployed, it's a matter of using Route Based on IP Hash (which is the policy that corresponds to LACP).

IF you instead deal with a virtual Standard Switch (vSS), THEN you can still use aggregations, BUT you must use non-protocol ones (and those should match the setting of the BAGGs on your switch/IRF)... so forget about LACP.

LACP/Static: please note that the only traffic that will be actively distributed (that is, per message) across aggregated physical links in an algorithmic way (L2/L3 or L4 hashing based on Source/Destination) is the outgoing traffic (the traffic egressing from the device/host), and this is true from the switch's standpoint and from the ESXi standpoint too... in other terms, you can't expect the ESXi to actively balance its incoming traffic (exactly as you can't expect the switch/IRF to actively balance its incoming traffic)... the ESXi and the switch/IRF will simply receive, on their respective LAG interfaces, the incoming traffic based on the algorithmic choices already made at the source (where that traffic egressed through a dynamic or a static link aggregation). That's a point most people don't think about much. That also means you can easily see traffic polarization (a particular member link of a LAG being used more than the other members), and that happens because the hashing algorithm computes which link the egressing messages should take based on Source/Destination details... the more those details vary, the more links can be used equally to send traffic away.
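
If polarization becomes a problem, full Comware 7 switches let you change the global hash criteria; whether the 1950's limited CLI offers this is an assumption to verify, but the command would look something like:

system-view
link-aggregation global load-sharing mode source-ip destination-ip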


I'm not an HPE Employee