HPE Synergy
daliborazure
Frequent Advisor

Synergy Server connections to vmware iSCSI and LACP

Hi,

I was informed by VMware (we will be using Synergy with ESXi servers) that configuring LACP on iSCSI ports wouldn't be a good idea.

https://hewlettpackard.github.io/hpe-solutions-hpecp/5.0/Physical%20environment%20configuration/Physical%20environment%20configuration.html#cabling-the-hpe-synergy-12000-frames-and-hpe-virtual-connect-40gb-se-f8-modules-for-hpe-synergy

 

This makes complete sense, but I am wondering whether we should use MC-LAG on the iSCSI connections at all. VMware strongly suggests it is not a good idea: since we already have iSCSI MPIO, putting a LAG on these ports would not be that beneficial.

 

What would your opinion be on that?

[Attachment: HPE iSCSI.JPG]


ChrisLynch
HPE Pro

Re: Synergy Server connections to vmware iSCSI and LACP

MPIO should be used for any block storage protocol connection.  LACP could be used, but it doesn't help with endpoint path management, only with the server-side connection, as LACP is a point-to-point protocol (NIC to adjacent switch port; the VC module in this case).  LACP is supported on the FlexNIC, but I would suggest disabling LACP on your iSCSI connections and using it only for other IP-based traffic.
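To illustrate, a minimal sketch of the MPIO side on the ESXi host, assuming two iSCSI vmkernel ports (vmk1, vmk2) already exist and the software iSCSI adapter comes up as vmhba64 (all names are placeholders for your environment):

# Enable the software iSCSI initiator
esxcli iscsi software set --enabled=true

# Bind each iSCSI vmkernel port to the adapter so the initiator
# creates one independent path per NIC (MPIO), with no LACP involved
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk2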

I work at HPE
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]
daliborazure
Frequent Advisor

Re: Synergy Server connections to vmware iSCSI and LACP

So basically, if we don't use LACP, is one uplink port per Virtual Connect interconnect sufficient?

Or would we still benefit from using two child ports without LACP?

iSCSI_A_network

Enclosure 1 Port Q5 FortyGigE1/1/5  

Enclosure 1 Port Q6 FortyGigE1/1/6

iSCSI_B_network

Enclosure 2 Port Q5 FortyGigE2/1/5  

Enclosure 2 Port Q6 FortyGigE2/1/6

ChrisLynch
HPE Pro

Re: Synergy Server connections to vmware iSCSI and LACP

Uplinks are completely different.  You should be using LACP with uplinks, regardless.  I was referring to Server Profile Connections.
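If your upstream switches are HPE Comware-based (the FortyGigE interface naming in your port list above suggests they may be), the LACP aggregation for the iSCSI_A_network uplinks might look roughly like the following; the Bridge-Aggregation group number is arbitrary, and this is a sketch, not a validated configuration:

interface Bridge-Aggregation5
 link-aggregation mode dynamic
quit
interface FortyGigE1/1/5
 port link-aggregation group 5
quit
interface FortyGigE1/1/6
 port link-aggregation group 5
quit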

I work at HPE
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]
daliborazure
Frequent Advisor

Re: Synergy Server connections to vmware iSCSI and LACP

So does that apply to storage connections as well? VMware told me not to use LACP for iSCSI communication; I can use LACP for any other connections, but not for block storage.

So LACP is OK to use for block storage on the uplinks. We don't use LAG connections in the server profile, so the uplinks can be a LAG even for iSCSI communication.

ChrisLynch
HPE Pro

Re: Synergy Server connections to vmware iSCSI and LACP

We don't use LAG connections in the server profile, so the uplinks can be a LAG even for iSCSI communication.

Correct.

I work at HPE
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]
daliborazure
Frequent Advisor

Re: Synergy Server connections to vmware iSCSI and LACP

The only reason I even brought up this question is that VMware told me:

"After talking with a storage colleague, storage performance issues occur when you have LACP protocols running on the network side and multipathing happening on the storage side."

This means no LACP, as storage performance can be affected. Of course, LACP is OK on any other uplinks, but they say not for the block storage connection.

ChrisLynch
HPE Pro

Re: Synergy Server connections to vmware iSCSI and LACP

I don't see how.  LACP is a very common protocol used by edge switch devices to increase the available bandwidth beyond the physical port limitations.  MPIO has no effect on these uplink ports.

I work at HPE
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]
cer113
Frequent Advisor

Re: Synergy Server connections to vmware iSCSI and LACP

This is the information that I got from a VMware support tech:

Using LACP with iSCSI is not a best practice.  Here is an additional doc on that subject:  https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.networking.doc/GUID-3FDE1E96-9217-4FE6-8B76-6E3A64766828.html

After talking with a storage colleague, storage performance issues occur when you have LACP protocols running on the network side and multipathing happening on the storage side.

Here is a VMware KB for best practice configuration of iSCSI:  Configuring iSCSI port binding with multiple NICs in one vSwitch for VMware ESXi (2045040), https://kb.vmware.com/s/article/2045040

 

Also, in these articles:

https://core.vmware.com/blog/iscsi-and-laglacp

https://kb.vmware.com/s/article/1001938

• Do not use for iSCSI software multipathing. iSCSI software multipathing requires just one uplink per vmkernel, and link aggregation gives it more than one.

Now, I may not fully understand what they are describing here, but to me these statements are quite clear.
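To make that last bullet concrete, a minimal sketch of the one-uplink-per-vmkernel rule from KB 2045040, assuming two standard-switch port groups named iSCSI-A and iSCSI-B backed by uplinks vmnic2 and vmnic3 (all names are placeholders; the KB also requires any other uplinks on these port groups to be marked Unused, not Standby):

# Pin each iSCSI port group to a single active uplink so its bound
# vmkernel port maps to exactly one physical NIC
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-A --active-uplinks=vmnic2
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-B --active-uplinks=vmnic3

With that in place, each vmkernel port is eligible for iSCSI port binding, and multipathing is handled by MPIO rather than by link aggregation.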

vitg
Senior Member

Re: Synergy Server connections to vmware iSCSI and LACP

The general rule is not to mix LACP with iSCSI. It is supported by some storage vendors on particular systems.

LACP introduces an extra layer of complexity between the Initiator and the target, and a range of variables you need to account for.

Generally speaking, MPIO does increase throughput and will behave more consistently during a link failure. Take the below:

In VMware environments, using LACP introduces a greater potential for APD or PDL, as the time the network takes to re-converge following a link failure depends on the LACP timers themselves and on other features such as Spanning Tree.

If the host, the interconnect modules, the switches, and the storage appliance are all using LACP, even with fast timers (with the short timeout, a partner is only declared down after 3 seconds without LACPDUs), that's potentially a theoretical minimum of 4 seconds just for LACP, and then you have Spanning Tree on top.

Usually this results in a host not receiving PDL or other SCSI sense codes; it will eventually time out and mark the datastore as down, requiring manual intervention.

From experience, this is the real-world behavior. The initiator loses a link that happens to be carrying iSCSI traffic, and the delay from re-convergence of LACP and Spanning Tree knocks the datastore offline. If there are running VMs, it can require force-powering them off before the host will mark the datastore as active again, or even a reboot of the host itself.

We have Synergy in our environment configured with port channels with fallback to LACP (SONiC) on our F32 modules. The blades themselves have multiple ports per mezzanine, across multiple bays, and we don't have any LACP at the blade/host level. MPIO is used with round-robin to distribute I/O across the bound iSCSI adapter ports. We usually see 1-4 dropped I/O requests during a loss of path, but I/O sessions on other links are not impacted.
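For reference, a sketch of enabling that round-robin distribution on an ESXi host; the device identifier is a placeholder for the actual iSCSI LUN:

# Show devices and their current path selection policy
esxcli storage nmp device list

# Set round-robin (VMW_PSP_RR) on a specific iSCSI device
esxcli storage nmp device set --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR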

With LACP, the throughput didn't improve (as expected), and the behavior during a loss of link almost always led to APD or PDL on the impacted host.

This didn't account for iSER at the time we tested it.