03-04-2024 09:02 AM - last edited on 03-11-2024 10:45 PM by support_s
Synergy Server connections to vmware iSCSI and LACP
Hi,
I was informed by VMware (we will be using Synergy with ESXi servers) that configuring LACP on iSCSI ports wouldn't be a good idea.
This makes complete sense, but I am wondering whether we should use MC-LAG on the iSCSI connections at all. VMware strongly suggests this is not a good idea: since we already have iSCSI MPIO, putting these ports in a LAG would not add much.
What would your opinion be on that?
- Tags:
- Cable
- Synergy system
03-04-2024 10:03 AM
Query: Synergy Server connections to vmware iSCSI and LACP
System recommended content:
1. DEPLOYING AND UPDATING VMWARE ESXI ON HPE SERVERS
2. KB-000344 VMware ESXi All Paths Down/PDL
03-08-2024 08:52 AM
Re: Synergy Server connections to vmware iSCSI and LACP
MPIO should be used for any block storage protocol connection. LACP could be used, but it doesn't help with endpoint path management, only the server-side connection, as LACP is a point-to-point protocol (NIC to adjacent switch port; the VC module in this case). LACP is supported on the FlexNIC, but I would suggest disabling LACP on your iSCSI connections and using it for other IP-based traffic.
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]
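For reference, a minimal sketch of what "MPIO instead of LACP" looks like on the ESXi side, assuming the software iSCSI adapter and hypothetical names (vmhba64 for the adapter, vmk1/vmk2 for dedicated iSCSI VMkernel ports, one per FlexNIC):

# Bind each iSCSI VMkernel port to the software iSCSI adapter; the paths
# are then managed by the storage stack (MPIO), not by link aggregation.
esxcli iscsi networkportal add --adapter vmhba64 --nic vmk1
esxcli iscsi networkportal add --adapter vmhba64 --nic vmk2

# Verify the bound ports:
esxcli iscsi networkportal list --adapter vmhba64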

03-08-2024 09:17 AM
Re: Synergy Server connections to vmware iSCSI and LACP
So basically, if we don't use LACP, is one uplink port per Virtual Connect interconnect sufficient? Or would we still benefit from using two child ports without LACP?
iSCSI_A_network:
- Enclosure 1, Port Q5 -> FortyGigE1/1/5
- Enclosure 1, Port Q6 -> FortyGigE1/1/6
iSCSI_B_network:
- Enclosure 2, Port Q5 -> FortyGigE2/1/5
- Enclosure 2, Port Q6 -> FortyGigE2/1/6
03-08-2024 09:46 AM
Re: Synergy Server connections to vmware iSCSI and LACP
Uplinks are completely different. You should be using LACP with uplinks, regardless. I was referring to Server Profile Connections.
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]
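To illustrate the uplink side: as far as I understand, placing both Q-ports of a module in the same OneView uplink set enables LACP on them automatically, and the upstream switch end might look roughly like the sketch below. This assumes a Comware-based switch (a guess based on the FortyGigE port naming earlier in the thread), and the aggregation group number is arbitrary:

# Hypothetical Comware sketch for the switch end of the uplink LAG:
system-view
interface Bridge-Aggregation5
 link-aggregation mode dynamic
quit
interface FortyGigE1/1/5
 port link-aggregation group 5
quit
interface FortyGigE1/1/6
 port link-aggregation group 5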

03-08-2024 09:55 AM
Re: Synergy Server connections to vmware iSCSI and LACP
So does that apply to storage connections as well? VMware told me not to use LACP for iSCSI communication: I can use LACP for any other connections, but not for block storage.
So LACP is OK to use for block storage as long as we don't use LAG connections in the server profile, and uplinks can be a LAG even for iSCSI communication?
03-08-2024 09:56 AM
Re: Synergy Server connections to vmware iSCSI and LACP
"We don't use LAG connections in the server profile, so uplinks can be a LAG even for iSCSI communication."
Correct.
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]

03-08-2024 10:31 AM
Re: Synergy Server connections to vmware iSCSI and LACP
The only reason I even brought up this question is that VMware told me:
"After talking with a storage colleague, storage performance issues occur when you have LACP protocols running on the network side and multipathing happening on the storage side."
To me this means no LACP, as storage performance can be affected. Of course, LACP is OK on any other uplinks, but they say not for the block storage connection.
03-08-2024 11:40 AM
Re: Synergy Server connections to vmware iSCSI and LACP
I don't see how. LACP is a very common protocol used by edge switch devices to increase the available bandwidth beyond the physical port limitations. MPIO has no effect on these uplink ports.
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]

03-11-2024 06:05 AM
Re: Synergy Server connections to vmware iSCSI and LACP
This is the information I got from a VMware support tech:
Using LACP with iSCSI is not a best practice. Here is an additional doc on that subject: https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.networking.doc/GUID-3FDE1E96-9217-4FE6-8B76-6E3A64766828.html
After talking with a storage colleague, storage performance issues occur when you have LACP protocols running on the network side and multipathing happening on the storage side.
Here is a VMware KB for best practice configuration of iSCSI: Configuring iSCSI port binding with multiple NICs in one vSwitch for VMware ESXi (2045040), https://kb.vmware.com/s/article/2045040
Also, from these articles:
https://core.vmware.com/blog/iscsi-and-laglacp
https://kb.vmware.com/s/article/1001938
- Do not use for iSCSI software multipathing. iSCSI software multipathing requires just one uplink per VMkernel, and link aggregation gives it more than one.
Now, I may not fully understand what they are describing here, but to me these statements are quite clear.
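For what it's worth, the "one uplink per VMkernel" rule from KB 2045040 is usually implemented as a failover-order override on each iSCSI port group, roughly like this sketch (the port group and vmnic names are hypothetical):

# Pin each iSCSI port group to exactly one active uplink; an uplink not
# listed as active or standby becomes unused, so no aggregation or
# failover happens at this layer.
esxcli network vswitch standard portgroup policy failover set --portgroup-name iSCSI-A --active-uplinks vmnic2
esxcli network vswitch standard portgroup policy failover set --portgroup-name iSCSI-B --active-uplinks vmnic3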
04-13-2024 05:49 PM - last edited on 09-16-2024 02:21 AM by support_s
Re: Synergy Server connections to vmware iSCSI and LACP
LACP introduces an extra layer of complexity between the initiator and the target, and a range of variables you need to account for.
Generally speaking, MPIO does increase throughput and will behave more consistently during a link failure. Consider the following:
In VMware environments, using LACP introduces a greater potential for APD or PDL, because the time the network takes to re-converge following a link failure depends on the LACP timers themselves and on other features such as Spanning Tree.
If the host, interconnect modules, switches, and the storage appliance are all using LACP, even with fast timers, that's potentially a theoretical minimum of 4 seconds just for LACP; then you have Spanning Tree on top.
Usually this results in a host not receiving PDL or other SCSI sense codes, and it will eventually time out and mark the datastore as down, requiring manual intervention.
From experience, this is the real-world behavior: the initiator loses a link that happens to be carrying iSCSI traffic, then the delay from re-convergence of LACP and Spanning Tree knocks the datastore offline. If there are running VMs, it can require force-powering them down before the host will mark the datastore as active again, or even a reboot of the host itself.
We have Synergy in our environment configured with port channels with fallback to LACP (SONiC) on our F32 modules. The blades themselves have multiple ports per mezzanine, across multiple bays, and we don't have any LACP at the blade/host level. MPIO is used with round-robin to distribute I/O across the bound iSCSI adapter ports. We usually see 1-4 dropped I/O requests during a loss of path, but I/O sessions on other links are not impacted.
With LACP, throughput didn't improve (expected), and the behavior during a loss of link almost always led to APD or PDL on the impacted host.
This didn't account for iSER at the time we tested it.
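For context, a sketch of the round-robin MPIO setup described above, using a placeholder device identifier (naa.xxxx is not a real ID):

# Set the round-robin path selection policy on an iSCSI device:
esxcli storage nmp device set --device naa.xxxx --psp VMW_PSP_RR
# Optionally rotate paths after every I/O instead of the default 1000 IOPS:
esxcli storage nmp psp roundrobin deviceconfig set --device naa.xxxx --type iops --iops 1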
- Tags:
- Network Controller