HPE Synergy
04-13-2024 05:49 PM
Re: Synergy Server connections to vmware iSCSI and LACP
The general rule is not to mix LACP with iSCSI, although some storage vendors do support it on particular systems.
LACP introduces an extra layer of complexity between the Initiator and the target, and a range of variables you need to account for.
Generally speaking, MPIO does increase throughput and behaves more consistently during a link failure. Consider the following:
In VMware environments, using LACP introduces a greater potential for APD (All Paths Down) or PDL (Permanent Device Loss), because the time the network takes to re-converge after a link failure depends on the LACP timers themselves and on other features such as Spanning Tree.
If the host, interconnect modules, switches, and the storage appliance are all using LACP, then even with fast timers that is potentially a theoretical minimum of around 4 seconds just for LACP, with Spanning Tree re-convergence on top.
Usually this results in the host never receiving a PDL or other SCSI sense code; it eventually times out and marks the datastore as down, requiring manual intervention.
From experience, this is the real-world behavior: the initiator loses a link that happens to be carrying iSCSI traffic, and the delay from LACP and Spanning Tree re-convergence knocks the datastore offline. If there are running VMs, it can require force-powering them off before the host will mark the datastore as active again, or even a reboot of the host itself.
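To make the timer math concrete, here is a rough sketch of how those delays can stack. The LACP fast-rate figures follow the usual 802.1AX behavior (one LACPDU per second, partner expired after three missed PDUs); the Spanning Tree number is purely an illustrative placeholder, as real re-convergence time varies by topology and STP flavor.

```shell
# Back-of-envelope stacking of failure-detection delays when LACP sits
# underneath iSCSI. All figures are assumptions for illustration.
lacpdu_interval=1          # seconds between LACPDUs at LACP fast rate
missed_pdus=3              # short-timeout expiry after 3 missed PDUs
stp_reconverge=2           # illustrative Spanning Tree budget (varies)

lacp_detect=$((lacpdu_interval * missed_pdus))
total=$((lacp_detect + stp_reconverge))
echo "LACP detection alone: ${lacp_detect}s; with STP on top: ${total}s"
```

Compare that against typical ESXi iSCSI session timeouts in the 5-10 second range: even a couple of seconds of extra re-convergence eats most of the budget before the host ever sees a sense code.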
We have Synergy in our environment configured with port channels with fallback to LACP (SONiC) on our F32 modules. The blades themselves have multiple ports per mezzanine across multiple bays, and we don't use any LACP at the blade/host level. MPIO is used with round-robin to distribute I/O across the bound iSCSI adapter ports. We usually see 1-4 dropped I/O requests during a loss of path, but I/O sessions on other links are not impacted.
With LACP, throughput didn't improve (as expected), and the behavior during a loss of link almost always led to APD or PDL on the impacted host.
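For reference, the kind of MPIO setup described above can be sketched with esxcli. This is a hedged example, not our exact configuration: the software iSCSI adapter name (vmhba64), the VMkernel ports (vmk1/vmk2), and the naa. device ID are placeholders you would replace with values from your own environment.

```shell
# Hedged sketch: per-port iSCSI binding plus round-robin pathing on ESXi.
# Adapter, vmk, and device names below are placeholders, not real values.

# Bind two VMkernel ports (each pinned to a single physical NIC) to the
# software iSCSI adapter, so each becomes an independent path for MPIO:
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk2

# Set the round-robin path selection policy on the iSCSI device:
esxcli storage nmp device set --device=naa.xxxxxxxxxxxxxxxx \
    --psp=VMW_PSP_RR
```

With this layout a single link loss takes down one path; the remaining bound ports keep serving I/O, which matches the 1-4 dropped requests we see rather than a full APD/PDL event.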
This didn't account for iSER at the time we tested it.
The opinions expressed above are the personal opinions of the authors, not of Hewlett Packard Enterprise. By using this site, you accept the Terms of Use and Rules of Participation.
© Copyright 2024 Hewlett Packard Enterprise Development LP