Re: Nimble on Windows
06-16-2020 06:44 AM
How does the load-balancing algorithm work when a Windows client running the Nimble Windows Toolkit (NWT) is reading and writing data to a volume that is striped across a pool? Can NWT direct the writes to the array port that is local to the data blocks, or does the data sometimes traverse the intra-group network? Thank you for clarifying.
06-16-2020 09:07 AM
Solution
For hosts that do not have NCM and are connected to arrays running 2.x software, the host will not know which array to contact; the other arrays in the group forward the I/O to the owning array, which adds latency. However, if NCM (Nimble Connection Manager) is installed on the host, NCM takes care of automatic MPIO configuration and path management.
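A toy model of that forwarding penalty (purely illustrative; the function name and the 0.5 ms hop cost are invented for the example, not measured Nimble figures):

```python
def serve_io(receiving_array: str, owning_array: str,
             intra_group_hop_ms: float = 0.5) -> float:
    """Model the extra latency when I/O lands on a non-owning array.

    Without NCM, the host may send I/O to any array in the group. If that
    array does not own the blocks, it forwards the request over the
    intra-group network before it can be serviced, adding a hop.
    """
    return intra_group_hop_ms if receiving_array != owning_array else 0.0

# I/O sent straight to the owning array pays no forwarding cost:
print(serve_io("array-A", "array-A"))  # 0.0
# I/O sent to the wrong array is forwarded first:
print(serve_io("array-B", "array-A"))  # 0.5 (hypothetical hop cost)
```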
Nimble Connection Manager (NCM) is designed to simplify making and maintaining the optimal number of iSCSI connections (also known as iSCSI sessions) between the Windows host and the Nimble array.
The NCM feature brings everything you need to make the initial iSCSI connections into one place and helps you verify that the connections are correct. The NCM GUI makes the optimal number of connections to a target at the request of the user, instead of forcing the user to make each connection one at a time.
When the user asks NCM to connect to a Nimble volume, with one request NCM does all of the following (the subnet-matching step is sketched after the list):
• Gathers interface, subnet, and volume information
• Calculates the optimal number of connections
• Determines which host network interfaces are in the same subnets as the array network interfaces
• Attempts to make the optimal number of connections
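Here is a minimal sketch of that subnet-matching step, assuming a hypothetical `Nic` record for host and array interfaces (an illustration of the idea, not NCM's actual code):

```python
from dataclasses import dataclass
from ipaddress import ip_network

@dataclass
class Nic:
    address: str   # interface IP, e.g. "10.1.1.10"
    prefix: int    # subnet prefix length, e.g. 24

def same_subnet(a: Nic, b: Nic) -> bool:
    """True if two interfaces sit in the same IP subnet."""
    net_a = ip_network(f"{a.address}/{a.prefix}", strict=False)
    net_b = ip_network(f"{b.address}/{b.prefix}", strict=False)
    return net_a == net_b

def eligible_paths(host_nics, array_nics):
    """Pair every host NIC with every array data NIC it can reach directly."""
    return [(h, a) for h in host_nics for a in array_nics if same_subnet(h, a)]

# Example: two host NICs and two array data ports on matching subnets
host = [Nic("10.1.1.10", 24), Nic("10.1.2.10", 24)]
array = [Nic("10.1.1.50", 24), Nic("10.1.2.50", 24)]
paths = eligible_paths(host, array)
print(f"{len(paths)} candidate iSCSI sessions")  # 2; NCM would then cap this at its computed optimum
```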
NCM includes NCS (Nimble Connection Service), which calculates and maintains the optimal number of connections to each Nimble volume. NCS monitors changes to the host and to the array over time, and adjusts the optimal number of connections as needed. Host changes over time include changes to the network interfaces; array changes over time include adding an array, merging a pool, evacuating an array, and changing between manual and automatic iSCSI connection modes. NCS also compensates for manually added connections by automatically removing them, and for manually deleted connections by restoring them.
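That compensating behavior is essentially reconciliation toward a desired state. A rough sketch, with `connect` and `disconnect` as hypothetical callbacks rather than real NCS APIs:

```python
def reconcile(desired: set, current: set, connect, disconnect) -> None:
    """Drive the live iSCSI session set back toward the computed optimum."""
    for extra in current - desired:
        disconnect(extra)   # tear down manually added sessions
    for missing in desired - current:
        connect(missing)    # restore manually deleted sessions

# Example: the host should have sessions s1 and s2, but someone added s3
# by hand and removed s2.
reconcile({"s1", "s2"}, {"s1", "s3"},
          connect=lambda s: print("connect", s),
          disconnect=lambda s: print("disconnect", s))
```

NCS would rerun this kind of check whenever the host or the group configuration changes.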
NCM basically interacts with the group leader array, so it knows how the volume is laid out: which pool the volume was created in, and whether that pool is striped across more than one array. That makes it intelligent enough to manage where the data goes.
I work for HPE