HPE Storage Tech Insiders

Nimble OS 2.0 Part 1: Manual vs Automatic Networking

ndyer39

Welcome everyone to the first in a series of Nimble OS 2.0 "what's new" blog posts. Today we're going to focus on Manual vs Automatic Networking, and what that means to you.

In releases of Nimble OS before 2.0, provisioning and connecting to Nimble volumes was very much a manual process. Each host discovered Nimble volumes via the iSCSI Discovery IP address, which is a virtual IP roaming across the data ports. We would then create a Nimble volume, present it to an iSCSI initiator within the host, and then bind each host NIC to each data IP address on the array for multipathing. This was done manually by the end user, or semi-automated by using scripts (two examples are here for VMware, and here for Windows).

Here's an example of a VMware host connecting to a Nimble array with a single IP subnet. Notice that 8 data connections are created per volume, 4 for each NIC, and these are connected across both switches.

[Image: Manual Mode]
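
To make the session math concrete, here's a minimal Python sketch (purely illustrative - the NIC names, data IPs, and volume name below are made up) of how manual mode ends up creating one iSCSI session for every host NIC / data IP pair, per volume:

```python
# Illustrative only: in manual mode, every host NIC is bound to every
# array data IP for each volume, so sessions multiply quickly.
from itertools import product

host_nics = ["NIC A", "NIC B"]                          # hypothetical host ports
array_data_ips = ["192.168.50.101", "192.168.50.102",   # example data IPs
                  "192.168.50.103", "192.168.50.104"]
volumes = ["datastore01"]                               # hypothetical volume

sessions = list(product(volumes, host_nics, array_data_ips))
print(len(sessions))  # 1 volume x 2 NICs x 4 data IPs = 8 iSCSI sessions
```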

The downsides to mapping data and configuring servers this way are:

  • Lots of iSCSI sessions are created for each volume, which can cause a problem for VMware datastores.
  • Confusion reigns as to how many connections we should expect to see per volume/host/array - especially with hypervisors such as vSphere and Hyper-V.
  • Manual MPIO configuration is not the easiest.
  • Scripting the MPIO configuration helps, but on certain systems the configuration does not persist across reboots (i.e. VMware again).
  • The host has no intelligence as to which of its ports it should use to access the data, so it may use a port which is already overwhelmed with data, suffering latency, or building up disk queues.
  • The host may also end up routing data over an ISL link from one switch to another to reach a specific data IP address - again, it has no knowledge of how the array's IP layout maps to the switches. This can saturate ISL links between switches and cause latency and IO problems.

Therefore a big change in Nimble OS 2.0 is the introduction of "Automatic Mode" of networking, which solves a lot of the above.

Automatic mode works in conjunction with a couple of other features: the Nimble Connection Manager - NCM for short (a host-based tool for connecting and managing volumes/multipathing within VMware and/or Windows) - and Virtual Target IP addresses (or VTIPs for short).

EDIT: In Nimble OS 2.1 we have merged the VTIP functionality into the Discovery IP and deprecated the VTIP, as very often these would be the same IP, and more people are familiar with the Discovery IP than the VTIP.

A VTIP is a Virtual Target IP address which, like the Discovery IP, roams virtually across the data ports on the Nimble array. In a single-subnet configuration it often assumes the same IP address as your Discovery IP to make things easier (in a dual-subnet configuration you should create a VTIP for each subnet). The VTIP effectively becomes a single point of management for Nimble connections and multipathing - meaning that whenever a volume is created, only that IP address needs to be considered when establishing the handshake from the server to the array.

The Nimble Connection Manager then works in conjunction with the new networking intelligence built into Nimble OS 2.0 to create, manage, rebalance, or disconnect iSCSI connections between the host NICs and the data NICs on the Nimble array, all on the fly.

[Image: Automatic Mode]

Take the example above. We are now using a VTIP of 192.168.50.100, and the Nimble Connection Manager has automatically created two iSCSI sessions for me - NIC A going to 192.168.50.101 and NIC B going to 192.168.50.102. Also notice that these sessions are created on their own local switches, rather than reaching across the stack to bind to ports on the other switch. This is what IP Address Zones provide - a neat way to separate the switches from each other, either Bisect (i.e. 50.1 - 50.127 on switch A, 50.128 - 50.255 on switch B) or Even/Odd (switch A only ever has even IP addresses, and switch B only has odd IP addresses).
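
Purely as an illustration of the zoning idea (not the array's actual algorithm), here's a small Python sketch that splits data IPs between two switches by their last octet, following the Bisect and Even/Odd schemes described above:

```python
import ipaddress

def switch_for(ip, scheme="bisect"):
    """Assign a data IP to switch 'A' or 'B' under the two zone schemes above (rough sketch)."""
    last_octet = int(ipaddress.ip_address(ip)) & 0xFF
    if scheme == "bisect":    # 50.1 - 50.127 on switch A, 50.128 - 50.255 on switch B
        return "A" if last_octet < 128 else "B"
    if scheme == "evenodd":   # switch A gets even last octets, switch B gets odd
        return "A" if last_octet % 2 == 0 else "B"
    raise ValueError(f"unknown scheme: {scheme}")

for ip in ("192.168.50.101", "192.168.50.102", "192.168.50.200"):
    print(ip, "-> switch", switch_for(ip, "bisect"))
```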


Here's how it looks in the new Nimble OS 2.0 Networking tab. Notice that my VTIP is the same as my Discovery IP address, to keep configuration easy. Automatic mode is enabled, as is rebalancing.

[Screenshots: Nimble OS 2.0 Networking tab]


Note: An array newly upgraded from 1.4 -> 2.0 will always be in Manual mode by default, to ensure legacy connections are not invalidated - only switch to Automatic mode once NCM is installed and ready to use.

Automatic mode really comes into its own when Scale-Out is implemented, as data may now be distributed across multiple systems. NCM understands this and can connect directly and rebalance across the two/three/four systems on the fly, without having to rely on iSCSI redirects, which would introduce latency for reads and writes. This works because the VTIP now spans all arrays in the group, so it remains the only IP address you need to know regardless of the number of systems in the group.

[Image: Automatic Mode with Scale-Out]

See above - we are still presenting over 192.168.50.100, yet the Nimble Connection Manager has now created iSCSI sessions to both systems, for both NIC A and NIC B, on the fly in order to access that volume - again understanding how the switch stack is mapped out, without any manual configuration or any direction as to how the server should access its data.

I hope you found this blog post useful - if you have any questions please ask them below. Also please consult the Nimble OS 1.4.x -> 2.0 upgrade guide for more information & guidance before upgrading.


Comments
jliu79

Hi Nick, thanks for the post - it helps me better understand how it works. I still have a question: what kind of downtime should I expect when switching from manual to automatic? I know installing the 2.0 toolkit will require the servers to be restarted; is there any other change that will result in disconnecting the server from the array?

ndyer39

Hi Jason. Thanks for the question.

There should be no downtime when flipping the switch from manual to automatic, as long as your Virtual Target IP address is the same as your Discovery IP address, because any new iSCSI connection request will now go to this IP address rather than anything else (i.e. NOT a physical NIC data IP address!). I believe any current connections to the array are retained until something like a reboot occurs and a new connection request comes in.

marktheblue45

A picture speaks a thousand words.... Sometimes.

marktheblue45

It would be desirable, if the precheck fails, to get a little bit more information from the Web GUI. NOS 2.x sure takes the hassle out of setup and MPIO. A BIG PLUS.

jliu79

Hi Nick, another question: after switching to automatic connections using NCM, do I need to manually disconnect the iSCSI connections in the Microsoft iSCSI Initiator? Thanks.

Jason,

Quick Answer: No, the Nimble Connection Service will detect when the array changes from 'Manual Connection Method' to 'Automatic Connection Method' and carefully convert the active connections from the array's Data IPs to the array's Discovery IPs for you. It may take several minutes.

Longer Answer: When the array is switched from 'Manual Connection Method' to 'Automatic Connection Method', the Nimble Connection Service will recognise the change and begin to 'manage' the connections to the array's target volumes. 'Manage target connections' really means that NCS will count the existing connections, inspect each connection's source and destination endpoints, and determine the number of arrays involved in the target. From this information it calculates the optimal number of connections (m * a, where m is the number of host initiator ports that can connect to the array's discovery IP addresses, i.e. are in the same subnets, and a is the number of arrays that participate in the target). The default minimum number of connections is 2 and the default maximum is 8. If the calculated optimal number falls within the default minimum and maximum, NCS will create the optimal number of connections; otherwise, the defaults apply.
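
As a sanity check on that rule, here's a small Python sketch of the calculation as described above (the function name, and the way the minimum/maximum are applied, are my own reading rather than the actual NCS code):

```python
def optimal_sessions(initiator_ports, arrays, min_conns=2, max_conns=8):
    """Optimal connections = m * a, clamped to the default min/max
    (a sketch of the rule described above, not the actual NCS implementation)."""
    return max(min_conns, min(initiator_ports * arrays, max_conns))

print(optimal_sessions(2, 1))  # 2 NICs, 1 array  -> 2 sessions
print(optimal_sessions(2, 2))  # 2 NICs, 2 arrays -> 4 sessions
print(optimal_sessions(4, 4))  # 16 would exceed the default maximum -> capped at 8
```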

When NCS actually adds and removes iSCSI sessions to meet the optimal number of connections, it will balance the connections across the host initiator ports and the target's target portal IP addresses.

NCS will always add a connection and wait until it is fully operational before removing a connection either to balance the connections across the host initiator ports or to replace a connection to the array's Data IP with a connection to one of the array's Discovery IPs.

You can easily view a target's connections in the NCM GUI. Select a connected target and click the properties button. The properties dialog shows the source and destination IP of each active connection.

Beware: If there are more than two connections to a given target on an array in 'Manual Connection Method', the number of host initiator ports is one or two, and the number of arrays involved in the target volume is 1, NCS will manage the number of connections down to two (2), since the optimal number of connections is (2 * 1). If, for performance reasons, you want a larger number of connections, you'll need to change the default minimum number of connections in the registry.

jliu79

Hi James, thanks for the explanation. It explains why I only see 2 connections from the host to the volume - I have 2 NICs on the host and just 1 array. I originally thought there should be 8 connections, as there are 2 NIC cards on the host and 4 data ports on the array.
