HPE Storage Tech Insiders

Nimble Fibre Channel: Provisioning Nimble FC Volume to Windows


My last blog, Nimble Fibre Channel: Introduction to Nimble FC, Setup and Install a FC Array, explored how to set up a Nimble Fibre Channel array. In this blog and the next few posts we will see how to provision a volume to Windows, ESX and Linux. As one would expect it's very similar to iSCSI volume provisioning, but the Engineering team have incorporated some nice additions to the Fibre Channel implementation, a few tweaks which I think are very useful in a Fibre Channel context.

Before starting any provisioning exercise to a host it's always useful to identify the server's Host Bus Adaptors (HBAs), in order to grab useful information like the World Wide Port Names (WWPNs) which uniquely identify the host and the port.  This is essential information for zoning and LUN masking.  There are a couple of ways to do this: firstly, when booting the server you can press CTRL+Q (QLogic) or CTRL+E (Emulex) at the BIOS screen and identify the WWPNs. An easier way is to load the QLogic host software (QConvergeConsole / SANsurfer) or the Emulex software (OneCommand); I'd highly recommend this as the tools are always very useful to have at hand when troubleshooting any issues.

Below is a screenshot of my Windows host's QConvergeConsole, which shows the QLogic adaptors; the software allows me to grab my WWPNs and see which driver and firmware versions are installed.  It's well worth jotting down the WWPNs, as we will need to reference them later when defining the host on the Nimble storage array.


Remember you'll need to do this for all HBA ports that are installed in the host.
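WWPNs copied from different tools often arrive in different shapes (with or without colons, upper or lower case), so it's worth normalizing them before you key them anywhere. Here's a minimal sketch of that idea; the helper name and sample WWPN are hypothetical, not anything from the Nimble or QLogic tooling:

```python
# Hypothetical helper: normalize a WWPN jotted down from an HBA tool,
# whether it was copied as "21000024FF3D6A8E" or "21:00:00:24:ff:3d:6a:8e".
def normalize_wwpn(raw):
    """Strip separators, validate the 16 hex digits, return colon-separated form."""
    hex_digits = raw.replace(":", "").replace("-", "").strip().lower()
    if len(hex_digits) != 16 or any(c not in "0123456789abcdef" for c in hex_digits):
        raise ValueError("not a valid WWPN: %r" % raw)
    # Re-insert a colon after every byte (two hex digits).
    return ":".join(hex_digits[i:i + 2] for i in range(0, 16, 2))

print(normalize_wwpn("21000024FF3D6A8E"))  # 21:00:00:24:ff:3d:6a:8e
```

Keeping all your recorded WWPNs in one canonical form makes the later zoning and initiator-group steps much less error-prone.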

Next, on the Nimble GUI, we will go ahead and provision our first volume for this Windows host. First click Manage > Volumes and select New Volume.


The Create Volume dialogue will now ask you for the Volume Name, a description of its use and a Performance Policy.

For those who are unaware, the Performance Policy sets the caching policy to be used by CASL, whether the volume should use compression, and the block size for the application. This is the sum total of tweaking and fine-tuning for CASL; it is largely a 'set and forget' policy that enforces best practice for the volume's use case. If your specific application is not listed then the GUI will allow you to create your own policy; if in doubt, just use the Default policy.

Finally the wizard will ask which hosts should be able to access this volume.


Normally you would select your desired host from the list, but mine hasn't been set up yet, so click New Initiator Group; this will launch a wizard to define the host.

Name your initiator group after your host and then add each WWPN that you identified as present in the host (from the earlier step above).  This is where there is one really nice feature that the team has built.  Rather than keying in the full 16-digit hexadecimal string, just type the first few characters (below I have typed "21:").  Nimble OS will then query the fabric's name server and display any WWPNs that match. In the instance below it has matched the two WWPNs of the host.   This not only saves time but also ensures there are no mistakes when keying in the WWPNs.  A really nice feature!
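Conceptually, that lookup is just a prefix filter over the WWPNs the fabric's name server already knows about. A tiny illustrative sketch (the WWPN values below are made up, and this is not how Nimble OS is actually implemented):

```python
# Hypothetical list of WWPNs logged into the fabric, as a name server
# query might return them.
fabric_wwpns = [
    "21:00:00:24:ff:3d:6a:8e",
    "21:00:00:24:ff:3d:6a:8f",
    "50:0a:09:81:9d:93:40:2f",
]

def match_wwpns(prefix, candidates):
    """Return every known WWPN that starts with the typed prefix."""
    p = prefix.lower()
    return [wwpn for wwpn in candidates if wwpn.startswith(p)]

print(match_wwpns("21:", fabric_wwpns))
# ['21:00:00:24:ff:3d:6a:8e', '21:00:00:24:ff:3d:6a:8f']
```

Because the candidates come from what is actually logged into the fabric, a typo simply produces no match rather than a silently wrong ACL entry.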


Both ports should be added to the Host Initiator Group...


Once saved, the new Initiator Group will be added and associated with the LUN.   In order to present Fibre Channel devices, Nimble OS also had to successfully learn to count past zero (in iSCSI all devices are presented as LUN ID 0).  With Fibre Channel we have to present a LUN ID, and Nimble OS automatically chooses a free one.


If you manually try to override this with a LUN ID that is already in use, the wizard will throw an error and not only tell you that it's taken but also alert you to which volume is using that LUN ID (another really nice touch):
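The behaviour described above — pick the lowest free LUN ID automatically, and on a manual collision report which volume owns the ID — can be sketched like this (the volume names and mapping are invented for illustration; this is not Nimble OS code):

```python
# Hypothetical existing mapping of LUN IDs to volume names on the array.
in_use = {0: "SQL-Data", 1: "SQL-Logs"}

def assign_lun_id(in_use, requested=None):
    """Return a free LUN ID, or validate a manually requested one."""
    if requested is not None:
        if requested in in_use:
            # Mirror the wizard's error: say *which* volume has the ID.
            raise ValueError(
                "LUN ID %d is already used by volume %r" % (requested, in_use[requested])
            )
        return requested
    # Automatic choice: the lowest non-negative ID not yet taken.
    lun_id = 0
    while lun_id in in_use:
        lun_id += 1
    return lun_id

print(assign_lun_id(in_use))  # 2
```

Requesting LUN ID 0 here would raise an error naming "SQL-Data", much like the wizard's alert.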


Clicking Next allows you to define the size of the volume and the space provisioning settings (500GB and thin provisioned in my example below).


Finally we are asked to define how we wish to protect the volume with regard to space-efficient snapshots and replication (I chose none here):


Once finished, the volume is provisioned instantly.  Clicking on the volume allows us to review the settings and edit or change them.  We can also see any IO initiated on the volume (none yet, as it hasn't been mounted):


We can also see if there are any initiators accessing the volume (via the Connected Initiators information):


So the process is almost identical to iSCSI; the only difference is the type of ACL, which is WWPN-based rather than IQN-based.

Next we can log in to the Windows host and, from Disk Management, rescan the disks:


As expected, our new disk will pop up.

Note: If at this point you see several disks, it's probably a good indicator that you haven't got the MPIO feature and the Nimble DSM installed.

The volume will be offline by default, so right-click it to bring it online, initialise it, format it and assign a drive letter. At that point there is nothing more to do other than start using it and migrate data to the newly provisioned volume.


Note: there is no need to install Nimble Connection Manager for Windows.  The path management and connections are all managed by ALUA.

You can view the path and MPIO status by right-clicking the disk (on the left-hand side of the window, where it says Online) and selecting Properties.

Then select MPIO to see the MPIO policy and the path status. As I have two ports in this host and eight Fibre Channel targets, I see 8 paths to the storage.  ALUA reports 4 paths as Active/Optimised (these are from the active controller) and 4 paths in a Standby state (these are from the standby controller).  This is a great view for checking whether any paths are down and identifying which ones.  A path failure is also logged in the Windows Event Viewer.
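The path arithmetic above is worth making explicit, since the total depends on how your zoning is laid out. A minimal sketch, assuming each of the two host ports is zoned to two target ports on each controller (the port names are hypothetical):

```python
from itertools import product

# Assumed zoning layout: two host HBA ports, and per controller the two
# target ports each host port can reach. This reproduces the 8-path case
# from the post: 4 Active/Optimised + 4 Standby.
host_ports = ["hba0", "hba1"]
targets = {
    "Active/Optimised": ["ctrlA-fc1", "ctrlA-fc2"],  # active controller
    "Standby": ["ctrlB-fc1", "ctrlB-fc2"],           # standby controller
}

# A path is one (host port, target port) pairing; its ALUA state comes
# from which controller the target port belongs to.
paths = [
    (hba, tgt, state)
    for state, tgt_ports in targets.items()
    for hba, tgt in product(host_ports, tgt_ports)
]

print(len(paths))                                          # 8
print(sum(1 for _, _, s in paths if s == "Standby"))       # 4
```

If your zoning exposes more target ports per host port, the path count scales accordingly, which is exactly why the MPIO view is handy for spotting a missing path.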


The array also shows a wealth of information.  For instance, clicking Manage > Connections, we can see each initiator that is logged in to a device, along with which target ports it is logged in to and the status of those ports.  Again, really nice from a troubleshooting perspective!


Below is a video that shows the exact process above:

Video Link : 1141

In the next blog we'll look at the same process using ESX and the vCenter Plugin.

Please feel free to ask questions below...
