Around the Storage Block

HPE Nimble Storage dHCI and Peer Persistence

I have been pretty quiet over the last two months, mainly because I was working on validating our HPE CSI driver against Google Cloud’s Anthos (yes, a blog post is coming). In the meantime, one of the questions I was asked most often concerned Peer Persistence: do you support Peer Persistence with HPE Nimble Storage dHCI? If so, what are the requirements? First, yes, we do support Peer Persistence with dHCI, and in this blog post I will cover the requirements and provide a high-level overview of the deployment.

Requirements

HPE Nimble Storage dHCI support for Peer Persistence has the following requirements:

  • Two HPE Nimble Storage dHCI arrays are required.

  • The arrays must be of the same model (for example, AF40 and AF40).

  • The arrays must be running the same version of NimbleOS (5.1.2 at minimum).

  • No more than 20 servers can be used for dHCI. (This is the combined total for both sites, not 20 per site.)

  • A VMware vSphere cluster can span only two sites.

  • Only one VMware vCenter server is required.

Deployment

Planning your network and dHCI deployment is a critical piece of the puzzle. I can’t stress enough that before starting your deployment, you should read our deployment guide and make sure that you have your worksheet ready. The worksheet is available at the end of the deployment guide. A lot of data will be needed during the deployment, so having it all in one place will be useful, believe me!
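To get a feel for the kind of data the worksheet collects, here is a minimal sketch in Python that tracks a few deployment values and tells you which ones are still blank. The field names are my own illustrative picks, not the official worksheet layout; the real worksheet at the end of the deployment guide has many more entries.

```python
from dataclasses import dataclass, fields

# Illustrative only: the real dHCI worksheet has many more fields.
# These names are assumptions, not the official worksheet layout.
@dataclass
class DhciWorksheet:
    mgmt_subnet: str = ""
    iscsi_subnet_1: str = ""
    iscsi_subnet_2: str = ""
    vcenter_address: str = ""
    array_mgmt_ip: str = ""

    def missing_fields(self) -> list[str]:
        """Return the names of fields that were left blank."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

# Example: a half-filled worksheet (hypothetical lab values)
ws = DhciWorksheet(mgmt_subnet="10.1.0.0/24", vcenter_address="vcsa.lab.local")
print(ws.missing_fields())  # the blanks you still need to collect
```

Running a check like this before you start saves the mid-deployment scramble for a missing subnet or credential.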

Before moving forward with the deployment, you must complete the network configuration between your sites. One specific requirement for Peer Persistence with HPE Nimble Storage dHCI is the VLAN configuration: the management subnet and both iSCSI subnets must be available between the two sites.

Your networking configuration should be similar to the one shown below.

[pp1.png: network configuration between the two sites]

To summarize, you need to make sure that your management VLAN and your two iSCSI VLANs are available on both sites. As you can see, I have used HPE M-Series switches to configure the solution. To avoid any loop in my environment, I created an MLAG port-channel between my sites so that all my VLANs are available on both sites. An MLAG port-channel can also be used to interconnect your HPE M-Series switches to the rest of your network, whether M-Series to M-Series or to another switch vendor, so it’s a useful concept and CLI command!
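A quick way to sanity-check the stretch requirement is to compare the VLANs trunked at each site against the set that Peer Persistence needs. A small Python sketch, with hypothetical VLAN IDs standing in for the ones from your worksheet:

```python
# Hypothetical VLAN IDs: management, iSCSI-A, iSCSI-B.
# Substitute the IDs from your own worksheet.
REQUIRED_VLANS = {100, 201, 202}

def missing_vlans(site_vlans: set[int]) -> set[int]:
    """Return the required VLANs that are not present at a site."""
    return REQUIRED_VLANS - site_vlans

site1 = {100, 201, 202, 300}  # all required VLANs trunked
site2 = {100, 201}            # iSCSI-B (202) not stretched yet

print(missing_vlans(site1))  # set()
print(missing_vlans(site2))  # {202}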

Now that you have your network in place, you can start the deployment! But did you fill out your worksheet? If yes, you are awesome. If not, I still believe that you’re awesome, but please take a couple of minutes to fill it out. :)

Let’s go over the deployment steps for Peer Persistence at a high level.

  1. Perform the steps from the section Deploying the HPE Nimble Storage dHCI solution in our deployment guide on the first dHCI system. Make sure that you select all the servers from both sites during the deployment. This step is really important, as it will ensure a successful deployment.

  2. On the second dHCI system, perform only the task Discover the array from the section Deploying the HPE Nimble Storage dHCI solution.

  3. When the Setup Complete message appears, you can close the window. Yes, you read that correctly: after the Nimble array configuration, close the browser and move back to the first dHCI solution.

  4. Log in to the HPE Nimble Storage web UI on the first dHCI solution.

  5. Click Hardware, Action, Add Array to Group, and look for your second array in the list.

[pp2.png: Add Array to Group dialog]

  6. Click Add and provide the login and password of your second array.

  7. Click Finish.

At this point, the relationship between the two arrays is established. You can use the HPE Nimble Storage dHCI vCenter plugin to create a volume collection and datastore.

Isn’t that simple? In a few steps, you have two HPE Nimble Storage dHCI systems deployed with Peer Persistence on top. Now you can decide whether you would like to use Automatic Switch Over (a.k.a. ASO) or only synchronous replication. If you plan to use ASO, you need a third site where you can deploy a witness VM. If you do, please refer to the deployment guide available on InfoSight.

In my lab, I have deployed a witness VM and enabled ASO.

[pp3.png: witness VM deployed and ASO enabled]
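To build intuition for why the witness matters, here is a deliberately simplified model of the quorum idea behind ASO. This is my own sketch, not HPE’s actual switchover state machine; consult the Peer Persistence deployment guide for the real rules.

```python
# Simplified quorum model (my own sketch, not HPE's ASO implementation):
# the surviving array should only take over when its partner is gone
# but the witness is still reachable, which rules out a split-brain
# where both arrays serve the same volumes independently.
def can_switch_over(survivor_sees_witness: bool,
                    survivor_sees_partner: bool) -> bool:
    """Decide whether the surviving array may promote itself."""
    return survivor_sees_witness and not survivor_sees_partner

print(can_switch_over(True, False))   # partner lost, witness ok -> switch over
print(can_switch_over(False, False))  # witness also unreachable -> stay put
```

This is why the witness belongs on a third site: if it shared a site with either array, losing that site would take out both a copy of the data and the tie-breaker at once.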

Management

Let’s create a VMFS datastore that uses Peer Persistence.

  1. Open a web browser and connect to vCenter (HTML5).

  2. Click Menu and select HPE Nimble Storage.

  3. Click Nimble Groups and select your group.

  4. Click Datastores and select VMFS.

  5. Click the plus sign (+) to add a new datastore.

  6. In the Datastore dialog box, provide the following information and then click Next:

    1. Specify a name for the datastore.

    2. Optionally, you can provide a short description of the datastore.

    3. Select the datacenter where you want the VMFS datastore to be created.

    4. Under Protocol, select iSCSI.

    5. Under Host, select your HPE Nimble Storage dHCI cluster.

[pp4.png: datastore creation dialog]

  7. Specify a size for the VMFS datastore.

  8. Click Location to specify the array on which you want to create the volume. As you can see, I have two different pools, which is exactly the behavior expected here. You can select the pool on which you would like to create the volume.

[pp5.png: pool selection]

  9. Select Create a new volume collection to use with this datastore and then click Next. The dialog box expands so that you can create a volume collection and a schedule for it. Provide a name for the volume collection, then complete the information in the Create Volume Collection section of the dialog box. (You might need to use the scroll bar to see all the options.) The same volume collection can be reused with other datastores and volumes.

    1. Replication type: select Synchronous.

    2. Replication partner: this should be auto-selected.

[pp6.png: volume collection settings]

  10. Optionally set limits for IOPS and throughput, then click Next. You can select either No Limit or Set Limit, which allows you to enter a value for that option.

[pp7.png: IOPS and throughput limits]

  11. View the settings summary and click Finish.

  12. The new datastore is now created and automatically replicated synchronously to your second array.

[pp8.png: completed datastore]

You can easily create a datastore on array 1 or array 2 at your convenience. One best practice that I can share: make sure that the VMs running on site 1 use a datastore served by the array located at that site, and vice versa. This ensures lower latency, since each array writes its data locally at its own site.
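That locality check is easy to automate. A minimal sketch in Python that flags VMs whose datastore is owned by the array at the other site; the inventory dictionaries here are hypothetical, and in practice you would pull this data from vCenter and the Nimble group:

```python
# Hypothetical inventory: which site each VM runs on, which site owns
# each datastore's backing pool, and which datastore each VM uses.
vm_site = {"web01": "site1", "db01": "site2", "app01": "site1"}
datastore_site = {"ds-site1": "site1", "ds-site2": "site2"}
vm_datastore = {"web01": "ds-site1", "db01": "ds-site1", "app01": "ds-site1"}

def misplaced_vms() -> list[str]:
    """Return VMs whose datastore lives at the other site (extra latency)."""
    return [vm for vm, ds in vm_datastore.items()
            if datastore_site[ds] != vm_site[vm]]

print(misplaced_vms())  # ['db01'] -> move db01 (or its disks) to ds-site2
```

Here db01 runs on site 2 but writes to the site-1 array, so every write crosses the inter-site link before being replicated back; either the VM or its datastore should move.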

At this point, you have a fully configured HPE Nimble Storage dHCI solution using Peer Persistence!

Stay tuned, more to come!

 

About the Author

Fred_Gagne

Fred specializes in HPE Converged Solutions and the HPE Nimble Storage dHCI solution.