HPE Storage Tech Insiders

Merged Pool Scale Out with Nimble Connection Manager


One of the more interesting features of Nimble Storage is the scalability it brings to the table (at no additional cost!). Most people are familiar with features such as Scale Up (non-disruptive scaling of system performance) or Scale Deep (non-disruptive scaling of system capacity); however, you can also non-disruptively do both at the same time, which we call Scale Out in the Nimble terminology.

Nimble allows you to take two arrays and merge their storage pools, giving you the aggregate capacity and performance of both. That is to say, if you had a Nimble AF7000 (210,000 IOPS @ 4k, 70r/30w) with 92TB of capacity, you could merge it with a second AF7000 with 92TB of capacity and drive a single volume at up to 420,000 IOPS, with its capacity spanning both arrays.

Topology-wise, it looks something like this:

You have a host running the Nimble Connection Manager (NCM) software (we support multiple platforms). The host is connected via FC or iSCSI to the storage arrays. The storage arrays are grouped and present the volume as one target. That volume is divided into bins spread across the arrays. In the example above these bins are evenly distributed across both arrays, though that is not always the case. At some point one array might contain more bins for a specific volume than the other. Why? As NCM sends IO down to the arrays, it decides which array gets the next IO based on current capacity and how busy each array is.

As one array gets busy with, say, an intensive read request, we might land IO on the other array until the latency on each array is equal, and then start distributing IO evenly again. We are able to control this at the front end and redirect writes before they land, pushing them to the correct place; the IO is not being transferred at the back end. The goal is a system of bins that segments a volume across arrays, plus a map of those bins, used to request and store data for the various volumes across the spanned storage pool.
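The routing behavior described above can be sketched roughly as follows. This is a hypothetical illustration only, not Nimble's actual NCM implementation: the bin count, bin size, and migration threshold are all invented for the example.

```python
# Illustrative sketch of bin-mapped, load-aware IO routing.
# All names and numbers here are invented for the example.

NUM_BINS = 1024  # assumed bin count


def bin_for_offset(volume_offset, bin_size=1 << 20):
    """Map a volume offset to a bin index (fixed-size 1 MiB bins assumed)."""
    return (volume_offset // bin_size) % NUM_BINS


class BinMap:
    def __init__(self, arrays):
        self.arrays = arrays
        # Start with bins distributed round-robin across the arrays.
        self.owner = {b: arrays[b % len(arrays)] for b in range(NUM_BINS)}

    def route(self, volume_offset, latency_ms):
        """Pick the array for this IO based on the bin map.

        If the bin's owner is much busier than another array, rebalance
        the bin (a crude stand-in for the latency-equalizing behavior)."""
        b = bin_for_offset(volume_offset)
        owner = self.owner[b]
        others = [a for a in self.arrays if a != owner]
        least_busy = min(others, key=lambda a: latency_ms[a])
        if latency_ms[owner] > 2 * latency_ms[least_busy]:
            self.owner[b] = least_busy  # migrate the bin's ownership
            owner = least_busy
        return owner
```

With two arrays and equal latency, IO simply alternates by bin; when one array's latency spikes, new placements shift to the quieter array, which mirrors the front-end redirection described above.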

That sounds complex, and it is. But as with everything Nimble, you do not need to worry about it: we take care of it for you. Great! But can you trust us to do that? Can we prove it? Sure! Let's look at an MS SQL cluster that spans multiple AF9000s.

Four Nimble AF9000s (two in Group-A, two in Group-B) took in a total of 229,314,662,573 IO operations over a 14-day period. The average block size over that time was 21.16k (the production workload averaged 34.42k read / 8.72k write, and the mirror workload averaged 8.08k for both read and write). That IO translated to roughly 4.4PB of data, or 323TB per day. That is a whole lot of IO.
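As a quick sanity check, those figures work out if the block size is read in KiB and the totals in binary units. This back-of-the-envelope verification is ours, not part of the original numbers:

```python
# Verify: ~229 billion IO operations at an average 21.16 KiB block size
# over 14 days should come to roughly 4.4 PiB total, ~323 TiB per day.
total_ios = 229_314_662_573
avg_block_kib = 21.16

total_bytes = total_ios * avg_block_kib * 1024
total_pib = total_bytes / 2**50          # 1 PiB = 2**50 bytes
per_day_tib = total_bytes / 2**40 / 14   # 1 TiB = 2**40 bytes

print(f"{total_pib:.1f} PiB total, {per_day_tib:.0f} TiB per day")
```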

Of those, 47,776,722,162 operations were reads and 181,537,940,411 were writes. 83,686,893,578 belonged to Group-A, while 145,627,768,995 belonged to Group-B. Within the groups, arrays A-1 and A-2 deviated by just 0.075% in IO operations (A-2 being greater), and arrays B-1 and B-2 by just 0.024% (B-2 being greater).
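For illustration, that deviation figure pins down the per-array split within a group. The per-array totals below are derived by simple algebra from the reported group total and deviation, not reported figures themselves:

```python
# Group-A handled 83,686,893,578 IO operations total, with A-2
# handling 0.075% more than A-1. Solve for the per-array split:
#   a1 * (1 + d) = a2   and   a1 + a2 = group_total
group_total = 83_686_893_578
d = 0.00075  # 0.075% deviation

a1 = group_total / (2 + d)
a2 = group_total - a1

print(f"A-1 ~ {a1:,.0f}  A-2 ~ {a2:,.0f}  "
      f"delta = {(a2 - a1) / a1 * 100:.3f}%")
```

The two arrays differ by only about 31 million operations out of nearly 84 billion, which is what "less than one-tenth of a percent" looks like at this scale.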

To put it another way, across all of that data transfer in and out of the two groups, the grouped arrays deviated by less than one-tenth of a percent in IO transactions. This shows that the bin mapping the Nimble Connection Manager performs for striped volumes (spreading IO over arrays) is exceedingly efficient.

This is how Nimble achieves linear performance scaling inside groups with spanned pools. The bin mapping process allows customers to scale without performance impact or added complexity. It is the very definition of set it and forget it, something we all need in our datacenters today.

If you want to learn more about Nimble Storage, and free up your time to do more important things in your datacenter, feel free to hit up any member of the Nimble Storage team!

Alexander Lawrence
Sr. System Engineer
Pacific Northwest

Bryan Beulin

SE Manager

Mountain District


Rick Lindgren

Principal Engineer

Pacific Northwest


You fail to mention a very important item here that we found out the hard way after already purchasing arrays. In order to take advantage of this feature you lose deduplication, one of the prime selling points of storage arrays.

Tomas CZE

Hello, does this configuration behave like 4-controller storage? I mean, what happens if the 2 controllers of one array fail? Can I still access all data from the second array? Thank you

Tomas CZE

Hello, and is it possible to interconnect 2 arrays directly, or does a switch have to be used? Thank you



There is no concept of "network RAID" across a scale-out pool. A single array is 99.9999% resilient as per InfoSight anyway, with no single point of failure, so the chance of the system going down is slim.

It is not possible to interconnect arrays - the group network must always pass through an Ethernet fabric of some sort (preferably 10Gb).
