
Excellence in data reduction with HPE Nimble Storage

Learn why HPE Nimble Storage All-Flash Arrays excel at data reduction and space efficiency. Then prove it with the HPE Store More Guarantee.

Are you looking to maximize your return on investment in an all-flash array? Then you need to pay careful attention to the array's ability to reduce the footprint of your data.

For most customers, raw solid-state storage provides far too much performance per terabyte of data. By the time you've purchased enough capacity for your application, you've purchased far more raw performance than you need. That's why an important function of any all-flash array is to lower your cost by squeezing as much of your data into as little raw flash capacity as possible.

HPE Nimble Storage arrays use a set of techniques to minimize the amount of raw flash required to hold your data. In this blog, I'll review these techniques and show how Nimble achieves industry-leading data reduction.

HPE Nimble Storage data reduction technology

HPE Nimble Storage uses numerous algorithms to reduce the footprint of your data. We cluster them into three broad categories:

  • Inline deduplication
  • Inline compression
  • Copy avoidance

Here's a quick look at the major algorithms, and some insight into how they combine to create an array with an effective capacity that far exceeds its raw capacity.

Inline deduplication

In a world unconstrained by resources, deduplication of incoming data is a simple process: Maintain an index of the fingerprint of every block in the array. When a new block arrives, look it up. If you find the fingerprint, you've found a duplicate. Unfortunately, this approach requires a massive index, one that is costly to store, access, and maintain.
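
To make that cost concrete, here is a minimal Python sketch of the naive approach; every name in it is hypothetical, and the 4 KiB block size is just an assumption for illustration:

    import hashlib

    BLOCK_SIZE = 4096  # assumed block size, for illustration only

    class NaiveDedupStore:
        """Naive inline dedupe: one full fingerprint kept for every block."""

        def __init__(self):
            self.index = {}   # fingerprint -> block id; grows with every unique block
            self.blocks = []  # simulated physical storage

        def write(self, data: bytes) -> int:
            fp = hashlib.sha256(data).digest()  # 32 bytes of index per block
            if fp in self.index:                # duplicate found: store nothing new
                return self.index[fp]
            self.blocks.append(data)            # unique: consumes physical space
            self.index[fp] = len(self.blocks) - 1
            return self.index[fp]

At 32 bytes of fingerprint per 4 KiB block, the index alone costs roughly 0.8% of the stored capacity, which is hundreds of gigabytes of index for a 100 TB array, and it must live in fast memory to keep lookups cheap.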

In contrast to this naive approach, Nimble Storage's dedupe algorithms are efficient, inline, always on, and performance optimized.

  • Efficient deduplication requires far less memory in the array to manage a given volume of data in storage. Our fingerprint management system uses both short and long fingerprints, time and space locality, and application awareness to achieve this efficiency (see the sketch after this list). As a result, we manage more physical capacity with less memory than our competitors, which in turn means you spend less money on capacity with Nimble arrays.
  • Inline because data is deduplicated first, before any other data reduction technique and before the data is ever committed to flash. This removes the load of unnecessary writes to flash and the flash wear of later post-processing to find and remove duplicates.
  • Always on because we always remove the duplicates as the data arrives. High-performance write operations such as data copies, virtual machine (VM) moves, or bulk data ingest will not shut down dedupe. This critical ability ensures you don't run out of space when running workloads that generate lots of duplicate blocks, such as parallel patch updates to a large number of VM images.
  • Performance-optimized data reduction algorithms run with minimal impact, whether your data has a few duplicates or is 10:1 dedupable.
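
Nimble's actual fingerprint scheme isn't public, but the short-plus-long fingerprint idea from the first bullet can be sketched in Python. In this illustration (all names hypothetical), only a 4-byte short fingerprint lives in memory per block, and full verification happens only on a short-fingerprint match:

    import hashlib

    def short_fp(data: bytes) -> int:
        # A cheap 4-byte prefix of the full hash; small enough to keep in memory.
        return int.from_bytes(hashlib.sha256(data).digest()[:4], "big")

    class TwoLevelDedup:
        def __init__(self):
            self.short_index = {}  # short fingerprint -> candidate block id
            self.blocks = []       # simulated on-flash blocks

        def write(self, data: bytes) -> int:
            sfp = short_fp(data)
            cand = self.short_index.get(sfp)
            # Short fingerprints can collide, so a candidate must be verified
            # (here by comparing data; a real array could compare long
            # fingerprints stored alongside the block instead).
            if cand is not None and self.blocks[cand] == data:
                return cand                    # true duplicate: nothing written
            self.blocks.append(data)
            self.short_index[sfp] = len(self.blocks) - 1
            return len(self.blocks) - 1

Keeping four bytes per block in memory instead of 32 is the kind of saving that lets one array manage far more physical capacity with less memory.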

Nimble's deduplication partitions volumes into application categories. We never try to deduplicate your Microsoft SQL Server data against your Oracle data. It won't work, and we save time and space by not trying to find duplicates where they won't exist. This also allows us to provide application-granular reporting of deduplication results. How well is your Oracle data deduplicating? A couple of clicks will tell you.
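
One way to picture per-application dedupe domains is as separate fingerprint indexes, one per application category. This hypothetical sketch also shows how application-granular reporting falls naturally out of such a design:

    from collections import defaultdict

    class DomainDedup:
        """Illustrative only: a block is matched solely against blocks
        from the same application category."""

        def __init__(self):
            self.domains = defaultdict(dict)          # app -> {fingerprint: block id}
            self.stats = defaultdict(lambda: [0, 0])  # app -> [writes, duplicates]

        def write(self, app: str, fp: bytes) -> None:
            self.stats[app][0] += 1
            if fp in self.domains[app]:
                self.stats[app][1] += 1               # duplicate within this domain
            else:
                self.domains[app][fp] = len(self.domains[app])

        def dedupe_ratio(self, app: str) -> float:
            writes, dupes = self.stats[app]
            return writes / (writes - dupes) if writes > dupes else 1.0

    store = DomainDedup()
    store.write("sql-server", b"fp-1")   # SQL Server data is never matched...
    store.write("oracle", b"fp-1")       # ...against identical Oracle data
    print(store.dedupe_ratio("oracle"))  # per-application reporting, e.g. 1.0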

Inline compression

HPE Nimble arrays' always-on compression algorithms offer a field-measured average 2x benefit on many applications, notably including all databases. Our variable block size enables high-performance inline compression without the need to clump blocks together, avoiding the costly read-modify-write penalty that other platforms incur on random updates.
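
The array's actual codec isn't something a sketch can reproduce, so the example below uses zlib purely as a stand-in to show the per-block principle: each logical block is compressed independently before it is written, so a random overwrite touches only its own block:

    import zlib

    def compress_block(data: bytes) -> bytes:
        # Each logical block is compressed on its own, inline, before it is
        # written. Because blocks are never clumped together, a random
        # overwrite recompresses and rewrites only this block -- no
        # read-modify-write of its neighbors.
        out = zlib.compress(data, level=1)
        return out if len(out) < len(data) else data  # store raw if incompressible

    block = b"INSERT INTO t VALUES (42);" * 300  # repetitive database-like data
    stored = compress_block(block)
    print(len(block), "->", len(stored))         # the block shrinks dramatically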

Nearly all Nimble arrays run with compression enabled on all volumes. All of our internal performance testing is run, and all of our performance claims are made, with compression enabled. High-performance compression has been part of Nimble arrays since the first one we built.

Zero-pattern elimination

Zero-pattern elimination is a special case of compression and deduplication. If a block is full of zeros, rather than processing that block, we simply free the storage that would be associated with that data. For some workloads, such as databases that maintain initialized data blocks, this simple optimization substantially improves performance and data reduction.
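
Here is a minimal sketch of the idea, with hypothetical names; the zero check is cheap and happens before any compression or dedupe work:

    BLOCK_SIZE = 4096
    ZERO_BLOCK = bytes(BLOCK_SIZE)

    class ZeroEliminatingStore:
        def __init__(self):
            self.physical = []   # blocks that actually consume flash
            self.zero_count = 0  # zero blocks recorded as metadata only

        def write(self, data: bytes) -> None:
            if data == ZERO_BLOCK:    # cheap check, done first
                self.zero_count += 1  # the space is simply freed
            else:
                self.physical.append(data)

    store = ZeroEliminatingStore()
    store.write(bytes(BLOCK_SIZE))   # a freshly initialized database block
    print(len(store.physical), store.zero_count)  # 0 physical blocks, 1 zero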

Copy avoidance

By far the most efficient data reduction technique is to avoid creating data at all. HPE Nimble arrays support efficient snapshots and zero-copy clones. These techniques create virtual copies of your data for almost any purpose, allowing you to avoid nearly all physical copies of data.

Snapshots

Need a crash-consistent or application-consistent image of your data? Nimble's snapshot implementation is so efficient that we support up to 1000 snapshots per volume. Snapshots are quick to take, have no performance cost to maintain, and require space only to hold the difference between the active volume and the snapshot. There is no need to limit the number you take or to manage a separate pool of space for snapshot data.

Zero-copy clones

Create as many zero-copy clones of any snapshot as you need. As efficient and performant as the snapshots they are built from, zero-copy clones are perfect for dev/test copies, reporting instances, or working with historical copies of your data. The Nimble Storage toolkits integrate clone management with popular applications, simplifying the creation of full database instances using this technology.
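
Under the hood, snapshots and clones of this kind are block maps that share physical blocks until something diverges. This toy redirect-on-write model (a simplification, not Nimble's actual implementation) shows why a snapshot is nearly free to take and why a clone consumes new space only as it changes:

    class Volume:
        def __init__(self, blocks=None):
            self.map = dict(blocks or {})  # logical address -> shared block

        def snapshot(self) -> "Volume":
            return Volume(self.map)        # copies pointers, never data

        def clone(self) -> "Volume":
            return Volume(self.map)        # a writable view of the same blocks

        def write(self, addr: int, data: bytes) -> None:
            self.map[addr] = data          # new block; older images keep the old one

    vol = Volume({0: b"base"})
    snap = vol.snapshot()           # costs only the block map, not the data
    dev = snap.clone()              # instant dev/test instance
    dev.write(0, b"test data")      # space is consumed only for the change
    assert snap.map[0] == b"base"   # the snapshot still sees the original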

Rubber meets road: check out the HPE Store More Guarantee

Data reduction has long been a priority for HPE Nimble Storage. Between our world-class data reduction algorithms and our highly efficient metadata management, we're willing to bet that our arrays are more efficient at storing data than anyone else's.

In this blog, I've given you a high-level view of what makes us excel at space efficiency. Want us to prove we're good? Take a look at our HPE Store More Guarantee and see for yourself how Nimble Storage excels in data reduction.


Meet Around the Storage Block blogger Stephen Daniel, Distinguished Technologist, HPE. Stephen has spent more than 30 years working on the design and implementation of high-performance commercial computing systems, with the last four years at HPE Nimble Storage. He works on storage system performance, data reduction, and integrating HPE Nimble Storage technology with databases and Linux ecosystems.