Excellence in data reduction with HPE Nimble Storage
Learn why HPE Nimble Storage All-Flash Arrays excel at data reduction and space efficiency. Then prove it with the HPE Store More Guarantee.
Are you looking to maximize your return on investment in an all-flash array? Then you need to pay careful attention to the array's ability to reduce the footprint of your data.
For most customers, raw solid-state storage provides far too much performance per terabyte of data. By the time you've purchased enough capacity for your application, you've purchased far more raw performance than you need. That's why an important function of any all-flash array is to lower your cost by squeezing as much of your data into as little raw flash capacity as possible.
HPE Nimble Storage arrays use a set of techniques to minimize the amount of raw flash required to hold your data. In this blog, I'll review these techniques and show how Nimble achieves industry-leading data reduction.
HPE Nimble Storage data reduction technology
HPE Nimble Storage uses numerous algorithms to reduce the footprint of your data. We cluster them into three broad categories:
- Inline deduplication
- Inline compression
- Copy avoidance
Here's a quick look at the major algorithms, and some insight into how they combine to create an array with an effective capacity that far exceeds its raw capacity.
Inline Deduplication
In a world unconstrained by resources, deduplication of incoming data is a simple process: maintain an index of the fingerprint of every block in the array. When a new block arrives, look it up. If you find the fingerprint, you've found a duplicate. Unfortunately, this approach requires a massive index, one that is costly to store, access, and maintain.
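The naive scheme can be sketched in a few lines. This is a toy illustration of the index-every-fingerprint idea, not how any real array is implemented; the class and method names are hypothetical.

```python
import hashlib

class NaiveDedupeIndex:
    """Toy model of naive dedupe: one full fingerprint per stored block.
    The cost is the index itself, which grows with every unique block."""

    def __init__(self):
        self.index = {}       # fingerprint -> physical block id
        self.next_block = 0

    def write(self, data: bytes) -> tuple:
        fp = hashlib.sha256(data).digest()
        if fp in self.index:              # duplicate: reference the existing block
            return self.index[fp], True
        block_id = self.next_block        # new data: allocate a physical block
        self.next_block += 1
        self.index[fp] = block_id
        return block_id, False

idx = NaiveDedupeIndex()
a, dup_a = idx.write(b"hello world" * 512)
b, dup_b = idx.write(b"hello world" * 512)   # identical content is deduplicated
assert a == b and not dup_a and dup_b
```

Even in this sketch, every unique block costs a 32-byte fingerprint plus map overhead in memory, which is exactly the scaling problem the efficient approach below is designed to avoid.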
In contrast to this naive approach, Nimble Storage dedupe algorithms are efficient, inline, always on, and performance-optimized.
- Efficient deduplication requires far less memory in the array to manage a given volume of data in storage. Our fingerprint management system uses both short and long fingerprints, time and space locality and application awareness to achieve this efficiency. As a result, we manage more physical capacity with less memory than our competitors, which in turn means you spend less money on capacity with Nimble arrays.
- Inline because data is deduplicated first, before other data reduction techniques and before it is ever committed to flash. This avoids the load of unnecessary writes to flash and the flash wear of later post-processing to find and remove duplicates.
- Always on because we always remove the duplicates as the data arrives. High-performance write operations such as data copies, virtual machine (VM) moves, or bulk data ingest will not shut down dedupe. This critical ability ensures you don't run out of space when running workloads that generate lots of duplicate blocks, such as parallel patch updates to a large number of VM images.
- Performance-optimized data reduction algorithms run with minimal impact, whether your data has a few duplicates or is 10:1 dedupable.
Nimble's deduplication partitions volumes into application categories. We never try to deduplicate your Microsoft SQL Server data against your Oracle data. It won't work, and we save time and space by not trying to find duplicates where they won't exist. This also allows us to provide application-granular reporting of deduplication results. How well is your Oracle data deduplicating? A couple of clicks will tell you.
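The idea of per-application dedupe domains can be sketched as separate fingerprint indexes keyed by category. This is a simplified illustration with made-up names, not the array's actual implementation; it just shows why identical bytes in different domains are never compared, and how per-application statistics fall out naturally.

```python
import hashlib

# One fingerprint index per application category (hypothetical sketch),
# so a SQL Server block is never matched against an Oracle block.
indexes = {"sql-server": {}, "oracle": {}}
stats = {cat: {"written": 0, "deduped": 0} for cat in indexes}

def write(category: str, data: bytes) -> bool:
    """Returns True if the block was a duplicate within its own domain."""
    fp = hashlib.sha256(data).digest()
    stats[category]["written"] += 1
    if fp in indexes[category]:
        stats[category]["deduped"] += 1
        return True
    indexes[category][fp] = True
    return False

write("oracle", b"block")
assert not write("sql-server", b"block")  # same bytes, different domain: no match
assert write("oracle", b"block")          # duplicate within its own domain
assert stats["oracle"]["deduped"] == 1    # per-application reporting for free
```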
Inline compression
HPE Nimble arrays' always-on compression algorithms offer a field-measured average 2x benefit on many applications, notably including all databases. Our variable block size enables high-performance inline compression without the need to clump blocks together, avoiding the costly read-modify-write penalty on random updates incurred by other platforms.
Nearly all Nimble arrays run with compression enabled on all volumes. All our internal performance testing is run with compression enabled and all of our performance claims are made with compression enabled. High-performance compression has been part of Nimble arrays since the first one we built.
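The write-path decision behind inline compression can be illustrated with a small sketch: compress each block as it arrives, and keep the compressed form only when it actually saves space. This uses zlib purely for illustration; the compression algorithm, block size, and function names here are assumptions, not the array's internals.

```python
import zlib

def compress_block(data: bytes) -> tuple:
    """Inline compression sketch: store the compressed form only when it
    shrinks the block; otherwise store the block as-is."""
    packed = zlib.compress(data, 1)   # a fast level suits the write path
    if len(packed) < len(data):
        return packed, True
    return data, False

# Database-like pages with repeated structure compress well.
page = (b"rowheader" + b"\x00" * 55) * 128   # one 8 KiB page
stored, compressed = compress_block(page)
assert compressed and len(stored) < len(page) // 2   # better than 2:1 here
```

Note that with fixed-size blocks, a compressed block that later grows on update forces a read-modify-write of its neighbors; variable block sizes, as the post describes, sidestep that penalty.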
Zero-pattern elimination
Zero-pattern elimination is a special case of compression and deduplication. If a block is full of zeros, rather than processing that block, we simply free the storage that would be associated with that data. For some workloads, such as databases that maintain initialized data blocks, this simple optimization substantially improves performance and data reduction.
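In code, the check is trivial, which is why it is such a cheap win: an all-zero block is recognized before any allocation, compression, or fingerprinting happens. A minimal sketch, with a hypothetical allocator standing in for the array's space management:

```python
allocated = []   # stand-in for physical block storage

def allocator(data: bytes) -> int:
    allocated.append(data)
    return len(allocated) - 1

def store_block(data: bytes):
    """Zero-pattern elimination sketch: an all-zero block consumes no
    physical space; None marks the logical block as 'unallocated'."""
    if data == bytes(len(data)):   # all zeros: skip allocation entirely
        return None
    return allocator(data)

assert store_block(bytes(4096)) is None          # zero block: nothing stored
assert store_block(b"\x01" + bytes(4095)) == 0   # real data: allocated
assert len(allocated) == 1
```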
Copy avoidance
By far the most efficient data reduction technique is to avoid creating data at all. HPE Nimble arrays support efficient snapshots and zero-copy clones. These techniques create virtual copies of your data for almost any purpose, allowing you to avoid nearly all physical copies of data.
Snapshots
Need a crash-consistent or application-consistent image of your data? Nimble's snapshot implementation is so efficient that we support up to 1,000 snapshots per volume. Snapshots are quick to take, have no performance cost to maintain, and require space only to hold the difference between the active volume and the snapshot. There is no need to limit the number you take or to manage a separate pool of space for snapshot data.
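The "space only for the differences" property can be sketched with a block map: taking a snapshot copies the map (references), not the data, so a snapshot costs space only for blocks the active volume later overwrites. This is a deliberately simplified model with hypothetical names, not Nimble's on-disk format.

```python
class Volume:
    """Snapshot sketch: a snapshot freezes the logical-to-physical block
    map; new writes go to new locations and never disturb old snapshots."""

    def __init__(self):
        self.blocks = {}     # logical address -> data
        self.snapshots = []

    def write(self, addr: int, data: bytes):
        self.blocks[addr] = data

    def snapshot(self):
        # Copy the map, not the data: cheap to take, cheap to keep.
        self.snapshots.append(dict(self.blocks))

vol = Volume()
vol.write(0, b"v1")
vol.snapshot()
vol.write(0, b"v2")                   # overwrite after the snapshot
assert vol.snapshots[0][0] == b"v1"   # snapshot still sees the old data
assert vol.blocks[0] == b"v2"         # active volume sees the new data
```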
Zero-copy clones
Create as many zero-copy clones of any snapshot as you need. As efficient and performant as the snapshots they are built from, zero-copy clones are perfect for dev/test copies, reporting instances, or for working with historical copies of your data. The Nimble Storage toolkits integrate clone management with popular applications, simplifying the creation of full database instances using this technology.
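The zero-copy idea extends the snapshot sketch one step: a clone shares the snapshot's block map read-only and keeps only its own writes in a private delta, so creating it copies nothing. Again a simplified illustration with invented names, not the product's implementation.

```python
class Clone:
    """Zero-copy clone sketch: share the snapshot's block map until
    written; only the clone's own changes consume new space."""

    def __init__(self, snapshot_map: dict):
        self.base = snapshot_map   # shared, treated as read-only
        self.delta = {}            # this clone's private writes

    def read(self, addr: int):
        return self.delta.get(addr, self.base.get(addr))

    def write(self, addr: int, data: bytes):
        self.delta[addr] = data    # the base snapshot stays untouched

prod = {0: b"prod-data"}           # a snapshot's block map
dev = Clone(prod)                  # instant: no data copied
assert dev.read(0) == b"prod-data"
dev.write(0, b"test-data")
assert dev.read(0) == b"test-data" and prod[0] == b"prod-data"
```

Ten dev/test clones of a terabyte database therefore cost roughly the size of their combined deltas, not ten terabytes, which is the economics the post is pointing at.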
Rubber meets road: check out the HPE Store More Guarantee
Data reduction has long been a priority with HPE Nimble Storage. Between our world-class data reduction algorithms and our highly efficient metadata management, we're willing to bet that our arrays are more efficient at storing data than anyone else's.
In this blog, I've given you a high-level view of what makes us excel at space efficiency. Want us to prove we're good? Take a look at our HPE Store More Guarantee and see for yourself how Nimble Storage excels in data reduction.
Meet Around the Storage Block blogger Stephen Daniel, Distinguished Technologist, HPE. Stephen has spent more than 30 years working on the design and implementation of high-performance commercial computing systems, with the last four years at HPE Nimble Storage. He works on storage system performance, data reduction, and integrating HPE Nimble Storage technology with databases and Linux ecosystems.