Question about compression
09-18-2014 10:40 PM
Hi All,
Please excuse my ignorance if I don't understand how compression works with Nimble. We have a CS240G, which is 24TB raw and about 15.5TB usable after Nimble's triple-parity RAID. According to the image below I'm getting 1.35X compression with about 13.56TB used, and I'm already close to 90% capacity. With 1.35X compression, shouldn't I get about 20TB usable?
Solved!
09-18-2014 11:19 PM
Re: Question about compression
Hello Hien,
The Nimble UI works in TiB (1024^4 bytes) rather than TB (1000^4 bytes), so an array that shows 15.5TB usable actually holds 17.04TB. There's an open bug to fix the terminology, as it's a little confusing.
The data usage shown on your screen should be read as follows:
Volume Usage - how much data you're actually storing on the array after any data reduction, such as compression and pattern matching.
Primary Compression - how much space you've saved through compression so far.
Therefore, as it stands you've written a total of 19.2TiB (21.1TB) of data to the array. The array has saved 5.64TiB (6.2TB) of space through LZ4 compression, meaning you are storing 13.56TiB (14.9TB) on the array itself.
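The TiB-to-TB arithmetic above can be reproduced in a few lines (a minimal sketch using the figures from this reply; the variable names are mine, not from any Nimble tool):

```python
# The Nimble UI reports TiB (1024^4 bytes), while marketed capacity
# is usually quoted in TB (1000^4 bytes).
TB_PER_TIB = 1.024 ** 4  # ~1.0995

def tib_to_tb(tib: float) -> float:
    """Convert tebibytes to terabytes."""
    return tib * TB_PER_TIB

volume_usage_tib = 13.56        # data stored after compression
compression_savings_tib = 5.64  # space saved by LZ4 compression

# Total data written before compression: 19.2 TiB
written_tib = volume_usage_tib + compression_savings_tib

print(f"usable 15.5 TiB  = {tib_to_tb(15.5):.2f} TB")
print(f"written {written_tib:.1f} TiB = {tib_to_tb(written_tib):.1f} TB")
print(f"stored  {volume_usage_tib} TiB = {tib_to_tb(volume_usage_tib):.1f} TB")
```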
Hope this helps!
twitter: @nick_dyer_
09-18-2014 11:26 PM
Re: Question about compression
Hi Nick,
When I calculate the datastore usage on my ESXi hosts, though, I still only come up with about 14.3TB. Is there something else I'm missing? Thanks for taking the time to respond.
09-19-2014 03:52 AM
Solution
Hi Hien,
This is most likely because you have created and then deleted, or Storage vMotioned, a VM. VMware will report that the volume usage has shrunk, whereas on the array all we see are used blocks. What you should do is run SCSI UNMAP within VMware to reclaim those dead blocks within the volume.
Here's a good blog on SCSI UNMAP for your information: Space Reclamation in vSphere 5.5 with Nimble Storage
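For reference, on vSphere 5.5 the reclaim is run per datastore from an ESXi host with esxcli (a sketch only - "NimbleDS01" is a hypothetical datastore name, and you should check the blog above for caveats before running it):

```shell
# Manually reclaim dead blocks on a VMFS datastore.
# -l / --volume-label : datastore name ("NimbleDS01" is a placeholder)
# -n / --reclaim-unit : optional number of VMFS blocks unmapped per pass
esxcli storage vmfs unmap -l NimbleDS01 -n 200
```

This needs to be re-run periodically (or scripted), since ESXi 5.5 does not issue UNMAP automatically on VMFS.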
twitter: @nick_dyer_