Disk Enclosures

Re: Definition of raw disk capacity vs usable capacity

 
Keith C. Patterson
Frequent Advisor

Definition of raw disk capacity vs usable capacity

Hello,
I think I'm a little confused. I just read a report that defined raw disk capacity as the total capacity available for data not counting any RAID overhead.
My understanding was that usable capacity was all the capacity left over for data after the RAID overhead has been taken into account.
Where am I going wrong on this?
Thanks.
9 REPLIES
Steven Clementi
Honored Contributor

Re: Definition of raw disk capacity vs usable capacity

Keith:


Generally speaking...

RAW Space is the Absolute TOTAL disk space available to the array subsystem.

For instance... an EVA with 8 300GB disks in a group would give you 2.4TB RAW space.

Subtract from that Single Disk protection... and you have 1.8TB Usable Space.

So, USABLE space is generally the amount of space left after ARRAY Subsystem Overhead, not necessarily just RAID overhead. Think of this:

1.8TB Total Usable
at RAID 0, you can use ~1.8TB
at RAID 1, you only have ~900GB available for data, but are using the full 1.8TB
at RAID 5, you only have ~1.5TB available for data, but are still using 1.8TB
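The arithmetic above can be sketched in a few lines of Python. This is just the numbers from the example, not an array API; the assumption that "single disk protection" reserves two disks' worth of space is taken from the 2.4TB-to-1.8TB figures in the post.

```python
# Rough sketch of the raw vs. usable vs. per-RAID-level arithmetic above.
# Assumes EVA-style single disk protection reserves two disks' worth of space.
DISKS = 8
DISK_GB = 300

raw_gb = DISKS * DISK_GB          # 2400 GB raw: everything the array can see
usable_gb = raw_gb - 2 * DISK_GB  # 1800 GB usable after sparing overhead

# Data capacity available at each RAID level, out of the usable pool
data_capacity = {
    "RAID0": usable_gb,           # no redundancy: all 1800 GB holds data
    "RAID1": usable_gb / 2,       # mirrored: 900 GB of data consumes 1800 GB
    "RAID5": usable_gb * 5 / 6,   # one parity disk per 6-disk stripe: 1500 GB
}

for level, gb in data_capacity.items():
    print(f"{level}: {gb:.0f} GB available for data")
```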


Steven
Steven Clementi
HP Master ASE, Storage, Servers, and Clustering
MCSE (NT 4.0, W2K, W2K3)
VCP (ESX2, Vi3, vSphere4, vSphere5, vSphere 6.x)
RHCE
NPP3 (Nutanix Platform Professional)
Ivan Ferreira
Honored Contributor

Re: Definition of raw disk capacity vs usable capacity

That's right: when you create a RAID set, you lose some "usable" space to redundancy. The amount of space used for mirroring or parity depends on the RAID level:

RAID 0 - no space lost, no protection
RAID 1 - 50% space lost
RAID 5 - one disk's worth of space lost (20-25% for a 4-5 disk set)
RAID 0+1 - 50% space lost
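The percentages above fall out of simple ratios; the RAID 5 range of 20-25% corresponds to a 4-5 disk set, since parity costs one disk out of N. A hypothetical helper to make that explicit:

```python
# Hypothetical helper illustrating the redundancy overhead per RAID level.
# The RAID 5 loss is one disk out of n_disks, so it shrinks as sets grow.
def space_lost_fraction(level: str, n_disks: int) -> float:
    """Fraction of raw capacity lost to redundancy."""
    if level == "RAID0":
        return 0.0                 # striping only, no protection
    if level in ("RAID1", "RAID0+1"):
        return 0.5                 # every block is mirrored
    if level == "RAID5":
        return 1.0 / n_disks       # one disk's worth of parity
    raise ValueError(f"unknown level: {level}")

print(space_lost_fraction("RAID5", 5))   # 0.2  -> the "20%" case
print(space_lost_fraction("RAID5", 4))   # 0.25 -> the "25%" case
```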

Also, there are differences between the real capacity of the disks and the commercial/sales capacity. The real disk capacity is calculated using 1024-byte kilobytes.
Why do it the hard way, when you can do it the easy way?
Uwe Zessin
Honored Contributor

Re: Definition of raw disk capacity vs usable capacity

Both capacities are 'real' ;-) and convertible.

The 'commercial/sales' values are k=1000.
The 'software' values are K=1024.
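Converting between the two is just a change of base. A quick sketch using a "300 GB" disk as an example (the disk size is an assumption for illustration):

```python
# Convert a vendor's k=1000 capacity figure into the K=1024 view the OS shows.
sales_bytes = 300 * 1000**3           # "300 GB" as sold (decimal gigabytes)
software_gib = sales_bytes / 1024**3  # the same bytes in binary gigabytes

print(f"{software_gib:.2f} GiB")      # ~279.40 GiB reported by software
```

This is why a freshly attached "300 GB" disk shows up as roughly 279 GB in the OS before any RAID or filesystem overhead is involved.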
DCE
Honored Contributor

Re: Definition of raw disk capacity vs usable capacity

You also have to take into account the file system overhead. Usable raw space is larger than usable file system space.

Dave
Keith C. Patterson
Frequent Advisor

Re: Definition of raw disk capacity vs usable capacity

DCE, are there any standard numbers for how much of the total raw capacity a filesystem will make usable, e.g. 90%, 80%, ...?

Thanks.
Steven Clementi
Honored Contributor

Re: Definition of raw disk capacity vs usable capacity

Keith:

In the case of Windows, the file system overhead is generally quite small, usually a tiny percentage.

Example: I have a logical unit of 126.15GB

Unformatted, it is 126.15GB.
Formatted, the available free space is 126.08GB.
That leaves over 99.9% of the capacity free.
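Working the numbers from that example gives the formatting overhead as a share of the LUN:

```python
# Formatting overhead from the NTFS example above (figures from the post).
raw_gb = 126.15
formatted_free_gb = 126.08

overhead_gb = raw_gb - formatted_free_gb
print(f"overhead: {overhead_gb:.2f} GB ({overhead_gb / raw_gb:.3%})")
```

About 0.07 GB, i.e. well under a tenth of a percent, which is why NTFS overhead barely registers on a freshly formatted disk.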


See attached image of newly formatted disk.


Steven
Mark Poeschl_2
Honored Contributor

Re: Definition of raw disk capacity vs usable capacity

Ditto for AdvFS on Tru64. I've got a 250 GB file system which uses 62 MB for metadata. I would expect other Unix file systems to be roughly similar.

The percentage is going to vary quite a bit depending on your application no matter what the OS. If you have zillions of tiny files vs. a few huge ones, the percentage of storage used for file system overhead will be much larger.
Stuart Whitby
Trusted Contributor

Re: Definition of raw disk capacity vs usable capacity

Actually, FS overhead on Unix normally depends on the inode count. Each inode takes up a set amount of space and points to a bunch of disk blocks (hopefully contiguous) or to other inodes, which in turn point to disk blocks, and so on.

I once ran out of space on a filesystem that was only at 88% because it was full of code fragments and various versions of those fragments. The disk was set up for one inode per 8k of disk space, with an average file size of under 2k, so the inodes were exhausted long before the blocks. That was a pain to fix: basically back up the data, reformat the disk with twice the number of inodes, then restore the data. It worked, but the used space went up some (can't remember what to).

With UFS, which we were using at the time, that space is fixed. With NTFS, the reason there's such a small FS overhead is that the MFT is just a file, and a just-formatted disk has hardly anything on it, so there is little to store. Each MFT entry is 1k, so for every 5-byte file there's 1k plus 5 bytes taken up (okay, it'll still probably occupy a full block on disk, so your 5-byte file may actually end up taking 9k: one 8k disk block plus 1k in the MFT).
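The inode-exhaustion scenario can be sketched with round numbers. The filesystem size here is hypothetical; the one-inode-per-8k provisioning and ~2k average file size are from the story above:

```python
# Sketch of inode exhaustion: with one inode per 8 KB of space and files
# averaging ~2 KB, the inodes run out long before the disk blocks do.
fs_bytes = 10 * 1024**3        # hypothetical 10 GiB filesystem
bytes_per_inode = 8 * 1024     # provisioned at one inode per 8 KB
avg_file_bytes = 2 * 1024      # average file is ~2 KB

inodes = fs_bytes // bytes_per_inode       # total inodes available
data_used = inodes * avg_file_bytes        # data stored when every inode is used

print(f"inodes: {inodes}")
print(f"capacity used when inodes run out: {data_used / fs_bytes:.0%}")
```

With uniform 2k files the filesystem would report "full" (out of inodes) at only 25% of its block capacity, which is how a mixed workload can hit the wall at 88%.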

Oh, and as for your raw disk capacity, the full amount is what it *is*, raw. Until the system knows whether it's going to be RAID 0, 1, 5 etc, all it has is a bunch of disks with available space.
A sysadmin should never cross his fingers in the hope commands will work. Makes for a lot of mistakes while typing.