- EVA4/6/8 SUM of vdisks does not match occupancy
07-09-2009 02:07 PM
I have several EVAs and the result is the same on all of them, but I will use the EVA4000 for simplicity.
CVEVA=v8.02, XCS=6.200, SSSU=8.0
We have one disk group of 56*300G drives (i.e. a full EVA4000). CVEVA states that the Total is 15083GB and the Occupancy is 10749GB.
I have been using SSSU to pull information on the vdisks for utilization purposes so that I can graph and keep history on how much space is assigned to each server. I summed the sizes of all of the vdisks and came up with 8551GB. Why does this number not match the occupancy?
All of the vdisks assigned are using RAID5 and the disk group is set for single redundancy.
3 REPLIES
07-09-2009 08:44 PM
Solution
> Occupancy is 10749GB
This is RAW space used.
> all of the vdisks and came up with 8551GB
My guess is that you just added the host-visible space without taking VRAID-5 redundancy into account:
8551*1.25 = 10688.75
Not exactly, as some additional space is used by the EVA internally, but it looks much more realistic, doesn't it?
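The arithmetic above can be sketched in a few lines (a minimal sketch, assuming the usual VRAID usable fractions; the extra space the EVA uses internally is deliberately not modelled, so the result will come out a bit below the reported occupancy):

```python
# Usable fraction of raw space for each VRAID level
# (VR5 = 4 data + 1 parity -> 4/5 = 0.8).
VRAID_USABLE = {"VR0": 1.0, "VR1": 0.5, "VR5": 0.8}

def raw_occupancy_gb(visible_gb: float, vraid: str) -> float:
    """Raw disk-group space consumed by a vdisk of the given host-visible size."""
    return visible_gb / VRAID_USABLE[vraid]

# The 8551 GB vdisk sum from the question, all VRAID5:
print(round(raw_occupancy_gb(8551, "VR5"), 2))  # 10688.75
```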
--
Can you run
ls disk full XML
ls disk_group full XML
ls vdisk full
ls container full XML
ls snapshot full XML
ls dr_group full XML
put the output in a .ZIP file and post it as an attachment so we can get a better picture?
07-21-2009 11:11 AM
Re: EVA4/6/8 SUM of vdisks does not match occupancy
Thanks Uwe.
You are right. I verified that there is a 0.8 factor for RAID5. The factor for RAID0 is 1.0 and the factor for RAID1 is 0.5.
There is also space reserved for the protection level. "The software algorithm for reserving reconstruction space finds the largest disk in the disk group; doubles its capacity; multiplies the result by 0, 1, or 2 (the selected protection level); and then removes that capacity from free space."
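The quoted reservation rule works out like this for the 56 x 300 GB disk group in the question (a sketch of the rule as quoted; `reconstruction_reserve_gb` is a hypothetical helper, sizes assumed in GB):

```python
def reconstruction_reserve_gb(disk_sizes_gb, protection_level):
    """Space removed from free space for reconstruction:
    2 * largest disk in the group * protection level (0, 1, or 2)."""
    return 2 * max(disk_sizes_gb) * protection_level

# 56 x 300 GB disk group, single redundancy (protection level 1):
print(reconstruction_reserve_gb([300] * 56, 1))  # 600
```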
07-22-2009 04:11 AM
Re: EVA4/6/8 SUM of vdisks does not match occupancy
Just to explain these numbers:
VR0 is of course 100% capacity.
VR1 is 50%, because it's mirrored.
VR5 is 80%, because internally the EVA uses 4D+1P (4 data "chunks" plus 1 parity). Unlike "classical" arrays, these 4D+1P stripes are spread across all HDDs in a disk group (there is more to it, such as RSS sets, but 4D+1P is enough to explain the 20% penalty regardless of the number of disks used for VR5 ;-).
There is also a ~7% "penalty" for drives smaller than 1 TB, and around 10% for the 1 TB drives, when you compare the disk sizes reported by (Windows) hosts with the vdisk sizes in CommandView: the reason is that one counts a "GB" as 1000*1000*1000 bytes and the other as 1024*1024*1024 bytes...
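That unit mismatch is easy to demonstrate (a sketch only; `host_reported_gb` is a hypothetical name, assuming one side counts a GB as 1000^3 bytes and the other as 1024^3 bytes, as the post describes):

```python
DECIMAL_GB = 1000 ** 3  # 10^9 bytes
BINARY_GB = 1024 ** 3   # 2^30 bytes

def host_reported_gb(decimal_gb: float) -> float:
    """Size shown by a side counting 1024^3-byte GB for a volume
    that the other side sized in 1000^3-byte GB."""
    return decimal_gb * DECIMAL_GB / BINARY_GB

print(round(host_reported_gb(100), 1))  # 93.1 -> the ~7% gap
```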