09-08-2012 12:35 PM
Help understanding EVA overhead
I am configuring an HP P4400 and some HP P6300s, and I don't understand the math behind the EVA's disk usage. For the purposes of this post I will be using numbers from an HP P6300 with 16x600GB disks. However, I have an HP P4400 with a similar amount of missing capacity, so I believe it behaves the same way.
Now, if someone reading this can reference a white paper that discusses what the different options actually do, I would appreciate it. I'm looking for a discussion at the level of how data is striped and how the missing capacity is reserved, with a focus on what happens in a failure condition. The user guide references the best practices guide, which doesn't have this level of information. I have not found a good guide that discusses this.
So, in my EVA6300 I have 16x600GB disks. The EVA reports the disks as having a usable capacity of 558.91GB. I have created a disk group that contains 8 disks, with the intention of removing RSS from the equation. My understanding is that, with only 8 disks, the EVA should have only one RSS group in this disk group.
In the "Disk Group Properties" there is a table of estimated capacities for different raid levels. For the following values I have "Disk Drive Failure Protection" set to "None."
The EVA lists free space of 4462GB for Vraid0, 2231GB for Vraid1, 3570GB for Vraid5, and 2975GB for Vraid6.
Raid0 makes the most sense as the total capacity of the group is 558.91GBx8=4471.28GB, which comes to a missing 9.28GB. (I won't quibble over <10GB.)
Raid1 does not make sense from a classical raid1 perspective. I'm under the assumption that Vraid1 is actually raid10, in which case my expected capacity is 558.91GB * 8 / 2 = 2235.64GB, which gives a missing capacity of 4.64GB.
I would expect a classical raid5 to come to 558.91GB * (8-1) = 3912.37, which leads to a missing capacity of 342.37GB. Trying to calculate the number of reserved disks by capacity gives (( 558.91 * 8 ) - 3570) = 901.28GB / 558.91GB = 1.61 disks reserved. I don't understand what the EVA is doing in Vraid5 mode.
Raid6 is similar to raid5. I would expect a classical raid6 to come to 558.91GB * (8-2) = 3353.46GB, which leads to a missing capacity of 378.46GB. The missing capacity is similar to the raid5 missing capacity. Does the EVA incur an overhead of 300-400GB for parity raids?
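The expected-vs-reported arithmetic above can be collected into a short script (this is just a restatement of the numbers in this post, nothing EVA-specific):

```python
# Re-running the capacity math from the post
# (all figures in GB; 558.91 GB usable per disk, 8-disk group).
DISK_GB = 558.91
DISKS = 8
raw = DISK_GB * DISKS  # 4471.28 GB total

expected = {
    "Vraid0": raw,                    # no redundancy
    "Vraid1": raw / 2,                # mirrored stripes (raid10)
    "Vraid5": DISK_GB * (DISKS - 1),  # classical raid5: one disk of parity
    "Vraid6": DISK_GB * (DISKS - 2),  # classical raid6: two disks of parity
}
# Free-space figures reported in "Disk Group Properties"
reported = {"Vraid0": 4462, "Vraid1": 2231, "Vraid5": 3570, "Vraid6": 2975}

for level, exp in expected.items():
    print(f"{level}: expected {exp:.2f}, reported {reported[level]}, "
          f"missing {exp - reported[level]:.2f}")
```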
Now, enabling "Disk drive failure protection" drives the available capacity through the floor. My understanding is that this setting reserves spare space in the group, with the intention that the Vraids will rebuild into the spare space on disk failure. For this section I will only use Vraid0 values.
As I mentioned above, my group is 8x558.91GB disks. With no protection I have 4462GB available for Vraid0.
When I set single disk protection, the available Vraid0 capacity drops to 2973GB. This is a loss of 1,489GB, which is approximately 33% of the non-protected capacity, or 2.66 disks. Why is this loss so high?
With this grouping I cannot enable double disk protection; it says I do not have enough capacity. I don't understand this, but I'm hoping I will after understanding what is happening in the single disk case.
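For what it's worth, the single-protection loss works out as follows (same figures as above):

```python
# Loss of Vraid0 capacity when single disk protection is enabled (GB).
DISK_GB = 558.91
no_protection = 4462      # Vraid0 free space, protection "None"
single_protection = 2973  # Vraid0 free space, single disk protection

loss = no_protection - single_protection
print(f"Loss: {loss} GB = {loss / no_protection:.1%} of unprotected "
      f"capacity = {loss / DISK_GB:.2f} disks' worth")
```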
Can someone help explain the issues noted?
- Tags:
- RAID
09-09-2012 01:00 AM
Re: Help understanding EVA overhead
Hi
I will try to explain some basics
1) Protection level: the EVA reserves the capacity of 2 disks for single protection level and 4 disks for double protection level. This means that with single protection level you lose 2 x 600GB of disk space.
2) Raid 0 is the amount of space available for creating Vdisks in the disk group; you only lose a small overhead, as all of the EVA's configuration data is stored on the disks. Raid 1 is raid 0 divided by 2, and raid 5 has a 20% overhead, because the EVA spreads the data across the disks in groups of 5 disks for raid 5: 4 disks for data and one for parity. Raid 6 is similar, with 2 parity disks.
3) The EVA tries to spread the data inside a disk group evenly across all disks in the group, meaning that the used capacity of a single disk should be about the same across the whole disk group. It's usually best to create a disk group with as many disks as possible; that way you have more disks "working for you".
4) An RSS is an internal EVA subdivision of a disk group; the number of members can be from 6 to 12, but if possible it will be 8.
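As a quick sanity check on the ratios in point 2: applying the 4+1 and 4+2 stripe geometries to the Vraid0 figure reported in the original post (rather than to whole-disk counts) reproduces the quoted Vraid5 and Vraid6 numbers almost exactly. This is my own arithmetic, not from any HP documentation:

```python
# Applying the 4+1 (Vraid5) and 4+2 (Vraid6) stripe ratios from point 2
# to the Vraid0 free space reported in the original post (GB).
vraid0 = 4462

vraid5 = vraid0 * 4 / 5  # 4 data + 1 parity -> 80% usable
vraid6 = vraid0 * 4 / 6  # 4 data + 2 parity -> ~67% usable

print(f"Vraid5 estimate: {vraid5:.1f} GB (reported: 3570)")
print(f"Vraid6 estimate: {vraid6:.1f} GB (reported: 2975)")
```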
Regards
09-09-2012 01:59 PM
Re: Help understanding EVA overhead
With only 16 physical disks, it is best if you have all of them in a single disk group, so as to minimize protection level overhead costs. Each disk group has its own protection level overhead. In addition, you will have better performance with twice the physical disks (16 instead of only 8) working for you.
All virtual RAIDs (Vraid) have an implicit "+0". Vraid1 effectively is RAID10, Vraid5 is RAID50 and Vraid6 is RAID60.
As for the size of the RAIDsets: a 100 GB Vraid5 requires 125 GB of raw storage (4+1), and a 100 GB Vraid6 requires 150 GB of raw storage (4+2).
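The 100 GB examples above reduce to a simple ratio; a throwaway sketch (the function name is mine):

```python
# Raw capacity consumed for a given user capacity under an
# N-data + M-parity stripe, per the 4+1 / 4+2 examples above.
def raw_needed(user_gb, data_disks, parity_disks):
    return user_gb * (data_disks + parity_disks) / data_disks

print(raw_needed(100, 4, 1))  # Vraid5, 4+1 -> 125.0 GB
print(raw_needed(100, 4, 2))  # Vraid6, 4+2 -> 150.0 GB
```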
Note: While I am an HPE Employee, all of my comments (whether noted or not), are my own and are not any official representation of the company
09-09-2012 09:04 PM
Re: Help understanding EVA overhead
"4) an RSS is an internal EVA part of a disk group and the number of members can be from 6 to 12 but if possible it will be 8."
The number is actually 6 to 11. When you add the 12th disk, the RSS splits into two 6-disk RSSes (or at least that's how it used to work...).
To further add to the info already presented...
For small disk groups, you'd most likely never use double protection. Of course, it is a decision that you need to make, but the short of it is: with the behind-the-scenes redundancy and virtualization, losing multiple disks probably will not be a problem.
Typically, if I have a group of 40 or fewer disks, I go with single protection. Above 40, double.
While I can't answer specifically why you cannot set double protection in your "test scenario", based on your numbers you'd have only about 375GB remaining, which really doesn't make any sense. There should be close to 2.2TB of vRaid0 with double protection. Did you restart Command View and re-check? Sometimes CV doesn't display correctly.
vRaid1 is 50% (actually, something like 48% I think is closer).
vRaid5 is about 80% (20% overhead)
vRaid6 is about 67% (33% overhead)
HP Master ASE, Storage, Servers, and Clustering
MCSE (NT 4.0, W2K, W2K3)
VCP (ESX2, Vi3, vSphere4, vSphere5, vSphere 6.x)
RHCE
NPP3 (Nutanix Platform Professional)