
VA7100 RAID Allocations

I have a VA7100 array that had 6 disks. I have one 94 GB LUN that is currently 50% used. According to armperf, the 50 GB of data is all being stored in RAID 5DP mode. Several weeks ago I added 2 additional disks to the array, for a total of 8; I would have expected some of the data in the LUN to move back to RAID 0+1.

Command View shows the following allocations:

Logical Drives: 94 GB
Unallocated: 59 GB
Redundancy: 79 GB
Active Spare: 33 GB

Do I have to add more disks to get a more even balance between raid 5 and raid 0 + 1? Or is there a command I need to run?


11 REPLIES
Brian M Rawlings
Honored Contributor

Re: VA7100 RAID Allocations

I had always understood that these arrays would always try to store everything as mirrored (RAID 0/1), and only rotate blocks to RAID 5DP when forced to (unallocated space dropping to zero, or below some threshold). Your array does not appear to be doing the above, so either I was misinformed, or you have an issue with your array (firmware, failed drive, something like that).

It might be useful if you provided more info on your array and environment (firmware rev, bdf, complete stats from the array, etc).

Regards, --bmr
We must indeed all hang together, or, most assuredly, we shall all hang separately. (Benjamin Franklin)
Eugeny Brychkov
Honored Contributor

Re: VA7100 RAID Allocations

James,
try creating one more LUN, for example 30 GB in size, and see whether the VA's behavior changes. Attach armdsp -a output to your next reply.
Eugeny

Re: VA7100 RAID Allocations

Here is the armdsp output. I will make an additional LUN in the morning... thanks for the quick response!
Eugeny Brychkov
Honored Contributor

Re: VA7100 RAID Allocations

armdsp looks great. Some notes: HP18 firmware is available; you can call HP for the upgrade. The array is connected in private loop mode; if you use FC switches (not hubs or direct connect), the VA would be better off in fabric mode.
Please let us know the results of your testing after the LUN creation.
Eugeny

Re: VA7100 RAID Allocations

Made the second LUN and filled it 10% with some random data... from what I can tell in the early stages of investigating this, most of the data from the new LUN is being stored in 5DP mode. I also checked the pre-existing LUN 1, and all 50 GB is still stored in 5DP. This is a production box, so I do want to be careful in looking at this. Thanks for the note about HP18; the upgrade is planned for some scheduled downtime in April. I will also change the FC port to fabric at that time, as this array is direct attached. Thanks for the help, and as the testing continues I will post the results.
Brian M Rawlings
Honored Contributor

Re: VA7100 RAID Allocations

If the array is directly attached, you can't switch to 'fabric' mode. Direct attached FC is done in 'loop' mode, which is half-duplex.

It takes a switch (the "fabric device") to allow you to operate in 'fabric login' mode. This allows your FC HBA and storage device to run in full-duplex mode, which can double your bandwidth (or not, depending on your typical access pattern).

If you have apps that more or less read and write equally, most of the time, and they are all fairly busy, full-duplex will help a lot. If you have one app, and it mostly reads, you won't see any perceptible difference in overall performance.

Also, to add a fabric while still maintaining no single point of failure in your storage scheme, you need to add a pair of switches, not just one. So, going "fabric" just for performance adds some appreciable cost, and needs to be approached intelligently to be sure you'll get the benefit you expect.

Fortunately, if no other servers are involved (i.e., no need for security or zoning), HP sells the 8-port Brocade 2Gb FC switches in an "Entry" configuration, meaning no zoning functionality. In this limited application, the switches are under $5K each.

So, there are some considerations to going "fabric". Hope the background helps...

--bmr
We must indeed all hang together, or, most assuredly, we shall all hang separately. (Benjamin Franklin)

Re: VA7100 RAID Allocations

We have the switches in place; we just need to set up a zone and place the VA in a zone with the L1000 so that none of the Microsoft machines can see it.

Still cannot understand why the VA7100 wants to allocate all the space as 5DP; even the new LUN I made is 99% stored in 5DP.

I think at my next cycle of upgrades in a few months I will redo the LUNs and the array, and just force the VA7100 to use 1+0 all the time.
Eugeny Brychkov
Honored Contributor

Re: VA7100 RAID Allocations

After you create the additional LUN, please 'leave the VA alone' (I mean do not access its LUNs) to allow it to optimize its contents. Any I/O issued to the VA takes priority over optimization. BTW, do you see 'array is optimizing' when you issue 'armdsp -a'? Try it a few times when the host is idle and the VA is able to optimize its contents.
Eugeny
Eugeny Brychkov
Honored Contributor

Re: VA7100 RAID Allocations

The general idea I've heard is that 'Allocated to Regular LUNs' has to come within one redundancy group (RG) of being more than 50% of the 'Total Physical Size' value.
Eugeny
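That rule of thumb reduces to a one-line arithmetic check. The sketch below is purely illustrative (this is hearsay in the thread, not a documented AutoRAID formula): the function name, the RG size, and the total physical size are all assumptions, with the total reconstructed by summing the four Command View figures quoted earlier.

```python
# Hypothetical sketch of the rule of thumb above: the array is said to favor
# RAID 1+0 once 'Allocated to Regular LUNs' comes within one redundancy
# group (RG) of exceeding 50% of 'Total Physical Size'. All figures in GB.

def likely_to_hold_raid_1_0(allocated_gb, total_physical_gb, rg_size_gb):
    """True if allocation is within one RG of the 50% threshold."""
    threshold_gb = 0.5 * total_physical_gb
    return allocated_gb + rg_size_gb > threshold_gb

# The poster's array: a 94 GB LUN. Total physical size is not quoted in the
# thread, so sum the Command View figures (94 + 59 + 79 + 33 = 265 GB);
# the 18 GB RG size is likewise an assumed value.
total = 94 + 59 + 79 + 33
print(likely_to_hold_raid_1_0(94, total, rg_size_gb=18))  # False: well short of 50%
```

On those assumed numbers the allocation falls well short of the threshold, which would be consistent with the array keeping everything in 5DP.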

Re: VA7100 RAID Allocations

Thanks for the additional info about letting the array idle. Sunday, after my normal cold backup of Oracle, I will let the system sit with the DB down and see if it will start optimizing.
Roger_22
Trusted Contributor

Re: VA7100 RAID Allocations

Upgrade to HP18, keep current Resilience setting (Normal Mode), enable Pre-fetch. These are key to providing the best performance.

AutoRAID keeps the active write working set in RAID 1+0. These are the small (<256K from cache to the disks) IOs. Data that is not frequently written, or written with large blocks is kept in RAID 5DP.

The old Model 12H maximized RAID 1 space. But that proved to be a poor performance choice. As new data is written to the array, it must first create free space by converting some RAID 1+0 capacity to RAID 5, and this takes time, time that the new write must wait on. Also, without free space, the array could not do the highly efficient log-structured RAID 5 writes. The VA will optimize for free space, but is also able to keep large amounts of data in RAID 1+0 for short periods.

If you created a new LUN, wrote randomly to the array, and it created RAID 5DP capacity, then the cache was able to concatenate the IOs into 256K records to the back end. These large-block writes are more efficient to the log-structured RAID 5DP than to RAID 1+0.
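The write-routing idea described above can be sketched as a toy model: small writes (under 256 KB as they leave cache) land in RAID 1+0, while writes the cache has coalesced into full 256 KB records go to the log-structured RAID 5DP area. Everything here (the names, the simple coalescing loop) is an illustration of the concept, not actual array firmware logic.

```python
# Toy model of AutoRAID write routing as described in the post above:
# coalesce a stream of cache writes into 256 KB records; full records go
# to log-structured RAID 5DP, small leftovers stay in mirrored RAID 1+0.
# Names and the coalescing model are assumptions, for illustration only.

RECORD_KB = 256

def route_writes(io_sizes_kb):
    """Coalesce a stream of IO sizes (KB), then classify each record."""
    routed = []
    pending = 0
    for size in io_sizes_kb:
        pending += size
        while pending >= RECORD_KB:
            routed.append(("raid_5dp", RECORD_KB))   # full record: efficient log write
            pending -= RECORD_KB
    if pending:
        routed.append(("raid_1_0", pending))         # small remainder stays mirrored
    return routed

# Nine 64 KB random writes that cache can concatenate: mostly 5DP records.
print(route_writes([64] * 9))
```

This mirrors the observation in the thread: even "random" writes can end up in 5DP when the cache concatenates them into large back-end records.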