02-07-2003 08:37 AM
We have received a new VA7410. It has two controllers (four FC ports in total) and 15 x 73GB disks (1095 GB raw). We are planning to implement RAID5. What effective storage capacity will we get if we configure RAID5? Also, is it possible to separate the array in such a way that controller 1 is assigned some disks/disk space and controller 2 the rest?
Kindly let me know.
02-07-2003 09:21 AM (Solution)
VA products can work in two modes: RAID0/1 and AutoRAID. RAID0/1 is very simple, but the firmware still manages data allocation for optimal performance and reliability. In AutoRAID mode the VA keeps frequently used data in RAID0/1 and rarely used data in RAID5DP. Again, the physical data allocation on the internal disks is managed by the firmware.
The two ports on one VA controller are equivalent, although you can set them to different behaviour and connect them to different switches.
Under normal conditions all even-numbered disks are managed by VA controller 1 (RG1) and all odd-numbered disks by controller 2 (RG2). If a controller failure occurs, the other controller takes over. So the performance path for LUNs created in RG1 is C1, and for LUNs in RG2 it is C2. Note that you cannot create LUNs across redundancy groups: a LUN must belong either to RG1 or to RG2.
So here are the answers:
- RAID5: there is no such option; use AutoRAID mode.
- If you want to balance space between the two RGs, simply balance the disks between odd and even slots in the main enclosure and the JBODs.
- Use the performance path for each LUN as its primary path.
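The even/odd split between the two redundancy groups can be sketched as below. This is only a toy illustration of the mapping described above; the slot numbering (0-based here) and the function name are my own, not the array firmware's logic:

```python
# Toy model: even-numbered slots belong to redundancy group 1 (owned by
# controller 1), odd-numbered slots to RG2 (owned by controller 2).
def split_into_rgs(slots):
    rg1 = [s for s in slots if s % 2 == 0]
    rg2 = [s for s in slots if s % 2 == 1]
    return rg1, rg2

# 15 disks, as in the VA7410 discussed in this thread:
rg1, rg2 = split_into_rgs(range(15))
print(len(rg1), len(rg2))  # an 8/7 split between the two groups
```

With an odd number of disks, one group always ends up one disk larger, which is why the capacity figures later in the thread differ between RG1 and RG2.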
02-07-2003 10:37 AM
It's automatic: Controller-1 gets half the drives (the even-numbered ones, according to the labels on the drive cabinet), and Controller-2 gets the odd ones. These drives form groups called RGs (Redundancy Groups).
LUNs are created within RGs, so Controller-1 owns RG-1, and a LUN created in RG-1 is owned by Controller-1. The same goes for Controller-2 and RG-2. You must specify which RG the LUN will be created in when you create it.
You may access any LUN from any controller, but it's faster if you access LUNs in RG-1 via Controller-1.
So, yes, you can (actually you have to) separate your disks between the controllers.
Note that if you add an expansion cabinet (DS2405) and more drives, your usable capacity will improve considerably relative to raw capacity: the array is more space-efficient with a greater number of drives.
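A rough calculation shows why efficiency improves with drive count. The fractions below are generic RAID arithmetic (mirroring always halves capacity; double parity costs roughly two disks' worth), not the VA firmware's exact allocation:

```python
# Approximate usable-capacity fraction for n disks of equal size.
def raid10_fraction(n):
    return 0.5                 # mirroring always halves capacity

def double_parity_fraction(n):
    return (n - 2) / n         # ~two disks' worth lost to parity

for n in (15, 30, 45):
    print(n, raid10_fraction(n), round(double_parity_fraction(n), 2))
```

At 15 disks double parity yields about 87% of raw capacity; at 45 disks it is about 96%, so the parity overhead shrinks as shelves are added while mirroring stays fixed at 50%.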
02-07-2003 03:18 PM
You cannot, however, switch from AutoRAID mode to RAID 0/1, without backing up your data and wiping your LUNs off the array. This "switch" is a one-way trip.
I suggest running with "hot spare" enabled, and with prefetch enabled as well (I think the defaults for these are "off"). I further suggest AutoRAID mode, since only the oldest data (by a timestamp put on every block when it was last touched) is migrated to RAID5DP if space is needed.
This basically means that static data (logs, old binaries, archives, old emails, whatever) that never gets read anyway is moved to slower storage, while normally active data stays in RAID 0/1.
The VA7410 attempts to store everything as RAID 0/1, and only converts old blocks to RAID5DP storage if there is no more free space. The cool thing about this is, if you notice that some data is stored as RAID5DP, and this bothers you, you can just slide in a drive (without allocating any storage to anything), and, poof, your storage will all go back to RAID 0/1.
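The behaviour described above amounts to an age-based demotion policy. The toy model below only illustrates the idea; the block names, capacity unit, and function are invented here and are not the firmware's actual algorithm:

```python
# Toy AutoRAID model: everything lives in the fast mirrored tier until it
# fills up, then the least recently touched blocks are demoted to the
# parity (RAID5DP) tier.
def rebalance(blocks, mirror_capacity):
    """blocks: dict of block name -> last-touched timestamp."""
    by_age = sorted(blocks, key=lambda b: blocks[b])   # oldest first
    overflow = max(0, len(blocks) - mirror_capacity)
    raid5dp = set(by_age[:overflow])    # demoted: the oldest data
    raid01 = set(by_age[overflow:])     # stays mirrored
    return raid01, raid5dp

blocks = {"logs": 1, "archive": 2, "db": 9, "mail": 8}
fast, slow = rebalance(blocks, mirror_capacity=2)
print(sorted(fast), sorted(slow))  # recently touched data stays fast
```

Note that adding capacity (raising `mirror_capacity`) makes the overflow vanish, which mirrors the "slide in a drive and everything goes back to RAID 0/1" behaviour described above.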
It would be interesting to concoct a find command that would traverse the entire tree and give you the names and sizes of all files and directories that haven't been touched in, say, 3 months or 6 months. That would give you some idea of how much space in RAID5DP you could get away with, before you might start seeing a performance dropoff.
Of course, doing this might read all blocks, resetting all timestamps and making the point moot (not really, since within a few hours or days, your active data would still have a "newer" timestamp... just not by as big a delta).
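One way to approximate that find with a stat-only scan (so file contents, and hence the per-block timestamps the array cares about, are never read). The mount point and the 90-day cutoff are placeholders to adjust for your filesystem:

```python
import os
import time

def stale_files(root, days=90):
    """Yield (path, size) for regular files not accessed in `days` days.
    Uses stat() only, so no data blocks are read."""
    cutoff = time.time() - days * 86400
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            if st.st_atime < cutoff:
                yield path, st.st_size

# Example (hypothetical mount point): total bytes untouched for ~3 months.
# total = sum(size for _, size in stale_files("/mnt/va7410"))
```

That total gives a rough ceiling on how much data could sit in RAID5DP without affecting the active working set.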
Good Luck, and happy RAIDing...
02-07-2003 08:46 PM
A ton of thanks to all of you for such valuable responses. I shall get back to you if I need any further clarifications.
Karthik S S
02-10-2003 03:48 AM
We are planning to go ahead with the AutoRAID feature on the VA7410. While creating LUNs, is it possible to restrict a LUN so that it is created on a single disk (instead of drawing from the pool of space we get after arrayfmt, about 700GB)? That is, is there any option to create a LUN on a particular hard-disk slot? The reason I want this is to have more control over the HDDs/LUNs from the different hosts that are connected to the VA.
02-10-2003 08:11 AM
Unix or Windows server?
What you can, and do, assign is LUNs from an RG. You could create RGs with a different number of disks (5 is the minimum), but only if you had another disk shelf. With your configuration of a single shelf and 15 disks you don't have that flexibility; you'll get the following:
With 1 active hot spare (I recommend this for the AutoRAID setting):
- 8 disks in RG1: 307GB usable (base 1024)
- 7 disks in RG2: 257GB usable
With no spares (if you have a failure, the controller will temporarily convert some of your data to RAID 5DP; once you replace the failed disk, it will return that data to RAID 1+0):
- 8 disks in RG1: 257GB usable
- 7 disks in RG2: 233GB usable
You have a very small array. The capacity advantages of RAID 5DP are small at this size, but the performance risks (some workloads may not be good candidates for AutoRAID) remain. Can your application work with the RAID 1+0 capacities? If so, I recommend RAID 1+0. While your application is running, the array captures performance statistics, which can be used to estimate whether this workload is a candidate for AutoRAID (a small write working set, and daily lows in demand below 20% utilization); if so, it's easy to convert on-line.
02-10-2003 10:11 PM
The good news is, AutoRAID is the king of simplicity, and not a bad performer, given that it trades some performance for ease of use and management.
The bad news is, if you stripe everywhere, all disks are kept busy. Great for a few I/O streams, but if you have very many different things hitting the array, you end up with disks busily seeking back and forth between two or three or more tasks... and all drives are affected. If you want a pair of spindles for just Oracle logs or indices... sorry.
As Roger points out, however, it tends to be a decent workload-spreading system, with acceptable performance for one server, or for a couple, particularly if one app is very busy, and others are mostly idle or bursty.
But, since you really have no option, enjoy the ease of management and benefits, and don't sweat the minor issues like wanting to segregate busy drives. If you wanted the hottest performing array around, this wouldn't have been your first choice (well, maybe it wasn't your choice... been there, done that).