Karthik S S
Honored Contributor

VA7410

Hi,

We have received a new VA7410. It has two controllers (4 FC ports in total) and 15 x 73GB
disks (1095 GB raw). We are planning to implement RAID5. What effective storage
capacity will we get if we configure RAID5? Also, is it possible to
split the array so that Controller 1 is assigned some of the disks/disk space
and Controller 2 the rest?

Kindly let me know

Thanks
Karthik
For a list of all the ways technology has failed to improve the quality of life, please press three. - Alice Kahn
Eugeny Brychkov
Honored Contributor
Solution

Re: VA7410

Hi,
VA products can work in two modes: RAID 0/1 and AutoRAID. RAID 0/1 is very simple, and the firmware manages data allocation for optimal performance and reliability. In AutoRAID mode the VA keeps frequently used data in RAID 0/1 and rarely used data in RAID 5DP. Again, physical data allocation on the internal disks is managed by the firmware.
The two ports on one VA controller are equal, although you can set them to different behavior and connect them to different switches.
In normal conditions all even-numbered disks are managed by VA controller 1 (RG1) and all odd-numbered disks are managed by controller 2 (RG2). If a controller failure occurs, the other controller takes over. So the performance path for LUNs created in RG1 is C1, and for LUNs in RG2 it is C2. Note that you cannot create LUNs across redundancy groups - a LUN must belong either to RG1 or to RG2.
So here are the answers:
- RAID5: there is no such option, use AutoRAID mode;
- if you want to balance space between the two RGs, simply balance the disks between odd and even slots in the main enclosure and the JBODs;
- use the performance path for each LUN as its primary path (see the sketch below).
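
If your host is HP-UX (the thread does not say which OS you are running, so this is an assumption), one common way to make the performance path the primary path is with LVM PV links: the first device file you give for a LUN becomes the primary link, and a second device file for the same LUN, added afterwards, becomes the alternate. A minimal sketch only - the device file and volume group names are made up, so substitute whatever ioscan/insf report on your host:

  # Assumption: the LUN in RG1 is seen through controller 1 as /dev/dsk/c4t0d1
  # and through controller 2 as /dev/dsk/c6t0d1 (both names are hypothetical).
  pvcreate /dev/rdsk/c4t0d1               # initialize the LUN for LVM
  mkdir /dev/vg01
  mknod /dev/vg01/group c 64 0x010000     # group file; the minor number must be unique
  vgcreate /dev/vg01 /dev/dsk/c4t0d1      # first path listed becomes the primary link (C1)
  vgextend /dev/vg01 /dev/dsk/c6t0d1      # same LUN via controller 2 becomes the alternate
  vgdisplay -v /dev/vg01                  # shows the primary and alternate links

For a LUN owned by RG2/controller 2 you would simply list the controller 2 device file first.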
More questions?
Eugeny
Vincent Fleming
Honored Contributor

Re: VA7410

With 1 spare per RG, you will get about 561GB usable in AutoRAID mode. You will get 684GB usable with no spares.
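
For a rough sense of where numbers like these come from (my own back-of-the-envelope sketch, not from the array documentation - the exact figures depend on array metadata overhead and on how much data the firmware keeps in RAID 0/1 versus RAID 5DP):

  # 15 x 73GB disks, assuming one active hot spare per RG (2 spares total)
  DISKS=15; SPARES=2; SIZE_GB=73
  RAW=$(( DISKS * SIZE_GB ))                       # 1095 GB raw, as quoted above
  MIRROR=$(( (DISKS - SPARES) * SIZE_GB / 2 ))     # ~474 GB if everything stayed mirrored
  echo "raw=${RAW}GB  all-RAID0/1-with-spares=${MIRROR}GB"

The usable figures quoted in this thread are higher than the pure-mirroring estimate because AutoRAID can hold the colder data in RAID 5DP, which carries far less overhead than mirroring.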

It's automatic that Controller-1 gets half the drives (the even-numbered ones, according to the labels on the drive cabinet), and Controller-2 gets the odd-numbered ones. These drives form groups, called RGs (Redundancy Groups).

LUNs are created within RGs: Controller-1 owns RG-1, so a LUN created in RG-1 is owned by Controller-1, and the same goes for Controller-2 and RG-2. You must specify which RG the LUN will be created in when you create it.

You may access any LUN from any controller, but it's faster if you access LUNs in RG-1 via Controller-1.

So, yes, you can (actually you have to) separate your disks between the controllers.

Note that if you add an expansion cabinet (DS2405) and more drives, the ratio of usable to raw capacity improves noticeably - the array is more space-efficient with a greater number of drives.

Good luck,

Vince
No matter where you go, there you are.
Brian M Rawlings
Honored Contributor

Re: VA7410

One comment: decide up front. If you choose RAID 0/1, you can easily change your mind and switch to "AutoRAID" mode with no data loss or any trouble (other than the backup that any sane admin would do before making such a change).

You cannot, however, switch from AutoRAID mode to RAID 0/1, without backing up your data and wiping your LUNs off the array. This "switch" is a one-way trip.

I suggest running with "hot spare" enabled, and with prefetch enabled as well (I think the defaults for these are "OFF"). I further suggest AutoRAID mode, since only the oldest data (by timestamp, put on every block when last touched) is migrated to RAID 5DP if space is needed.

This basically means that all the static data (logs, old binaries, archives, old emails, whatever) that never gets read anyway is moved to slower storage, and the normally active stuff stays in RAID 0/1.

The VA7410 attempts to store everything as RAID 0/1, and only converts old blocks to RAID5DP storage if there is no more free space. The cool thing about this is, if you notice that some data is stored as RAID5DP, and this bothers you, you can just slide in a drive (without allocating any storage to anything), and, poof, your storage will all go back to RAID 0/1.

It would be interesting to concoct a find command that would traverse the entire tree and give you the names and sizes of all files and directories that haven't been touched in, say, 3 months or 6 months. That would give you some idea of how much space in RAID5DP you could get away with, before you might start seeing a performance dropoff.

Of course, doing this might read all blocks, resetting all timestamps and making the point moot (not really, since within a few hours or days, your active data would still have a "newer" timestamp... just not by as big a delta).
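
A minimal sketch of the kind of find command Brian describes (the mount point and the age threshold are placeholders; note that find only stats the files, so the scan itself should not reset access times):

  # files not accessed in roughly 6 months, with their sizes
  find /your/mount/point -type f -atime +180 -exec ls -ld {} \;

  # or total up the bytes involved (size is the 5th field of ls -l output)
  find /your/mount/point -type f -atime +180 -exec ls -ld {} \; |
      awk '{ sum += $5 } END { printf "%.1f GB\n", sum / (1024*1024*1024) }'

That gives a rough idea of how much data could sit in RAID 5DP before the active working set would be touched.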

Good Luck, and happy RAIDing...

--bmr
We must indeed all hang together, or, most assuredly, we shall all hang separately. (Benjamin Franklin)
Karthik S S
Honored Contributor

Re: VA7410

Hi,

A ton of thanks for all of you for such valuable responses. I shall get back to you if I need any further clarifications.

Thanks again,
Karthik S S
For a list of all the ways technology has failed to improve the quality of life, please press three. - Alice Kahn
Karthik S S
Honored Contributor

Re: VA7410

Hi,

We are planning to go ahead with the AutoRAID feature on the VA7410. While creating the LUNs, is it possible to restrict a LUN so that it is created on a single disk (instead of carving it from the pool of space we get after arrayfmt - about 700GB)? That is, is there any option to create a LUN on a particular hard disk slot? The reason I want this is to have more control over the HDDs/LUNs from the different hosts that are connected to the VA.

Please clarify.

Thanks
Karthik
For a list of all the ways technology has failed to improve the quality of life, please press three. - Alice Kahn
Roger_22
Trusted Contributor

Re: VA7410

No, you cannot assign LUNs to a single disk. But I'd recommend against that strategy anyway. Striping everything has been shown to be the best strategy for performance. A single queue feeding multiple servers always wins over multiple queues with one server each - just recall any experience at a grocery store: they can never balance the service times in the checkout lines (and I always pick the slowest line!). Modern arrays with a write cache eliminate the need to partition the resources on the array to achieve maximum performance.

Unix or Windows server?

What you can, and do, assign is LUNs from an RG. You could create RGs with different numbers of disks (5 is the minimum) - that is, if you had another disk shelf. With your configuration of a single shelf and 15 disks you don't have that flexibility; you'll get the following:

AutoRAID
1 active hot spare (I recommend this for the AutoRAID setting)
8 disks in RG1 - 307GB usable (base 1024)
7 disks in RG2 - 257GB usable

-or-

RAID 1+0
No spares (if you have a failure, the controller will temporarily convert some of your data to RAID 5DP; once you replace the failed disk, it will return that data to RAID 1+0)
8 disks in RG1 - 257GB usable
7 disks in RG2 - 233GB usable

You have a very small array. The capacity advantages of RAID 5DP are small at this size, but the performance risks (some workloads may not be candidates for AutoRAID) remain. Can your application work with the RAID 1+0 capacities? If so, I recommend RAID 1+0 - while your application is running, the array captures performance statistics. These statistics can be used to estimate whether the workload is a candidate for AutoRAID (a small write working set, and daily lows in demand below 20% utilization); if so, it's easy to convert online.


Brian M Rawlings
Honored Contributor

Re: VA7410

For good or for ill, you're pretty much stuck with "stripe everywhere". That's how the AutoRAID (and some other arrays) works.

The good news is, the AutoRAID is the king of simplicity, and not a bad performer, considering that some performance has been traded for ease of use and management.

The bad news is, if you stripe everywhere, all disks are kept busy. Great for a few I/O streams, but if you have very many different things hitting the array, you end up with disks busily seeking back and forth between two or three or more tasks... and all drives are affected. If you want a pair of spindles for just Oracle logs or indices... sorry.

As Roger points out, however, it tends to be a decent workload-spreading system, with acceptable performance for one server, or for a couple, particularly if one app is very busy, and others are mostly idle or bursty.

But, since you really have no option, enjoy the ease of management and benefits, and don't sweat the minor issues like wanting to segregate busy drives. If you wanted the hottest performing array around, this wouldn't have been your first choice (well, maybe it wasn't your choice... been there, done that).

Regards, --bmr
We must indeed all hang together, or, most assuredly, we shall all hang separately. (Benjamin Franklin)