ProLiant Servers (ML,DL,SL)

RAID performance comparison

 
SOLVED
Violin_2
Occasional Advisor

RAID performance comparison

Server = ML350G4p with Smart Array 641 controller.

Operating system (SBS2003) installed on logical drive 1 (2 by 72GB in RAID 1)

User data on logical drive 2 (4 by 146GB disks in RAID5)

Question: why is the OS installed on the mirror pair rather than the RAID5 set? I would have thought that the four-drive RAID5 set would have at least 1.5 times the read performance of the mirrored pair.

(Explanation: 1 drive reads a track in a given time. 2 drives in a mirror can potentially read two separate tracks in the same time, i.e. twice as fast. 4 drives in RAID5 read 4 tracks in the same time, but one is parity and is ignored; thus 3 tracks of data are actually read in the same time. This all ignores rotational latency, which is reasonable for data volumes much larger than a track.)
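The arithmetic above can be sketched in a few lines of Python. This is the same best-case model as in the explanation (parallel reads, parity skipped, rotational latency ignored); the function name is just for illustration, and real controllers will fall short of these figures:

```python
def theoretical_read_speedup(raid_level, drives):
    """Best-case sequential read speedup vs. a single drive.

    Ignores rotational latency and controller overhead, as in the
    explanation above; real-world throughput will be lower.
    """
    if raid_level == 1:
        # Each mirror member can serve a different read in parallel.
        return drives
    if raid_level == 5:
        # One drive's worth of each stripe is parity, so it is skipped.
        return drives - 1
    raise ValueError("only RAID 1 and RAID 5 are modelled here")

print(theoretical_read_speedup(1, 2))  # mirror pair: 2
print(theoretical_read_speedup(5, 4))  # 4-disk RAID 5: 3
```

By this model the 4-disk RAID5 set would indeed read 1.5 times faster than the mirror pair (3x vs. 2x a single drive).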

This is apparently the standard configuration. Can somebody explain why, please?

-- Violin
6 REPLIES
rick jones
Honored Contributor

Re: RAID performance comparison

I believe RAID5 requires a minimum of three drives. If one didn't think there was much of a performance requirement for the OS discs vs the user data, and there were only 6 discs in the system, then it would seem to make sense to use only two discs and RAID1. But that is just my opinion.
there is no rest for the wicked yet the virtuous have no pillows
kris rombauts
Honored Contributor
Solution

Re: RAID performance comparison

Hi Violin,

I think this RAID1 -> OS and RAID5 -> DATA layout comes more out of the idea of separating the OS and user data for recovery purposes than out of performance.

If, for example, your OS goes bad due to a virus, corruption, human error, OS bugs, etc., then it is somewhat easier to recover only the RAID1 array without even touching the RAID5 array holding the data. It also depends on how good your backup solution is.
In any case this 2+4 design is better than putting all 6 disks in a single RAID5 array with two logical drives created on it, both for migrating to bigger disks later and for keeping data on different physical disks.
Now you can tolerate 2 disk failures (one in each array), but with 6 disks in one array you can only tolerate one disk failure in total before data is lost.
Of course, 6 disks in one RAID5 array would mean using 6 x 146 GB disks for equal striping across them.
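The fault-tolerance point above can be checked by enumerating two-disk failures. As a small sketch (the disk labels are hypothetical), the 2+4 layout survives a double failure only when each array loses exactly one disk:

```python
from itertools import combinations

# Hypothetical labels: a/b form the RAID 1 pair, c-f the RAID 5 set.
raid1 = {"a", "b"}
raid5 = {"c", "d", "e", "f"}

survivable = 0
for pair in combinations(raid1 | raid5, 2):
    # The 2+4 layout survives two failures only if each array loses one disk.
    if len(raid1.intersection(pair)) == 1:
        survivable += 1

print(survivable)  # 8 of the 15 possible two-disk failures are survivable
# A single 6-disk RAID 5 survives none of those 15 two-disk failures.
```

So the split layout survives 8 of the 15 possible double failures, versus 0 for one big RAID5 array.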


If you have issues with performance on the OS drives now, there are a couple of things to look into (if not already done), e.g.:

- does the Smart Array 641 have a battery-backed write cache (BBWC) installed? If not, this makes a huge difference.
- if it does not support the BBWC, then consider moving to a controller that does (e.g. the SA642)
- are the disks 7.2K, 10K or 15K rpm?
- has any performance monitor tracing been done to investigate where the bottleneck is located? Hardware configuration, OS configuration and application configuration can all play a role.


HTH

Kris

Violin_2
Occasional Advisor

Re: RAID performance comparison

Kris,

With regard to separating OS and data, this can also be done by partitioning a single logical disk.

I think tolerating 2 simultaneous disk failures is probably not very important; such a failure is in any case only tolerated if the failures happen to be one drive in each array.

Two logical disks do give the advantage of parallel operation between OS and data. I think the read performance is effectively x2 for the mirror and x3 for the 4-disk RAID5; is this likely to be better than a single logical disk using all 6 disks in RAID5 (where the read performance is x5)?

Is my analysis of performance reasonable?

This is a server that we plan to re-deploy to a light-use file-and-print function with email. The disks are 10K SCSI, and the controller does not have BBWC. So as yet we don't have performance figures.

-- Violin
wobbe
Respected Contributor

Re: RAID performance comparison

Have you looked into file fragmentation?
If you redeploy this server, get 6 x 146 GB disks and use RAID 10, with a small partition for the OS. The 641 was an entry-level RAID controller, and without the cache module I wouldn't expect much from its RAID 5 performance.
Joshua Small_2
Valued Contributor

Re: RAID performance comparison

I've seen a lot of people build servers this way simply because it's how they always have.

It may be "standard" as far as being common, but I haven't seen it referred to as any sort of best practise.
kris rombauts
Honored Contributor

Re: RAID performance comparison

Hi Violin,

The x3 figure for higher read performance is purely theoretical; I would not count on getting that much higher read performance. There are some papers on the internet with such (theoretical) studies and comparisons that could help you.

Here is one describing the impact of adding a cache module:

http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?objectID=c00818421&lang=en&cc=us&taskId=101&prodSeriesId=367226&prodTypeId=329290



You would need to have an idea of what type of workload this server will have in its F&P function (read% vs. write%, random vs. sequential access, small vs. large files, etc.) to tune the configuration optimally, e.g. choosing the stripe size, NTFS allocation unit size (default 4K, but customizable at format time), partition offset, and the read/write cache ratio setting.


Adding a 128 MB BBWC cache module will give you a much bigger performance improvement on the disk subsystem than re-configuring your disk drives into other RAID levels.


HTH

Kris