RAID performance comparison
01-14-2010 11:45 AM
Operating system (SBS 2003) installed on logical drive 1 (2 × 72 GB disks in RAID 1)
User data on logical drive 2 (4 × 146 GB disks in RAID 5)
Question: why is the OS installed on the mirrored pair rather than on the RAID 5 set? I would have thought the four-drive RAID 5 set would offer at least 1.5 times the read performance of the mirrored pair.
(Explanation: one drive reads a track in a given time. Two drives in a mirror can potentially read two separate tracks in that same time, i.e. twice as fast. Four drives in RAID 5 read four tracks in the same time, but one of them is parity and is discarded, so three tracks of data are actually read. This all ignores rotational latency, which is reasonable for data volumes much larger than a track.)
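The arithmetic above can be sketched as a pair of one-line functions; a minimal theoretical model that, as noted, ignores rotational latency and controller overhead:

```python
# Theoretical sequential-read multipliers relative to a single disk:
# each RAID 1 mirror member can service an independent read stream,
# while an n-disk RAID 5 set reads n stripes per rotation but one
# stripe's worth is parity and carries no user data.

def raid1_read_factor(disks=2):
    """Each mirror member can serve a separate read stream."""
    return disks

def raid5_read_factor(disks):
    """Per full stripe, one member's worth of data is parity."""
    return disks - 1

print(raid1_read_factor(2))   # mirror pair: 2x a single disk
print(raid5_read_factor(4))   # 4-disk RAID 5: 3x
```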
This is apparently the standard configuration. Can somebody explain why, please?
-- Violin
01-14-2010 04:31 PM
Re: RAID performance comparison
01-15-2010 12:36 AM
I think this "RAID 1 for OS, RAID 5 for data" layout comes more from the idea of separating the OS from user data for recovery purposes than from performance.
If, for example, your OS goes bad due to a virus, corruption, human error, OS bugs, etc., it is somewhat easier to recover only the RAID 1 array without even touching the RAID 5 array holding the data. It depends on how good your backup solution is.
In any case, this 2+4 design is better than, say, six disks in a single RAID 5 array with two logical drives created on it, both for migrating to bigger disks later and for keeping the data on different physical disks.
You can now tolerate two disk failures (one in each array), whereas with six disks in one array you can tolerate only one disk failure in total before data is lost.
Of course, six disks in one RAID 5 set would mean using six 146 GB disks for equal striping across them.
If you have performance issues on the OS drives now, there are a couple of things to look into (if not already done):
- Does the Smart Array 641 have a battery-backed write cache (BBWC) installed? If not, this makes a huge difference.
- If it does not support BBWC, consider moving to a controller that does (e.g. the SA642).
- Are the disks 7.2K, 10K or 15K rpm?
- Has any Performance Monitor tracing been done to find where the bottleneck is? Hardware configuration, OS configuration and application configuration can all play a role.
HTH
Kris
01-15-2010 05:03 AM
Re: RAID performance comparison
With regard to separating OS and data, that can also be done by partitioning a single logical disk.
I think tolerating two simultaneous disk failures is probably not very important; such a failure is only tolerated if one drive fails in each array anyway.
Two logical disks do give the advantage of parallel operation between OS and data. I think the read performance is effectively ×2 for the mirror and ×3 for the 4-disk RAID 5; is this likely to be better than a single logical disk using six disks in RAID 5 (where the read performance is ×5)?
Is my analysis of performance reasonable?
This is a server that we plan to redeploy to a light-use file-and-print function with email. The disks are 10K SCSI and the controller does not have BBWC, so as yet we don't have performance figures.
-- Violin
01-15-2010 03:10 PM
Re: RAID performance comparison
If you redeploy this server, get 6 × 146 GB disks and use RAID 10, with a small partition for the OS. The 641 was an entry-level RAID controller, and without the cache module I wouldn't expect much from its RAID 5 performance.
01-17-2010 07:11 PM
Re: RAID performance comparison
It may be "standard" in the sense of being common, but I haven't seen it referred to as any sort of best practice.
01-17-2010 11:27 PM
Re: RAID performance comparison
The ×3 figure for higher read performance is purely theoretical; I would not count on getting that much improvement in practice. There are papers on the internet with such (theoretical) studies and comparisons that could help you.
Here is one describing the impact of adding a cache module:
http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?objectID=c00818421&lang=en&cc=us&taskId=101&prodSeriesId=367226&prodTypeId=329290
You would need an idea of what type of workload this server will have in its file-and-print role (how much read vs. write, whether the I/O is random or sequential, small files or large files, etc.) in order to tune the configuration optimally, e.g. by choosing the stripe size, the NTFS allocation unit size (4 KB by default, but customizable at format time), the partition offset, and the read/write cache ratio.
Adding a 128 MB BBWC cache module will give you a much bigger performance improvement on the disk subsystem than reconfiguring your disk drives into other RAID levels.
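One of the tunables above, the partition offset, can be sanity-checked with simple arithmetic. A minimal sketch with illustrative values (a 64 KiB controller stripe and the classic 63-sector legacy MBR offset; these are assumptions, not figures measured from this server):

```python
# If the partition start is not a multiple of the stripe size,
# clusters straddle stripe boundaries and some single-stripe I/Os
# become two-disk operations.

STRIPE = 64 * 1024          # assumed 64 KiB controller stripe
LEGACY_OFFSET = 63 * 512    # classic 63-sector MBR partition offset

def is_aligned(offset_bytes, stripe_bytes):
    """True if the partition start falls on a stripe boundary."""
    return offset_bytes % stripe_bytes == 0

print(is_aligned(LEGACY_OFFSET, STRIPE))  # False: legacy offset misaligns
print(is_aligned(64 * 1024, STRIPE))      # True: 64 KiB offset aligns
```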
HTH
Kris