EVA raid levels and related performance
10-06-2008 12:45 PM
I have an EVA 8000 containing one disk group made up of identical disks.
Right now I am very tight on space, and since I am unable to get more disks for some time, I was considering creating some LUNs for an Oracle database on RAID-5 instead of the usual RAID-1.
As the controller handles the writes, and my disk group is in asynchronous mode for replication etc., am I right in thinking there should be little drop in write performance between the two? How about read performance?
Finally, if I were to do this and put the data on RAID-5, should I look to put the redo and/or archive logs onto a smaller RAID-1 LUN?
10-06-2008 12:47 PM
Re: EVA raid levels and related performance
I would test this theory for a while before pulling the trigger.
10-06-2008 12:55 PM
Re: EVA raid levels and related performance
I will do some tests myself, but I was looking to see if anyone has practical experience with what I am describing. If people with the EVA 8000 say "forget it", then I can save myself the time of configuring and testing, knowing I would be wasting my time.
10-06-2008 01:14 PM
Solution
You are not the first, nor the last, to ask.
Please do look around (google) for existing whitepapers/best-practice documents on this.
>> Right now I am very tight on space, and since I am unable to get more disks for some time, I was considering creating some LUNs for an Oracle database on RAID-5 instead of the usual RAID-1.
I like the intro. You realize it would be nicer/better if you could stick to RAID-1, but current constraints do not allow that. Fine.
>> As the controller handles the writes, and my disk group is in asynchronous mode for replication etc., am I right in thinking there should be little drop in write performance between the two?
The EVA controller will do a lot to 'hide' the RAID-5 inefficiencies. For some loads, such as large writes towards the archive logs, it can avoid the overhead almost entirely by recognizing full-stripe writes: there is no need to read the old data/parity first, just write fresh data and parity.
For short random writes, RAID-5 will necessarily keep 2x more disks busy than RAID-1, which the application may or may not notice. Typically an Oracle application does NOT wait for the random data writes, only for the reads... but those are impacted.
>> How about read performance?
Typically just fine, save for those writes potentially hurting the reads. There will be only one place/spindle from which to read a specific morsel of data, so that disk could be busy, but there will be more spindles for the same amount of data.
>> Finally, if I were to do this and put the data on RAID-5, should I look to put the redo and/or archive logs onto a smaller RAID-1 LUN?
Absolutely. Notably the REDO. The application users do wait (on commit) for the REDO writes to complete.
Mix and match as you see fit! No bonus points for having all storage be the same.
Ideally you might want those RAID-1s in a small disk group of their own, giving you tight control over their performance, but it does not sound like you have enough drives to do that.
All in one big pile tends to be the recommended configuration over a few dedicated groups. It's too hard to balance space and speed with more than one group.
Hope this helps some,
Hein van den Heuvel
HvdH Performance Consulting.
10-06-2008 01:24 PM
Re: EVA raid levels and related performance
I will have a detailed read tomorrow, and once I get a chance to test this, I will post my findings :)
Thanks
10-06-2008 02:35 PM
Re: EVA raid levels and related performance
On the EVA (disk-oriented storage), the performance rule is: the more disks in the disk group, the more performance (more spindles behind the VDISK).
EVA backend:
a) for datafiles, RAID-5 is sufficient (it is 4D+1P)
b) for redo logs/archive logs it is usually good to have RAID-1 (4D+4P)
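The 4D+1P and 4D+4P notation above also determines how much raw capacity each Vraid level leaves usable; a quick sketch of that arithmetic (the helper function is my own, not EVA terminology):

```python
# Usable capacity fraction for the two EVA layouts mentioned above:
#   Vraid5 stripes 4 data + 1 parity chunk (4D+1P)
#   Vraid1 mirrors  4 data + 4 copy  chunks (4D+4P)

def usable_fraction(data_chunks, protection_chunks):
    """Fraction of raw capacity available to the host."""
    return data_chunks / (data_chunks + protection_chunks)

print(usable_fraction(4, 1))  # Vraid5: 0.8 -> 80% of raw space usable
print(usable_fraction(4, 4))  # Vraid1: 0.5 -> 50% of raw space usable
```

This 80% vs 50% gap is exactly why RAID-5 is attractive when, as in the original post, space is tight.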
EVA frontend:
a) if you have LVM, you can divide each volume into 8 small LUNs (one per EVA front port) and put them into the LVM VGs for nice front-end load balancing:
e.g. if you need a 160GB disk, present it as 8x20GB disks instead, etc.
You need not create separate disk groups for RAID-5 and RAID-1.
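The one-LUN-per-front-port carve-up above is simple arithmetic; a sketch of it (rounding up so the total never falls short of the requested size — the function name and port count default are my own):

```python
# Split a required volume size into one equal-sized LUN per EVA
# front-end port, so host-side LVM can stripe I/O across all ports.
import math

def lun_sizes(total_gb, n_ports=8):
    """Return per-LUN sizes in GB, rounded up to whole gigabytes."""
    per_lun = math.ceil(total_gb / n_ports)
    return [per_lun] * n_ports

print(lun_sizes(160))  # [20, 20, 20, 20, 20, 20, 20, 20]
```

The host would then add all eight LUNs to one volume group and create striped logical volumes across them, spreading the load over every front-end port.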