sa vs. dba texas death match over raid 1/5

p7
Frequent Advisor

sa vs. dba texas death match over raid 1/5

Hi all,

Do any of you ever have issues with Oracle DBAs over RAID levels? We have an XP1024 connected to a V2200 and are running RAID 5 with striping on the front and back end. The DBAs are complaining that they noticed wait I/O on the redo logs (I'm not sure how), and they want their redo logs on RAID 5 instead. I checked Glance and the I/O is very light. They keep trotting out the official Oracle tuning guide ("SAME"), but I tried to explain that arrays behave differently, with no success. Has any of you experienced a similar problem?

Thanks in advance
11 REPLIES
Mark Grant
Honored Contributor

Re: sa vs. dba texas death match over raid 1/5

Well, Glance is not the best tool to determine what is happening inside an XP. I wouldn't start planning any architectural changes based on its output.

If they are seeing write I/O issues, then it is true that RAID 1 is faster for writes.

Yesterday we had an HP storage expert come around and spend half an hour explaining that received DBA storage wisdom no longer holds true. He then spent an hour showing us EVA benchmarks that clearly show RAID 1 giving significantly higher I/O rates in the majority of database situations.
Never precede any demonstration with anything more predictive than "watch this"
Simon Hargrave
Honored Contributor

Re: sa vs. dba texas death match over raid 1/5

Unless it's a very high-I/O database with very random writes, chances are it won't make a huge difference. I/O on an XP array goes via cache, so control returns to the process almost instantly. If the disks are taking a hit, then RAID 1 will, in theory, be better. You really need to use XP Performance Advisor to see actual disk I/O, though.

Another option: if you have different disk types in your XP (e.g. 18GB 15k rpm, 72GB 10k rpm), put the redo logs and similar files on the faster 15k rpm disks.

However, using Performance Advisor you should be able to show whether there are any hotspots, and rearrange LUNs to "cool down" the hotspots.

Performance Advisor is expensive, but if you're fairly sure you don't have I/O issues, you can get a 60-day(?) trial from HP, print off loads of graphs for the DBAs, then walk back to your desk smiling ;)
p7
Frequent Advisor

Re: sa vs. dba texas death match over raid 1/5

Thanks much
Gary L. Paveza, Jr.
Trusted Contributor

Re: sa vs. dba texas death match over raid 1/5

We have this argument constantly. Don't forget that on the XP you have a (hopefully) large amount of cache, which is going to absorb a lot of the wait time. The guides the Oracle DBAs refer to usually relate to JBOD, not high-end arrays.
Augusto Castro
Occasional Advisor

Re: sa vs. dba texas death match over raid 1/5

I am surprised that your DBA is asking to place the redo logs on RAID 5. Oracle's recommendation is to place them on RAID 1. The explanation is that redo logs are written sequentially and the database performs write-intensive operations on these files; RAID 1 performs better for write I/O.

If you have technical support with Oracle you should review with your DBA the following document:
Note:30286.1: I/O Tuning with Different RAID Configurations

There is always the possibility of contention even with the best disk array and choice of RAID level: for example, an improper number of redo logs, the wrong size, or a redo log buffer in memory that is too small.
p7
Frequent Advisor

Re: sa vs. dba texas death match over raid 1/5

Whoops, I made a typo.

You're correct, they want to put the redo logs on RAID 1.

Steven E. Protter
Exalted Contributor

Re: sa vs. dba texas death match over raid 1/5

SA here.

Oracle recommends RAID 1 or RAID 10 for data, indexes and redo logs.

Period. Doing otherwise risks performance problems.

We have some low-use databases on RAID 5, but the ones with transaction volume are on RAID 10.

Measure performance during a load situation to be certain.

Don't use Glance; use sar collection scripts. I've probably posted a good set a thousand times.
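A minimal sketch of that idea, assuming plain `sar -d`-style output (the device names, sample numbers, and threshold below are made up for illustration): filter the per-device lines for an average wait worth arguing about, and hand the DBAs real numbers.

```shell
#!/bin/sh
# Sketch: flag devices whose average wait (avwait) exceeds a threshold.
# Field layout assumed: device %busy avque r+w/s blks/s avwait avserv,
# as reported by "sar -d". The sample data here is invented.

THRESHOLD=5.0   # ms of average wait considered worth a look

SAMPLE='c0t6d0    12.3   0.5   48   512   0.0   6.2
c4t0d1    87.1   4.2  310  4096  12.7   9.8
c4t0d2     3.0   0.5    5    64   0.0   5.1'

# Print only the devices over the threshold, with their avwait
echo "$SAMPLE" | awk -v t="$THRESHOLD" '$6 > t { print $1, "avwait =", $6, "ms" }'
```

In a real collection script you would replace the sample with `sar -d 5 12` output captured during a load window.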

SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
Tim D Fulford
Honored Contributor

Re: sa vs. dba texas death match over raid 1/5

Hmm, I really do not like RAID 5 or 6 (RAID5DP). That said:

RAID5DP and RAID 5 do have significant advantages when writing sequentially WITH CACHE. This MAY be where the DBAs are getting their info.

Quick rundown. Assume the following stripes are on the disks in RAID 6 (RAID5DP); p and q are linearly independent parities and d is data. There are 8 disks and 3 stripes.

d10 d11 d12 d13 d14 d15 p1 q1
q2 d20 d21 d22 d23 d24 d25 p2
p3 q3 d30 d31 d32 d33 d34 d35

If you were to overwrite the WHOLE of d10-d15, the controller would need to do 8 IOs (6 data and 2 parity). If this were a RAID 10 stripe it would require 12 IOs (6 data and 6 mirror). So for LARGE sequential writes RAID5DP beats RAID 10 (here RAID 10 needs 50% more IOs, though it depends on stripe length).

If you now write ONLY d10 (a single block) AND p1 and q1 are in cache, you would need to do 3 IOs (1 data, 2 parity). RAID 10 would need 2 IOs for the same write. THUS RAID 10 is 50% better than RAID5DP for single-block writes, PROVIDED parity is in cache.

If you wanted to do the same write but p and q were NOT in cache, you would need to read d10, p1 and q1, then write back d10, p1 and q1: 6 IOs in total. RAID 10 would STILL need only 2 IOs. Thus block writes where parity is NOT in cache cost RAID5DP three times the IOs of RAID 10. This is the performance killer; this is where the system will need to wait on the disks.

The astute reader (that's you at the back!!) will say: hey, this is rubbish, ALL my writes go to cache. This is absolutely true, BUT the cache will flush, and it flushes at the expense of disk read performance. So you may be reading something totally unrelated, yet have to wait for the cache flush to finish before the read can commence.

I would agree in principle that the redo logs are sequential in nature, and thus fit into the first, very efficient category. But IF the cache is not large enough to hold the data and parity CONTINUOUSLY, you risk the last two scenarios. This means you need LOTS of cache... how many disks can you buy for the price of, say, 2GB of cache memory?

In conclusion, RAID 10, whilst expensive and inefficient in disk usage, probably offers similar price:performance on a live OLTP system.


Regards

Tim
-
Hein van den Heuvel
Honored Contributor

Re: sa vs. dba texas death match over raid 1/5

>>>> they want their redo logs on raid 5 instead. i checked glance and the i/o is very little. they keep trotting out the official oracle tuning guide "same",

Did those DBAs actually read the SAME document?

S.A.M.E. stands for "Stripe And Mirror Everything".
That equates to RAID 0+1, aka RAID 10. Not RAID 5.
You then still have the choice of whether to mirror stripes or stripe mirrors, and how much of that to do in hardware (= controller software) versus software.

Case closed?

http://otn.oracle.com/deploy/availability/pdf/oow2000_same.pdf

You may also want to read up on chapter 15 in the Oracle Tuning doc: "I/O Configuration and Design"

Finally, with metalink access, read:
http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=30286.1



hth,
Hein.
Hein van den Heuvel
Honored Contributor

Re: sa vs. dba texas death match over raid 1/5

pasquale patti wrote...
>> woops i made a typo
>> ur correct they want to put redos on raid-1

And I missed that reply. That's a big typo :-)
This makes much more sense.

Raid-1 is great for Redo.
Your DBA is correct in suggesting that.

Hein.

Michael Tully
Honored Contributor

Re: sa vs. dba texas death match over raid 1/5

Well, I've had these arguments as well. I set up exactly what they wanted and, lo and behold, they were still not happy, complaining about guess what... redo logs. So I set up a SAME (stripe and mirror everything) copy of the database and guess what... no complaints, and the comment was "uh, what did you do???". Guess what will be in production? Why stripe and mirror everything? Well, you get all spindles working the same.

The other thing I did was change the 'scsictl' queue depth from the default of 8 to 16:

scsictl -m queue_depth=16 /dev/rdsk/cxtydz
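If you want to roll that change out to more than one disk, a dry-run loop along these lines keeps you honest. Only the `scsictl -m queue_depth=16` form is taken from the post; the device glob and the DRYRUN guard are illustrative assumptions, so verify each device before running for real.

```shell
#!/bin/sh
# Sketch: apply the queue_depth change above across disks.
# DRYRUN defaults to on, so the script only prints what it would do.
DRYRUN=${DRYRUN:-1}

set_qdepth() {
  # In dry-run mode, print the command instead of executing it.
  if [ "$DRYRUN" -eq 1 ]; then
    echo "scsictl -m queue_depth=16 $1"
  else
    scsictl -m queue_depth=16 "$1"
  fi
}

# Hypothetical HP-UX raw-disk glob; adjust for your device layout.
for d in /dev/rdsk/c*t*d*; do
  [ -c "$d" ] && set_qdepth "$d"
done
```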
Anyone for a Mutiny?