Operating System - HP-UX

Re: File Striping for Oracle Database

 
David Bellamy
Respected Contributor

Re: File Striping for Oracle Database

Logan: as your output shows, you are not doing filesystem striping.
A. Clay Stephenson
Acclaimed Contributor

Re: File Striping for Oracle Database

You are running RAID 1 (mirrored, without stripes) and your array is a JBOD ("Just a Bunch Of Disks"). None of your LVOLs is striped. LVM does not allow you to combine striping with mirroring. You could stripe each LVOL across your disks for RAID 0 -- which would perform better but have no protection against disk failure -- and you would be extremely vulnerable, because the failure of a single disk could cause the loss of a lot of data. Currently you are erring on the side of data redundancy rather than performance, and that is a wise choice.
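To make the trade-off concrete, here is a minimal sketch of the two classic forms under HP-UX LVM (the volume group, sizes, and LVOL names are hypothetical):

# RAID 0: stripe a 4 GB LVOL across 4 disks with a 64 KB stripe size
# (-i = number of stripes, -I = stripe size in KB); LVM will refuse to mirror this
lvcreate -i 4 -I 64 -L 4096 -n lvol_data /dev/vg01

# RAID 1: create the LVOL unstriped, then add one mirror copy on another disk
lvcreate -L 4096 -n lvol_data /dev/vg01
lvextend -m 1 /dev/vg01/lvol_data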

There is a third option, and that is extent-based striping. Here the striping occurs at the extent level, and mirroring is possible. The problem with this scheme is that the smallest possible extent (1 MiB) is too large to be a good stripe size for I/O distribution (ideally ~64 KiB-256 KiB), and moreover the smallest possible PE severely limits the maximum size of your disks/LUNs.
In your case, the least-evil approach is to carefully distribute the I/O over as many separate mountpoints as possible: rather than the "throw everything at the array as fast as you can" model, yours is the "I have many disks, so spread as much as I can across them" model (but make sure all your LVOLs are mirrored on separate disks).
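If you do try the extent-based route, a hedged sketch of what it looks like, assuming the distributed-allocation options of HP-UX lvcreate and physical volume groups already defined in /etc/lvmpvg (names are hypothetical):

# -D y = distributed (round-robin) extent allocation; -s g = PVG-strict,
# which keeps each mirror copy on a separate PV group
lvcreate -D y -s g -L 4096 -n lvol_data /dev/vg01
lvextend -m 1 /dev/vg01/lvol_data   # mirroring still works at the extent level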
If it ain't broke, I can fix that.

Re: File Striping for Oracle Database

Thanks Clay. I got my answer.

Re: File Striping for Oracle Database

David, thanks much for your response.
TwoProc
Honored Contributor

Re: File Striping for Oracle Database

re: extent based striping...

I'd go ahead and use it. You can then add to this idea by striping your Oracle data files themselves: simply create your tablespaces across the file systems, and you get a pretty good approximation of a nicely striped JBOD storage solution.

For instance, your biggest, busiest tablespace could be "INVDATA", and it could be comprised of one or two very large data files: '/u5/data/PROD/invdata1.dbf' and '/u5/data/PROD/invdata2.dbf'. Let's say each of these is roughly 5 GB or so.

Instead you could set this up like:
'/u5/data/PROD/invdata1.dbf',
'/u6/data/PROD/invdata2.dbf',
'/u7/data/PROD/invdata3.dbf',
'/u8/data/PROD/invdata4.dbf',
'/u9/data/PROD/invdata5.dbf',
'/u10/data/PROD/invdata6.dbf',
'/u5/data/PROD/invdata7.dbf',
'/u6/data/PROD/invdata8.dbf',
'/u7/data/PROD/invdata9.dbf',
'/u8/data/PROD/invdata10.dbf',
'/u9/data/PROD/invdata11.dbf',
'/u10/data/PROD/invdata12.dbf',
...
repeat as necessary, putting 100 MB or so in each data file.
...

Make sure these files have room to grow by leaving some headroom in each. Create the files with 32K uniform extent allocation, or try 16K or 64K if you like.
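A sketch of the corresponding DDL, run from sqlplus; the paths and the 32K uniform size are the illustrative values from above, and the AUTOEXTEND/MAXSIZE figures are hypothetical headroom, not a recommendation:

sqlplus -s "/ as sysdba" <<'EOF'
CREATE TABLESPACE invdata
  DATAFILE '/u5/data/PROD/invdata1.dbf'  SIZE 100M AUTOEXTEND ON MAXSIZE 500M,
           '/u6/data/PROD/invdata2.dbf'  SIZE 100M AUTOEXTEND ON MAXSIZE 500M,
           '/u7/data/PROD/invdata3.dbf'  SIZE 100M AUTOEXTEND ON MAXSIZE 500M,
           '/u8/data/PROD/invdata4.dbf'  SIZE 100M AUTOEXTEND ON MAXSIZE 500M,
           '/u9/data/PROD/invdata5.dbf'  SIZE 100M AUTOEXTEND ON MAXSIZE 500M,
           '/u10/data/PROD/invdata6.dbf' SIZE 100M AUTOEXTEND ON MAXSIZE 500M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 32K;
EOF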

Now, as data begins to populate the tables, the I/O is spread across your mirrored disks and gives better performance, provided you've got enough controllers and separate disk spindles to support such an operation. The data won't "arrive" in one data file, filling it fully before moving on to the next one; it will actually be created *across* the data files *relatively* equally (depending), and you'll begin to see performance that looks more like that of a striped JBOD system, even though HP-UX really can't do this in software.

If you want to put this in place incrementally, you could build some tablespaces in the manner described above and then export/import your data into them - or simply issue an "alter table move" command and move the tables to the new tablespace one at a time (if there are not too many tables or objects).
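The "alter table move" variant could look like the following (owner, table, and index names are placeholders; note that a move marks the table's indexes UNUSABLE, so rebuild them afterwards):

sqlplus -s "/ as sysdba" <<'EOF'
ALTER TABLE inv.line_items MOVE TABLESPACE invdata;
-- the move invalidates the table's indexes; rebuild each one
ALTER INDEX inv.line_items_pk REBUILD;
EOF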

Back to the extent-based striping idea: once you stripe by extents and then add the idea of spreading your data across multiple mount points (mirrored drives, in your case), you'll start to see improved disk balancing and I/O access times. Whether or not it is "enough" depends on whether your users are satisfied with the performance of their applications (talk about stating the obvious :-) ), but it should be better than what you started with.

BTW, I always stripe all of my high-growth or heavily accessed tablespaces this way (I don't bother with the little ones), even on the big XP-class storage servers, so that when the database is "cloned down" to a smaller test server, it works as well as it possibly can, given the hardware limitations.

Hope this helps
We are the people our parents warned us about --Jimmy Buffett
Yogeeraj_1
Honored Contributor

Re: File Striping for Oracle Database

Hi again Logan,

Below are some of the guidelines that have been recommended by the Oracle guru Tom Kyte:

o No RAID, RAID 0, or RAID 0+1 for online redo logs AND control files. In Oracle, you should still multiplex them even if you mirror them (a sketch follows the lists below).

o No RAID or RAID 0 for temporary datafiles (used with temporary tablespaces).

o No RAID, RAID 0 or RAID 0+1 for archive logs. Again, in Oracle, you should still multiplex them.

o RAID 0+1 for rollback. It gets written to a lot, and we cannot multiplex it at the Oracle level. We use this for datafiles that we believe will be HEAVILY written. Bear in mind that Oracle buffers writes to datafiles; they happen in the background, so the poor write performance of RAID 5 is usually OK, except for heavily written files (such as rollback).

o RAID 5 (unless you can do RAID 0+1 for everything, of course) for datafiles that experience what you determine to be "medium" or "moderate" write activity. Since writes typically happen in the background (not with direct path loads and such), RAID 5 can usually be used safely for these. As these files represent the BULK of your database and the ones above represent the smaller part, you achieve most of the cost savings without impacting performance too much.


Also, we should try to dedicate specific devices to:
o online redo
o archive
o temp

These should not share their devices with the others in a "perfect" world (nor even with each other).
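As a sketch of the multiplexing advice above (the group number, size, and mount points are hypothetical):

sqlplus -s "/ as sysdba" <<'EOF'
-- two members per redo log group, on separate dedicated mount points
ALTER DATABASE ADD LOGFILE GROUP 4
  ('/u5/oradata/PROD/redo04a.log',
   '/u6/oradata/PROD/redo04b.log') SIZE 100M;
EOF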

hope this helps too!

kind regards
yogeeraj
No person was ever honoured for what he received. Honour has been the reward for what he gave (Calvin Coolidge)
Eric Antunes
Honored Contributor

Re: File Striping for Oracle Database

Hi Logan,

If you have root access, you can check for yourself:

# cd /opt/raidsa/bin       # SmartArray (ciss) utility directory
# ioscan -fnkC ext_bus     # list the ext_bus interface class
# ./sautil /dev/ciss       # show the controller's configuration

A good performance strategy is to keep redo log files off RAID 5. Furthermore, multiplexing the control files and redo log members is always a good idea and, of course, so is a good buffer cache hit ratio.
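On that last point, one classic (if rough) way to read the buffer cache hit ratio from v$sysstat:

sqlplus -s "/ as sysdba" <<'EOF'
-- hit ratio = 1 - physical reads / (db block gets + consistent gets)
SELECT ROUND(1 - phy.value / (cur.value + con.value), 4) AS hit_ratio
  FROM v$sysstat cur, v$sysstat con, v$sysstat phy
 WHERE cur.name = 'db block gets'
   AND con.name = 'consistent gets'
   AND phy.name = 'physical reads';
EOF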

Best Regards,

Eric Antunes
Each and every day is a good day to learn.

Re: File Striping for Oracle Database

John/ Yogeeraj/ Eric, Thanks a bunch.
Alzhy
Honored Contributor

Re: File Striping for Oracle Database

Greetings!

Just a word of caution regarding the previous poster, who said (no offense, SEP!):

"It is much better to handle this on a disk array and merely present a well configured LUN to the HP system.

The disk array is designed for this, the system is not as good at making the i/o most efficient.

Striping such that it is with the OS is not very effective and not even considered true striping."

As an admin who has had to educate DBAs through the years, I'd say the above should be treated with extreme caution. Why?

Not ALL arrays (especially the cache-centric ones) manage striping or performance within whatever virtualization/RAID scheme they offer. This is TRUE for most high-end arrays, like Hitachi's (USP/TagmaStore and HP's XP line) as well as EMC's, etc. BEST practice for these arrays remains to use your host/OS volume manager to stripe your storage volumes/filesystems. In fact, Oracle ASM and ODM even have specific ASLs or APIs so that these automated storage allocation schemes "know" the internal layout of the array's innards and can lay out your volumes on the OS/host optimally for performance.

In the case of highly virtualized arrays like the EVA line, the above "possibly" holds true, BUT we are actually still getting performance gains by striping EVA LUNs at the host level.

JBODs - or ordinary disks in array enclosures - are of course a different matter. YOU MUST use your host/OS volume manager to RAID them, both for protection/redundancy and for performance.

Hakuna Matata.