Help about disk array config on ORACLE DB

Manish Verma_2
Occasional Visitor

Help about disk array config on ORACLE DB

We were experiencing bad I/O performance on our Oracle data warehouse (roughly 1 TB), and an Oracle consultant advised us to restripe the logical volumes with a 256K stripe size in an 8-way stripe. The earlier striping was on an extent basis (LVs created using the -D option).

So my UNIX admin created new volume groups with a PE size of 4 MB. Some of the VGs were created on 8 LUNs and some on 10 LUNs (I don't know why he did this). After that he laid out all the logical volumes on these volume groups with a 256K stripe size and a stripe width of 8 (using the -i and -I options of lvcreate). Once he completed this, I copied all the data files from the old mount points to the new mounts with the new striping. I was expecting some performance gain after this whole exercise of moving a terabyte of data, but unfortunately the database performance is exactly the same. Could anyone help me figure out why this happened and what could have gone wrong?
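For reference, the layout described above would typically be built with commands along these lines on HP-UX. This is a hypothetical reconstruction, not the admin's actual commands; all device, VG, and LV names are made up:

```shell
# Hypothetical reconstruction (names are invented; adjust to your hardware).
# Initialize each LUN and build a VG with a 4 MB PE size over 8 LUNs:
pvcreate /dev/rdsk/c4t0d1          # repeat for each of the 8 LUNs
vgcreate -s 4 /dev/vgdw01 /dev/dsk/c4t0d1 /dev/dsk/c4t0d2 \
    /dev/dsk/c4t0d3 /dev/dsk/c4t0d4 /dev/dsk/c4t0d5 \
    /dev/dsk/c4t0d6 /dev/dsk/c4t0d7 /dev/dsk/c4t0d8

# 8-way stripe (-i 8) with a 256 KB stripe size (-I 256):
lvcreate -i 8 -I 256 -L 20480 -n lvol_data01 /dev/vgdw01
```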

I would appreciate it if someone could answer a few other questions:

1. We do lots of full table scans and hash joins during the DW load. Is it a good idea to turn the file system buffer cache off?
2. Our admin created the VGs with a PE size of 4 MB, but the LVs have a stripe size of 256K. Shouldn't the PE size and the stripe size be the same?
3. Is the stripe size the same as the logical extent size?
4. Is performance being affected because we implemented an 8-way stripe but are spreading it across 10 LUNs?
5. When I run GPM, I see lots of Oracle processes waiting with stop reason "IO/CACHE/SEM". What could this mean?
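On questions 2 and 3, a quick arithmetic sketch shows why the PE size and the stripe size do not need to match: the stripe size is the per-disk chunk written before moving to the next disk, while the PE is only the allocation granule. The numbers below are taken from the post itself:

```shell
# Numbers from the post; the relationships are generic LVM arithmetic.
stripe_kb=256          # -I 256: chunk written to one disk at a time
stripe_width=8         # -i 8: number of disks in the stripe set
pe_kb=$((4 * 1024))    # vgcreate -s 4: 4 MB physical extent

full_stripe_kb=$((stripe_kb * stripe_width))   # one pass across all 8 disks
echo "full stripe = ${full_stripe_kb} KB"      # 2048 KB = 2 MB

# The PE is just an allocation unit; here each 4 MB extent
# simply holds two full 2 MB stripes:
echo "stripes per PE = $((pe_kb / full_stripe_kb))"   # 2
```

So a 4 MB PE and a 256K stripe size are not in conflict; they operate at different layers.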
Jean-Luc Oudart
Honored Contributor

Re: Help about disk array config on ORACLE DB

I had some similar questions when we migrated to Oracle.
We have an 8-way stripe (64K) on 8 LUNs (why 10?).
There are a few threads on this question.

1. Cannot answer.
2. PE size has nothing to do with performance, as far as I know.
3. ?
4. I would stay with 8 LUNs for an 8-way stripe.
5. What is the memory on the server, and the buffer size?

What is the backend storage ?

fiat lux
Steven E. Protter
Exalted Contributor

Re: Help about disk array config on ORACLE DB

My assumption: a data warehouse has little or no write activity other than a regular update of the data.

If this data update is going slowly, it might be because of the RAID level used under the striping. Oracle recommends RAID 1 or 10 for OLTP (online transaction processing) systems. That's not what you have, but if there are frequent updates, i.e. writes, this may be the cause of the performance problem.
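As a rough illustration of why heavy writes hurt on parity RAID (this is generic RAID arithmetic, not specific to the poster's array; the IOPS figure is invented):

```shell
# Generic RAID write-penalty arithmetic (illustrative numbers only).
# RAID 5 small write: read old data + read old parity + write data + write parity.
raid5_ios_per_write=4
# RAID 1/10 write: one write to each mirror side.
raid10_ios_per_write=2

writes_per_sec=500   # hypothetical host-side write rate
echo "RAID 5 backend IOPS:  $((writes_per_sec * raid5_ios_per_write))"
echo "RAID 10 backend IOPS: $((writes_per_sec * raid10_ios_per_write))"
```

The same host write rate generates twice the backend I/O on RAID 5 as on RAID 1/10, which is why write-heavy loads are the first thing to check.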


1) Oracle has its own buffer cache, so the file system buffer cache is redundant and can leave data not written to disk under certain circumstances. It should not be a problem in a warehouse.

2) I asked this question months ago, and A. Clay Stephenson, who is quite experienced, told me the PE size has little or no impact on Oracle performance. If my pea-brain molecules are working right, he's actually run tests.
3) I don't think so. Not sure of the relevance of this item... ??
4) It depends. The more disks your data is spread across, the better performance is in a RAID environment. I'd look at write activity. If there is a lot, that will hammer performance in a RAID 5 environment.
5) Shared memory.

shmmax should be 25% of RAM, where RAM here is defined as memory plus swap. Settings above 25% will be ignored.
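Following that rule of thumb (the 25% figure is the poster's guideline, not something I can confirm from Oracle documentation), the arithmetic for a hypothetical box looks like:

```shell
# Hypothetical sizes; the 25% rule is the poster's guideline, not gospel.
mem_mb=8192        # physical memory
swap_mb=16384      # configured swap
total_mb=$((mem_mb + swap_mb))

shmmax_mb=$((total_mb / 4))    # 25% of memory + swap
echo "suggested shmmax = ${shmmax_mb} MB = $((shmmax_mb * 1024 * 1024)) bytes"
```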

shmseg needs to be high. So does maxuprc.

Note that gpm has a screen that shows shared memory usage. If you are getting close to 100% there, you need to increase resources.

If you have already maxed out shmmax, then more memory or more swap is the only way you can squeeze more out.

Steven E Protter
Owner of ISN Corporation
Honored Contributor

Re: Help about disk array config on ORACLE DB

First - others are correct - PE size has no bearing; it's just the smallest chunk size that you can allocate space from/with. Unless, of course, you were using distributed stripes; then the PE size has everything to do with the size of the distributed stripe. I don't think you'd notice unless you had little PEs, which you wouldn't want to use in a 1 TB database anyway, and if anything, I think small sizes would hurt in a DW setting.
Question: how big is your block size? Most papers I've seen on this would recommend a 32K block size for a DW (OLTP situations have a recommendation of 8K). This means that during your full table scans you'd get more data (a lot more) per fetch. Keep in mind that to change the block size you'd have to do a full export/import, or at the very least create new tablespaces with the new block size.

Also, what is your db_file_multiblock_read_count? You should try to tie that in so that block_size x db_file_multiblock_read_count equals the largest data pull you can get from your storage server. Now, that's a funny question that's hard to get an answer to...
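The arithmetic behind that tie-in looks like this (the parameter values are examples, not a recommendation): the I/O size Oracle issues on a full scan is db_block_size x db_file_multiblock_read_count, and ideally that lines up with the stripe geometry:

```shell
# Example values only - check your own init.ora and LV layout.
db_block_kb=8
multiblock_read_count=32
io_kb=$((db_block_kb * multiblock_read_count))
echo "full-scan read size = ${io_kb} KB"       # 256 KB per read

# Compare with the stripe geometry from the original post:
echo "one stripe chunk    = 256 KB (per disk)"
echo "full stripe         = $((256 * 8)) KB across 8 disks"
```

With these example numbers, each 256 KB multiblock read maps neatly onto one 256 KB stripe chunk.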

Anyway, I've seen research by the folks at the HP tuning lab showing that you can avoid an unnecessary read if db_file_multiblock_read_count is at least 32 (regardless of block size). This, coupled with mount point options that make the system NOT cache the file system in OS buffers (the nodatainlog,convosync=direct options on each mount point in the fstab), generates a potentially huge benefit. According to the data I've seen from the HP performance labs (to answer question 1), you'd benefit from this if your db_file_multiblock_read_count is over 32. Below this, Oracle does a read from the OS that clearly benefits from at least a small buffer for the mount point (reference: S. McLester, HP Tuning Labs).
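A sketch of what such an /etc/fstab entry might look like for a VxFS mount. The device and mount point names are hypothetical, and note that mincache=direct (commonly paired with convosync=direct to bypass OS read caching) is not mentioned in the post above - verify all options against your OnlineJFS version before using them:

```shell
# Hypothetical /etc/fstab entry (names invented; options per the discussion,
# plus mincache=direct, which is an assumption on my part):
/dev/vgdw01/lvol_data01 /oradata/dw01 vxfs \
    delaylog,nodatainlog,mincache=direct,convosync=direct 0 2
```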

Anyway, I've seen for myself that in databases that are "full table scan" prone - like a data warehouse, as in your example, or just an ad-hoc query database - increasing the amount of data grabbed per fetch can definitely increase how much data you get at a time to fulfill full table scans, therefore making things go much faster. However, full table scans on large tables tend to be "what they are", and it's pretty hard to work magic on them. You can get percentage increases, but it's pretty hard to get increases of an order of magnitude without tuning the query or rewriting it or some of its logic.

A few notes:
Have you tried tuning the big full table scan queries? Indexes created for them? Materialized views for the common ones? I know that often in a DW this just isn't feasible, but a check for feasibility should be undertaken...

Also, review how much RAM you've got in your db_cache_buffers - you may benefit from having MUCH more than you do. I've heard of people using a Gig of size for DW applications.
We are the people our parents warned us about --Jimmy Buffett
Honored Contributor

Re: Help about disk array config on ORACLE DB

I don't know why, but part of that last sentence didn't make it - the word I saw and put on screen was "30Gig", not just "Gig" - anyway, 30 gig...
We are the people our parents warned us about --Jimmy Buffett
Honored Contributor

Re: Help about disk array config on ORACLE DB


You may also run a Statspack report to determine the areas where your Oracle database is "suffering".
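For reference, generating a Statspack report typically looks something like this (assuming Statspack is already installed under the perfstat schema; ?/rdbms/admin resolves to $ORACLE_HOME/rdbms/admin). The password is a placeholder:

```shell
# Take two snapshots around the workload, then report on the interval.
sqlplus perfstat/password <<'EOF'
EXECUTE statspack.snap;
-- ... let the workload run, then take a second snapshot ...
EXECUTE statspack.snap;
@?/rdbms/admin/spreport.sql
EOF
```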

hope this helps too!

No person was ever honoured for what he received. Honour has been the reward for what he gave. (Calvin Coolidge)