Ervin Liu
Occasional Contributor

Online JFS mount option for Oracle data file system

Hello,
We have a system running Oracle. The workload is dominated by a huge volume of concurrent reads rather than writes, and I don't know whether the mount options "mincache=direct,convosync=direct" would still help Oracle performance in this case.
Currently we see a very low read buffer cache hit rate in "sar -b" (we have 12 GB of memory and dbc_max_pct is 15%), which results in many physical read I/Os. We cannot switch to raw devices or re-lay-out the file systems at this point because the system is already in production.
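
For reference, this is roughly how we take the reading (the interval and count are arbitrary); %rcache and %wcache are the read and write cache hit rates:

   sar -b 5 12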

Another concern: if we bypass the system buffer cache by mounting the file systems this way, are there any negative impacts such as increased CPU or memory usage?

Our system specification:
12 GB memory
6 CPUs
XP512 array
N-class server

I would very much appreciate your kind help.

liu
A. Clay Stephenson
Acclaimed Contributor

Re: Online JFS mount option for Oracle data file system

The answer to your question is 'it depends'. I can tell you that having dbc_max_pct set to 15% is almost certainly too large a cache. I would use bufpages to set your buffer cache at around 400 MB for 11.0 and perhaps 800-1000 MB for 11.11. Typically, I see better performance on 11.0 if datafiles and indices bypass the buffer cache using mincache=direct,convosync=direct, with conventional mount options for the archive and redo logs. I suggest that you use nodatainlog for all mounts. You will see no measurable advantage in raw i/o over the advanced vxfs mount options that bypass the buffer cache - I wouldn't even consider the raw i/o option.
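
Purely as an illustration (the volume names, mount points and sizes below are made up, and a bufpages change needs a kernel rebuild and reboot), the idea looks something like this:

   # /etc/fstab - datafiles and indices bypass the buffer cache (OnlineJFS)
   /dev/vg01/lv_oradata  /u02/oradata  vxfs  delaylog,nodatainlog,mincache=direct,convosync=direct  0  2
   # /etc/fstab - archive and redo logs use conventional buffered i/o
   /dev/vg01/lv_oraredo  /u03/oraredo  vxfs  delaylog,nodatainlog  0  2

   # fixed ~400 MB buffer cache on 11.0 (bufpages counts 4 KB pages: 102400 x 4 KB = 400 MB)
   kmtune -s bufpages=102400

With OnlineJFS you can usually change the mincache/convosync options on a mounted file system via a remount rather than taking it offline.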

Under 11.11, I find that using the buffer cache for everything gives better performance.

The one other thing that you might find helpful is to adjust the 'disksort_seconds' tunable. You may need to install a patch before this parameter is available; search for disksort_seconds to find that patch. It adjusts the fairness algorithm for sequential vs. random disk i/o.
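
Once that patch is on, and assuming it exposes disksort_seconds as an ordinary kernel tunable, it can be checked and changed like any other parameter; the value below is only a placeholder, so take the recommended setting from the patch documentation:

   kmtune -q disksort_seconds       # show the current value
   kmtune -s disksort_seconds=2     # placeholder value - needs a kernel rebuild and reboot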

Having said all this, I suspect that by far the best results are going to be found by tweaking the SQL. Often, a single index can make a tremendous difference.

If it ain't broke, I can fix that.