SOLVED
lastgreatone
Regular Advisor

dbc_max_pct

We have relatively small databases on an L1000 (HP-UX 11.0, 64-bit), and the database block size is 2K. The system default is 8K. I reduced dbc_max_pct from the default 50% to 10%. I believe this should relieve block I/O contention; am I right?
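To put the 50% -> 10% change in concrete terms, here is a minimal sketch of what those ceilings mean in megabytes, assuming a hypothetical 1 GB of physical memory (substitute your real figure for ram_mb):

```shell
# Sketch: the buffer-cache ceiling implied by dbc_max_pct, for a
# hypothetical 1 GB L1000 (ram_mb is an assumed value).
ram_mb=1024
echo "max buffer cache at 50%: $(( ram_mb * 50 / 100 )) MB"
echo "max buffer cache at 10%: $(( ram_mb * 10 / 100 )) MB"
# The tunable itself would be changed via SAM or kmtune, e.g.:
#   kmtune -s dbc_max_pct=10   (then rebuild the kernel and reboot)
```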
6 REPLIES
Steven Sim Kok Leong
Honored Contributor
Solution

Re: dbc_max_pct

Hi,

As Oracle has its own buffer cache (the database buffer cache), it is really unnecessary to have a large OS buffer cache. As such, dbc_max_pct should be set to a minimal value to prevent double buffering.

Hope this helps. Regards.

Steven Sim Kok Leong
Jeff Machols
Esteemed Contributor

Re: dbc_max_pct

You are right. Unless you are running DB2, all other databases have their own I/O cache, so the value should be reduced.
A. Clay Stephenson
Acclaimed Contributor

Re: dbc_max_pct

Hi Frankie:

Your assumption is correct. You have another option: fix the buffer cache at a set size by setting bufpages to a non-zero value; e.g. bufpages=80000 will set it to roughly 320MB. I find that even on very large servers, the marginal improvement in buffer-cache hit rate becomes very small above 300-400 MB.

If you are using raw I/O or the OnlineJFS mount options convosync=direct,mincache=direct, which bypass the UNIX buffers, you can reduce the buffer cache even lower, especially on a machine that is a pure database server. I really prefer a fixed buffer cache because it allows one to tune other parameters while keeping the buffer cache constant.
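A quick sketch of the arithmetic behind that bufpages example: bufpages counts 4 KB pages, so 80000 pages works out to roughly the quoted 320 MB (312 MiB strictly):

```shell
# Sketch: converting bufpages (4 KB pages) to a buffer-cache size.
pages=80000
kb=$(( pages * 4 ))       # total KB of fixed buffer cache
mib=$(( kb / 1024 ))      # ~312 MiB, loosely quoted as "320MB"
echo "bufpages=${pages} -> ${kb} KB (~${mib} MiB)"
# On HP-UX 11 the tunable would be set with e.g.:
#   kmtune -s bufpages=80000   (kernel rebuild and reboot required)
```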
If it ain't broke, I can fix that.
Helen French
Honored Contributor

Re: dbc_max_pct

Sridhar Bhaskarla
Honored Contributor

Re: dbc_max_pct

Hi Frankie,

It depends on the physical memory you have. It is not recommended to set it to any value above about 300MB, so adjust dbc_max_pct to land around 300MB. The other dependency is your application: a very low buffer cache will degrade sequential reads, and if your application does a lot of them, you will lose performance.
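The "adjust dbc_max_pct to get around 300MB" advice can be sketched as a one-line calculation; ram_mb=2048 below is a hypothetical figure, so substitute your server's physical memory:

```shell
# Sketch: pick dbc_max_pct so the dynamic buffer cache tops out near 300 MB.
ram_mb=2048        # assumed physical memory; adjust for your box
target_mb=300
pct=$(( (target_mb * 100 + ram_mb - 1) / ram_mb ))   # round up to a whole percent
echo "dbc_max_pct=${pct}"
```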

-Sri
You may be disappointed if you fail, but you are doomed if you don't try
Dennis J Robinson
Frequent Advisor

Re: dbc_max_pct

In the past, dbc_max_pct was said not to scale beyond 800M; now I am hearing 300M. I will have a lot of performance tuning work to do on these sites.

My advice is to try it on your own server. A good rule of thumb on a database server is 5-10 for dbc_max_pct. Optimal performance comes when you bypass dynamic buffer-cache sizing altogether (set nbuf to a fixed value).

The mincache=direct,convosync=direct mount parameters allow direct I/O, which means the UNIX buffer cache is totally bypassed on reads and writes. This is great for O_SYNC writes and non-sequential reads.

However, sequential read performance is hurt by more than 2x on these filesystems, and filesystem-based I/O (such as backups and file copies) suffers horrendously. I haven't seen many databases which do not table-scan.

My final suggestion is to use mincache=direct,convosync=direct only on those filesystems which have a mostly random I/O pattern (no table scans) or heavy writes (to bypass the double-buffered write). The gain is approximately 10% in these cases.
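As a concrete illustration of that final suggestion, an /etc/fstab entry for a VxFS filesystem holding randomly-accessed datafiles might look like this (the device and mount-point names are hypothetical):

```
# random-I/O Oracle datafiles: bypass the UNIX buffer cache entirely
/dev/vg01/lvol3  /oradata  vxfs  delaylog,mincache=direct,convosync=direct  0  2
```

Leave sequentially-scanned and general-purpose filesystems on the normal buffered options.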

My suggested size is larger than the 300-800M range for servers that are more than simply database servers. Too small a buffer cache is a bad situation, because you run into thrashing of the buffer cache; this occurs when many competing processes attempt to allocate space in the cache at once.

On NFS servers, ClearCase servers, and servers where large files are moved, use as much buffer cache as the box can stand; performance improves with it.

You know the drill