
Re: sar -b

 
derek b smith_1
Regular Advisor

sar -b

All,

I know that %wcache is supposed to never drop below 90%, but where should %rcache be? What does it mean / what is happening when %wcache falls below 90%?

My buffer cache in glance is currently 409 MB, and dbc_min_pct and dbc_max_pct are set to 2 and 5.

thanks
derek
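For reference, the sar -b percentages are hit ratios computed from the logical versus block (physical) transfer columns. A minimal sketch that recomputes them from one captured sar -b line; the sample numbers below are made up:

```shell
# Recompute sar -b hit ratios from the raw columns.
# HP-UX sar -b columns after the timestamp:
#   bread/s lread/s %rcache bwrit/s lwrit/s %wcache pread/s pwrit/s
# %rcache = (lread - bread) / lread * 100   (reads satisfied from buffer cache)
# %wcache = (lwrit - bwrit) / lwrit * 100   (writes absorbed by buffer cache)
# The sample line below uses made-up numbers, as if from: sar -b 5 3
echo "08:00:05 12 240 95 70 100 30 0 0" |
awk '{ bread = $2; lread = $3; bwrit = $5; lwrit = $6
       printf "%%rcache=%.0f %%wcache=%.0f\n",
              (lread - bread) / lread * 100,
              (lwrit - bwrit) / lwrit * 100 }'
# → %rcache=95 %wcache=30
```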
Todd McDaniel_1
Honored Contributor
Solution

Re: sar -b

My %wcache varies from 93 to 0%....

My %rcache never drops very much below 95% for the most part...


What is the problem you are concerned about?
Unix, the other white meat.
derek b smith_1
Regular Advisor

Re: sar -b

My problem is that %wcache consistently averages below 30%,

and I thought the buffer cache should hover around 2 GB.
Alzhy
Honored Contributor

Re: sar -b

Derek,

Your %rcache should be consistently near 100% on an optimally tuned, well-performing system. If it drops below 90%, that means your applications are not finding data in the cache and have started going straight to the physical volumes (LVOLs). In such situations you need to allocate more cache memory (raise dbc_max_pct).

For file servers, a bigger dbc_max_pct is better.
Hakuna Matata.
Michael Tully
Honored Contributor

Re: sar -b

I disagree with a larger buffer cache. The optimum is somewhere in the range of 300-500 MB, no more.
What is yours set as?

# kmtune -l -q dbc_max_pct
Parameter:  dbc_max_pct
Current:    3
Planned:    3
Default:    50
Minimum:    -
Module:     -
Version:    -
Dynamic:    No
Anyone for a Mutiny ?
Alzhy
Honored Contributor

Re: sar -b

Salud, Mike,

If this environment is a DB environment, then yes, the filesystem buffer cache should be kept minimal...

But if the environment is a file server (i.e. running Samba or simply NFS), then larger buffer caches are better. I worked in the oil/gas industry, where we used HP boxen solely as Samba/NFS servers, and we got better performance with large filesystem buffer caches. How large? We went all the way up to the maximum allowed.

Hakuna Matata.
Con O'Kelly
Honored Contributor

Re: sar -b

Hi

I'm not sure why you think %wcache should not drop below 90%. It very much depends on what applications are running on your system. For example, if you are running databases and you have a lot of random I/O, then your %wcache will definitely fall below 90%.

This in itself is not an indication of any performance issues.

For my money your current buffer cache of 400MB is fine, though 11i can utilise buffer caches of 800MB. It appears you have a dynamic buffer cache configured. If you have 8GB-10GB of memory then your settings for dynamic buffer cache are fine.

Cheers
Con
derek b smith_1
Regular Advisor

Re: sar -b

Correct, I should have clarified the application. OK gentlemen, allow me to clarify some more things. This system is an rp8400 running 11i with a back-end Oracle 8 database and a front-end SQL Windows server. Settings are:
dbc_max_pct=5 and
dbc_min_pct=2
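For what it's worth, the dynamic buffer cache floats between dbc_min_pct and dbc_max_pct of physical RAM. A quick sketch of the arithmetic, assuming 8 GB of RAM (an assumption, but with these settings it would make the 409 MB figure seen in glance exactly the 5% ceiling):

```shell
# Dynamic buffer cache bounds are percentages of physical memory.
# Assumed: 8 GB of RAM (8192 MB); dbc_min_pct=2 and dbc_max_pct=5 as posted.
ram_mb=8192
dbc_min_pct=2
dbc_max_pct=5
echo "min: $((ram_mb * dbc_min_pct / 100)) MB"   # → min: 163 MB
echo "max: $((ram_mb * dbc_max_pct / 100)) MB"   # → max: 409 MB
```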

From an HP-UX perspective, what are the parameters to focus on when tuning this type of DB system?

thank you. derek
Michael Tully
Honored Contributor

Re: sar -b

Derek,

Some further information on your setup would help.
How about:
Current kernel parameters
Patch level
How your logical volumes are set up, and on what type of disk array

I have the same system, and we were having problems with disk I/O under Oracle. We ended up creating a plan to LVM-stripe across a nominal number of LUNs spread across a nominal number of physical disks. (There is an old saying, SAME: stripe and mirror everything.) We have an EMC 8530, so we have the mirroring but not the striping. The other items to look at are queue depth (see the man page for scsictl) and how your LUNs are currently set up. (We came to all of these conclusions on our test systems, which is why we are planning this change in production in two weeks' time.)

HTH
Michael
Anyone for a Mutiny ?
Bill Hassell
Honored Contributor

Re: sar -b

Read cache can hover near 90-95 and your read performance will be quite good. The write cache is almost impossible to get high percentages (much over 50%) unless you have a highly constricted environment. Such an environment would be where your application would write the same records over and over. Pretty unusual. You'll get more than 50% by writing sequential records. Also a bit unusual. Both situations allow the cache to make a lot of records available in memory for updating.

But typical usage is highly random, and that makes the buffer cache almost useless for buffering writes. So just ignore %wcache and look at %rcache: 90% or higher is excellent. Note that doubling your buffer cache when %rcache is already above 90 will have very little effect. The relationship between cache size and %rcache is asymptotic; after a certain point, adding more memory is less and less effective.
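Following that advice, the thing worth watching is %rcache over time rather than any single sample. A small sketch that averages the %rcache column from a captured sar -b report and flags it only when the average is low; the sample lines are made up, and the 90% threshold is just the rule of thumb from this thread:

```shell
# Average the %rcache column (field 4, after the timestamp) of sar -b output
# and flag it only when the average falls below the 90% rule of thumb.
printf '%s\n' \
  "08:00:05 10 200 95 50 80 38 0 0" \
  "08:00:10 20 250 92 60 90 33 0 0" \
  "08:00:15 30 240 88 55 85 35 0 0" |
awk '{ sum += $4; n++ }
     END { avg = sum / n
           printf "avg %%rcache = %.1f%s\n", avg,
                  (avg < 90 ? "  (consider raising dbc_max_pct)" : "") }'
# → avg %rcache = 91.7
```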

Oracle has its own buffering scheme, so anything more than 400-500 MB of buffer cache means wasted time in double buffering.


Bill Hassell, sysadmin