Operating System - HP-UX

Re: OS buffer cache vs. DB shmem cache

 
SOLVED
Ed Loehr_1
Occasional Advisor

OS buffer cache vs. DB shmem cache

We are empirically evaluating various settings of dbc_max_pct on our HP DB servers. The boxes have 64GB RAM and quad Itaniums. In the past, based on advice from these forums about contention issues with vhand, we set dbc_max_pct to no more than 3% (~2GB) and set our DB shmem caches as large as 30GB. Now, in pursuit of better performance, we are raising dbc_max_pct to 20-40% of RAM.
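For reference, a quick sketch of what those percentages translate to in absolute terms on a 64GB box (this is just arithmetic, not a measurement of what the cache actually uses):

```shell
# Convert candidate dbc_max_pct values into absolute cache ceilings
# for a 64GB machine (illustrative arithmetic only)
ram_mb=$((64 * 1024))
for pct in 3 20 40; do
    echo "dbc_max_pct=${pct} -> ceiling of $(( ram_mb * pct / 100 )) MB"
done
# 3% -> 1966 MB (~1.9GB); 40% -> 26214 MB (~25.6GB)
```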

What would you anticipate?
Which glance metrics would tell if you are correct?

TIA.

Ed
6 REPLIES
Bill Hassell
Honored Contributor
Solution

Re: OS buffer cache vs. DB shmem cache

The OS version and processor count and speed are critically important to the answer. For 11.00, performance will drop as you move past 2 GB, dramatically so for slow processors or systems with just 2-3 CPUs. Above 4 GB, your system overhead (with a busy database) will probably exceed 50% and performance will be awful. At 11.11 the numbers get better, but will probably start sinking after 4 GB. At 11.23, things change dramatically, and a DBC in the 6-10 GB range will still be useful. 11.31 has a major rewrite of the buffer cache code which significantly improves performance at almost any DBC size. At 11.31, you may find better performance with the DBC than with a large SHMEM value.

One nice feature for 11.23 and 11.31: dbc_max_pct is dynamic and can be changed at any time while the system is running.
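As a sketch of that online change on 11.23 (kctune is the 11i v2+ tunable interface; verify names and syntax against kctune(1M) on your own system):

```shell
# Show the current value and attributes of the tunable
kctune -v dbc_max_pct

# Lower the buffer-cache ceiling to 10% of RAM, no reboot required
kctune dbc_max_pct=10
```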

Glance will only give relationships between user and system overhead. You can look at the disk I/O rates for some useful numbers. But don't even think about testing this without a very well-defined benchmark. The test must be repeatable since the first run will always have a lot of disk I/O.


Bill Hassell, sysadmin
SANTOSH S. MHASKAR
Trusted Contributor

Re: OS buffer cache vs. DB shmem cache

Hi,

If you set dbc_max_pct to 20-40%, that much memory would not be available to the DB (I assume it to be Oracle). You have to set it very small if you are using the DB cache, hence your initial setting of dbc_max_pct=3% is OK.
Rasheed Tamton
Honored Contributor

Re: OS buffer cache vs. DB shmem cache

The old-school saying (HP-UX 10.20 era) was to use the database buffer cache rather than the OS buffer cache.

Also, to use the raw/async option rather than filesystems.

But with the latest HP-UX versions, and also with the new VxVM, I am not sure how things perform now or how they have been benchmarked.

I would also like to know the expert opinions vis-a-vis the latest OS versions.

Regards,
Rasheed Tamton.
Hein van den Heuvel
Honored Contributor

Re: OS buffer cache vs. DB shmem cache

Already with V2 you may want to allow the buffer cache to grow bigger than you may have been used to, and even more so under V3.

Just FWIW... under HP-UX 11i Version 3, dbc_max_pct, dbc_min_pct, bufcache_max_pct, bufpages, and nbuf are all OBSOLETE kernel tunable parameters.
Use the file cache tunables filecache_max and filecache_min instead.
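A minimal sketch of the 11.31 equivalents (the byte values here are purely illustrative; check kctune(1M) and the paper below for real sizing guidance):

```shell
# 11.31: size the file cache directly instead of using dbc_*_pct
# 1GB floor, 4GB ceiling (values in bytes, chosen for illustration)
kctune filecache_min=1073741824
kctune filecache_max=4294967296
```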

Please consider carefully reading pages 7 thru 16 in: "Common Misconfigured HP-UX Resources"
http://docs.hp.com/en/5992-0732/5992-0732.pdf

Of course it all depends... on the system usage. If the system is 100% dedicated to a database, with the only file I/O being some image activations, shell scripts and log files, then a large buffer cache will serve no purpose... but with the new HP-UX versions you can also trust it not to grow too big.

Now if the system also does some moonlighting as NFS server and (flat) file massaging or program development, then a large filecache is likely to help it a bunch.

Ed wrote: 64GB ram, dbc_max_pct 3% (~2GB), DB shmem caches 30GB.

So the other 30GB-plus is available for kernel memory and user process memory (resident set). Those might 'only' take a few GB, leaving 10+ GB completely free. Might as well give it to the file cache and see whether it helps. Nothing else will be clamouring for it!

Santosh wrote : "If u set dbc_max_pct to 20-40% this much of memory would not be available to DB."

I beg to differ. The amount specified by MIN will not be available, but the MAX amount is just that: a max. The system is supposed to use it only when there is no other pressure for that memory. Admittedly it would serve no purpose to tell the filecache that it might be allowed to use up to x% when only y% will ever be available, but at the same time that _should_ not hurt, and it hurts less and less, notably with v3.

It would be nice to see someone (hp) documenting a benchmark with filecache settings in the 1 GB - 16 GB zone, but I did not readily find one. Anyone?

If I were running the box in question, I would certainly try dbc_max_pct set to as much as 10% (6.4GB) for a period.

For very stable/predictable systems (like TPC or SAP benchmarks :-) I would lean towards making min=max, to stop the system from even considering whether it should grab more or less.

I'll close with a reference to the full disclosure for the recent super superdome 4M TPS TPC result.

This document is highly recommended reading for anyone with hpux/oracle/performance interest (which is more than 1/2 of the folks reading this topic! :-).

http://www.tpc.org/results/FDR/TPCC/tpcc.hp.SD.fdr.022707.pdf

3 cute details:

- 2 terabytes of physical memory
- a 1+ terabyte SGA: db_keep_cache_size = 1220G
- filecache_min=640MB, filecache_max=640MB
... that is just ~0.03% of memory on that box.


Hope this helps some,
Hein van den Heuvel (at gmail dot com)
HvdH Performance Consulting
Ed Loehr_1
Occasional Advisor

Re: OS buffer cache vs. DB shmem cache

I forgot to mention this is all on 64-bit HP-UX 11.23.

For what it's worth to others, we currently have one 64GB machine, call it "db4-64", running with a 19.2GB dbc_max setting, and one 16GB machine (call it "db3-16") running with dbc_max_pct=40% (6.4GB), both without issues so far (and without any "wow" improvement, either). Neither the global cache hit rate (Glance's GBL_MEM_CACHE_HIT_PCT), the FS I/O rate, the physical I/O rate, nor the number of I/O-blocked processes has improved (or degraded) after increasing these cache sizes. The db4-64 cache went from 3% (1.9GB) to 30% (19.2GB). That all makes me wonder if it's doing any good at all. db3-16 and db4-64 both have 80% cache hit rates.

Every night, db3-16 dumps 91GB of data and db4-64 dumps 140GB, all over the course of 6-8 hours. I think the dumps are effectively flushing all caches for that period, negatively impacting performance.
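For watching those metrics over time, a Glance adviser script can log them at each interval. A sketch, assuming the adviser syntax and flags described in glance(1); the file names are made up, and the metric names are the ones discussed in this thread:

```shell
# /tmp/cache.adv - print cache hit rate and physical I/O each interval
# (adviser syntax assumed; verify against the GlancePlus adviser docs)
cat > /tmp/cache.adv <<'EOF'
print GBL_STATTIME, " cache_hit_pct=", GBL_MEM_CACHE_HIT_PCT,
      " phys_io=", GBL_DISK_PHYS_IO_RATE
EOF

# Sample every 30 seconds, adviser output only, append to a log
glance -adviser_only -syntax /tmp/cache.adv -j 30 >> /var/tmp/cache.log
```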

We're moving soon to incremental DB backups via async replication, so the caches never have to be fully flushed and we can perhaps reduce consumption to a more sensible working-set size.

Computers were going to make life simpler; I now see it was not my life they were talking about. :) Thanks to all of you for your input.

Ed
Emil Velez
Honored Contributor

Re: OS buffer cache vs. DB shmem cache

If you are using filesystems for your Oracle tablespace files, you can keep the buffer cache from being used for those filesystems with the

mincache=direct

mount option in the fstab. This way your buffer cache will be reserved for the other filesystems.
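A sketch of what such an fstab entry might look like (the device and mount point are illustrative; mincache=direct is the option named above, and convosync=direct is commonly paired with it on VxFS to make synchronous I/O direct as well -- check mount_vxfs(1M) before using):

```shell
# /etc/fstab - bypass the OS buffer cache for the Oracle datafile filesystem
/dev/vg01/lvol_oradata /oradata vxfs delaylog,mincache=direct,convosync=direct 0 2
```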