
dbc_min_pct calculation

 
SOLVED
CGEYROTH
Frequent Advisor

dbc_min_pct calculation

Hi

I've been looking at the kernel parameter help page for dbc_min_pct (http://docs.hp.com/en/939/KCParms/KCparam.DBCminPct.html). That page gives a formula for working out a conservative value for dbc_min_pct:

snip....................
"(number of system processes) * (largest file-system block size) / 1024

To determine the value for dbc_min_pct, divide the result by the number of Mbytes of physical memory installed in the computer and multiply that value by 100 to obtain the correct value in percent.

Only those processes that actively use disk I/O should be included in the calculation. All others can be excluded. Here are some examples of what processes should be included in or excluded from the calculation:

Include:
NFS daemons, text formatters such as nroff, database management applications, text editors, compilers, etc. that access or use source and/or output files stored in one or more file systems mounted on the system.

Exclude:
X-display applications, hpterm, rlogin, login shells, system daemons, telnet or uucp connections, etc. These processes use very little, if any, disk I/O."
snip..........................

So I did a 'ps -el | grep -v root | wc -l' to get the number of processes running (most of my processes run under non-root accounts); this gave me 351.

I then used 'mkfs -m' on various Oracle filesystems to get the filesystem block size; the biggest was 2048 (is this the correct way to get the filesystem block size?).

So the resulting calculation was:

(351*2048)/1024 = 702

Then divide 702 by the memory in the box (3072 MB) and multiply by 100:

(702/3072)*100 = 22 <---- that as a percentage for dbc_min_pct seems high to me. Am I using the calculation correctly? Presently the box has dbc_min_pct set to 2% and dbc_max_pct set to 5%.
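Here is the same arithmetic as a small shell sketch, using the numbers from above (only the integer truncation differs from doing it by hand; note the HP doc says to count only processes that actively do disk I/O, so counting all 351 non-root processes may inflate the result):

```shell
#!/bin/sh
# Sketch of the HP doc formula for dbc_min_pct, with this thread's numbers.
nproc=351      # from: ps -el | grep -v root | wc -l
blksz=2048     # largest file-system block size, in bytes
mem_mb=3072    # installed physical memory, in MB

result=$(( nproc * blksz / 1024 ))      # 702
pct=$(( result * 100 / mem_mb ))        # integer percent
echo "dbc_min_pct suggestion: ${pct}%"  # prints: dbc_min_pct suggestion: 22%
```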

It is a L2000 (rp5450) with 3GB of memory and the following swap configuration.

uhpatca1:/home/root# swapinfo -mt
             Mb      Mb      Mb   PCT  START/      Mb
TYPE      AVAIL    USED    FREE  USED   LIMIT RESERVE PRI  NAME
dev        2048       0    2048    0%       0       -   1  /dev/vg00/lvol2
reserve       -    1974   -1974
memory     2415     746    1669   31%
total      4463    2720    1743   61%       -       0   -
8 REPLIES
A. Clay Stephenson
Acclaimed Contributor

Re: dbc_min_pct calculation

I find those formulae to be all but useless because the real answer is "it depends". You really have to tune these values to match your system. One thing to note is that you (potentially) don't have enough swap, although you may be fine. Your current dbc_xxx_pct values are a bit on the low side: the cache varies between ~60 MiB and ~150 MiB. Even if you are running raw I/O for your applications, that is a small amount of cache for normal UNIX activity. I would bump dbc_min_pct in your case up to about 8% and dbc_max_pct up to about 25%; those should be reasonable values to start. It also matters which version of the OS you are running: 11.11 is quite good at dynamically adjusting the memory for buffer cache, so you might consider higher values, and 11.23 is even better. If you are running 11.0, I would not allow the buffer cache to exceed 800 MiB. In any event, you want to use Glance (or sar) to find the sweet spot for your system.
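To see why those percentages are small in absolute terms, converting them to cache sizes is a one-liner (a sketch for the 3 GB box in this thread, with the current 2%/5% settings):

```shell
#!/bin/sh
# Convert the current dbc percentages into rough buffer-cache sizes.
mem_mb=3072                       # 3 GB box
min_mb=$(( mem_mb * 2 / 100 ))    # dbc_min_pct=2 -> ~61 MiB
max_mb=$(( mem_mb * 5 / 100 ))    # dbc_max_pct=5 -> ~153 MiB
echo "buffer cache floats between ${min_mb} and ${max_mb} MiB"
```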
If it ain't broke, I can fix that.
CGEYROTH
Frequent Advisor

Re: dbc_min_pct calculation

I thought it was a bit low given the comments I've read from you, SEP and Bill on other threads. What should I be looking for in sar to find the 'sweet spot'?

Regarding swap, I decided not to increase it (based on comments I've seen by Bill H.) but to monitor it and see if more swap is required. I haven't seen any swapping so far (swapinfo or vmstat), but I may look to increase this later as we are adding 4GB of memory to the system in due course.

BTW OS is 11.00
CGEYROTH
Frequent Advisor

Re: dbc_min_pct calculation

Also what is the impact of having values that are too low for dbc_min_pct and dbc_max_pct?
A. Clay Stephenson
Acclaimed Contributor
Solution

Re: dbc_min_pct calculation

I too am an advocate of small amounts of swap --- but that applies to systems with "lots" of memory --- and 3GiB really doesn't qualify these days. On smaller systems, at least a 1:1 swap to memory ratio is a safer alternative. If you aren't paging out to a significant degree then you are fine.

You find the buffer cache sweet spot by examining the output of sar -b or using Glance to see the read-cache and write-cache % hit rates. You are looking for the point where increases in the size of the buffer cache make only very small increases in the hit rate. At that point, you have gone past the sweet spot and should reduce the size of the buffer cache. Be careful to gather your metrics during equivalent periods of system activity so that you are comparing apples to apples. Above all (and regardless of improving cache hit rates), if you start seeing significant pageout rates (e.g. the po column in vmstat > ~8 or so) then you have gone too far.
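A sketch of pulling the hit rates out of sar -b output: the sample lines below stand in for real output (on a live box you would pipe something like 'sar -b 60 5' in instead), and the column positions assume the usual HP-UX sar -b layout (%rcache in column 4, %wcache in column 7).

```shell
#!/bin/sh
# Average %rcache / %wcache across sar -b samples.
# Columns assumed: time bread/s lread/s %rcache bwrit/s lwrit/s %wcache ...
sample='12:00:01   5  500  99   3  100  97   0   0
12:01:01  50  400  88  10  120  92   0   0'

echo "$sample" | awk '{ r += $4; w += $7; n++ }
    END { printf "avg %%rcache=%.1f  %%wcache=%.1f\n", r/n, w/n }'
```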

The typical sweet spots for systems with lots of memory are in this range:

11.0 ~ 400-800MiB
11.11 ~ 800-1600MiB
11.23 ~ 800-3000MiB

but, alas, yours is not a box with "lots" of memory, so about 600MiB is as large as you should shoot for.

If it ain't broke, I can fix that.
Bill Hassell
Honored Contributor

Re: dbc_min_pct calculation

The only way to locate a sweet spot is to prepare a fully automated test set, then adjust the two dbc percentages. This should not take more than a couple of weeks of testing to find the best setting.

As Clay said, all those comments and the formula values are just guidelines. With such a small amount of RAM (by today's big-server standards), your dbc_min_pct setting of 2 is the correct number. Note that 2 is the minimum anyway.

Now the reason that the buffer cache is an inexact science is that managing it has radically changed over the years. Many years ago, it was just a linked list and required a LOT of system overhead to search. Later it was changed to a hash table but with a lot of entries per hash. Again, a very large buffer cache would consume a lot of system CPU time to manage.

But starting at 11.11, the buffer cache has been significantly changed to reduce the CPU cycles needed to find and update the entries. Additionally, the syncer has become multithreaded so clearing old writes can proceed in parallel. Couple all this with a fast Itanium box with 8 or more processors and 24 GB or more of RAM, and a buffer cache of 8 to 16 GB actually shows much better performance than a 2 GB cache.

As always, your mileage may vary -- just like compression depends on the actual data, the best buffer cache size depends very much on the read/write ratio, sequential versus random records, and large versus small record sizes.

Here is a link that explains it quite well:

http://www.docs.hp.com/en/5971-2383/5971-2383.pdf

Note also that the buffer cache handler will be dramatically different for the next HP-UX release.


Bill Hassell, sysadmin
CGEYROTH
Frequent Advisor

Re: dbc_min_pct calculation

Thanks for your informative answers. I have one final question: I have kmeminfo on the servers (due to a previous problem), and running this executable with the -s option gives me the information below (including memory usage and buffer cache). Is there any harm in using this to track buffer cache and memory usage (I don't have Glance)? Does it have any system impact if I run it every 5 minutes, so I can align the results with sar stats?


----------------------------------------------------------------------
Physical memory usage summary (in page/byte/percent):

Physmem      =  786432    3.0g  100%  Physical memory
Freemem      =   29562  115.5m    4%  Free physical memory
Used         =  756870    2.9g   96%  Used physical memory
  System     =  234497  916.0m   30%  By kernel:
    Static   =   44473  173.7m    6%    for text/static data
    Dynamic  =  149140  582.6m   19%    for dynamic data
    Bufcache =   39321  153.6m    5%    for buffer cache
    Eqmem    =      27  108.0k    0%    for equiv. mapped memory
    SCmem    =    1536    6.0m    0%    for critical memory
  User       =  526450    2.0g   67%  By user processes:
    Uarea    =    7752   30.3m    1%    for thread uareas
Disowned     =       8   32.0k    0%  Disowned pages
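What I had in mind for the 5-minute tracking is something like this sketch: the sample text stands in for real kmeminfo -s output so the filter can be seen working, and the cron line (and kmeminfo path) are assumptions to adjust for the actual box.

```shell
#!/bin/sh
# Pull just the free-memory and buffer-cache lines out of kmeminfo -s
# style output. Field positions follow the paste above: name = pages size pct ...
sample='Freemem  =   29562 115.5m   4% Free physical memory
Bufcache =   39321 153.6m   5% for buffer cache'

echo "$sample" | awk '/Freemem|Bufcache/ { print $1, $4, $5 }'
# On the real box, a cron entry like this (path assumed) would align
# samples with sar:
#   */5 * * * * /usr/contrib/bin/kmeminfo -s | grep -E "Freemem|Bufcache" >> /var/adm/bufcache.log
```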

P.S. One of these days I need to take the time to understand virtual memory management on HP-UX; I find it so difficult to get my head around.
Bill Hassell
Honored Contributor

Re: dbc_min_pct calculation

kmeminfo has almost no load at all -- it just reports selected values from the kernel. As far as understanding memory management in HP-UX, I would strongly recommend the "HP-UX 11i Internals" book by the two Chrises (Chris Cooper and Chris Moore).

However, like most commercial Unix flavors, memory is extremely complicated and the simple model of a program in memory doesn't exist. Instead, the unchanging code (the text area) is shared by all copies of the same program. Most programs are compiled to use shared libraries to reduce memory requirements. The data area is a separate area as are stack and I/O spaces. Then there's shared memory segments allowing programs to share common data. And memory mapped files. And of course, the buffer cache and all the kernel portions of RAM.

Knowing all of this doesn't help too much, though. Very simply, if you run out of RAM, processes will start deactivating and then paging out to swap space. This is seen in vmstat's po column. The fix: fewer processes or more RAM.
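A sketch of that check: flag vmstat samples whose po rate looks high. The sample lines stand in for live 'vmstat 5' output, and the column index assumes the standard HP-UX vmstat layout (r b w avm free re at pi po ...), so verify against the header on your box.

```shell
#!/bin/sh
# Flag vmstat samples where the po (pageout) column exceeds ~8 per second.
# po is assumed to be field 9 here -- check your vmstat header first.
sample='2 0 0 123456 29562 12  4 0  0  0 0  0
3 0 0 123456 21000 30 12 0 15 20 0 40'

echo "$sample" | awk '$9 > 8 { print "heavy pageout:", $9 "/s" }'
```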


Bill Hassell, sysadmin
Steven E. Protter
Exalted Contributor

Re: dbc_min_pct calculation

Shalom,

Bill Hassell pointed out to me that the effect of changing the dbc parameters is radically different from version to version of the OS.

11.23 seems to actually have a buffer cache that can enhance Oracle performance.

Beware.

SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com