dbc_min_pct calculation
10-06-2006 04:45 AM
I've been looking at the kernel parameter help page for dbc_min_pct (http://docs.hp.com/en/939/KCParms/KCparam.DBCminPct.html). That page gives a formula for working out a conservative value for dbc_min_pct, expressed as:
snip....................
"(number of system processes) * (largest file-system block size) / 1024
To determine the value for dbc_min_pct, divide the result by the number of Mbytes of physical memory installed in the computer and multiply that value by 100 to obtain the correct value in percent.
Only those processes that actively use disk I/O should be included in the calculation. All others can be excluded. Here are some examples of what processes should be included in or excluded from the calculation:
Include:
NFS daemons, text formatters such as nroff, database management applications, text editors, compilers, etc. that access or use source and/or output files stored in one or more file systems mounted on the system.
Exclude:
X-display applications, hpterm, rlogin, login shells, system daemons, telnet or uucp connections, etc. These processes use very little, if any, disk I/O."
snip..........................
So I did a 'ps -el | grep -v root | wc -l' to get the number of processes running (most of my processes run under non-root accounts); this gave me 351.
I then used 'mkfs -m' on various Oracle filesystems to get the filesystem block size; the biggest was 2048. (Is this the correct way to get the filesystem block size?)
So the resulting calculation was:
(351 * 2048) / 1024 = 702
Then divide 702 by the memory in the box (3072 MB) and multiply by 100:
(702 / 3072) * 100 = 22 <---- that as a percentage for dbc_min_pct seems high to me. Am I using the calculation correctly? At present the box has dbc_min_pct set to 2% and dbc_max_pct set to 5%.
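For what it's worth, the arithmetic above can be sketched as a small shell snippet. The inputs (351 processes, 2048-byte blocks, 3072 MB of RAM) are the numbers from this post, not authoritative values, and whether 'ps -el' is the right process count is exactly the open question:

```shell
#!/bin/sh
# Sketch of the docs.hp.com dbc_min_pct formula using this post's numbers.
nproc=351       # ps -el | grep -v root | wc -l
blksize=2048    # largest file-system block size, in bytes
mem_mb=3072     # physical memory, in MB

kb=$(( nproc * blksize / 1024 ))   # (processes * block size) / 1024
pct=$(( kb * 100 / mem_mb ))       # as a percent of physical memory
echo "suggested dbc_min_pct = ${pct}"
```

Note the integer division: 70200 / 3072 truncates to 22, matching the figure above.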
It is a L2000 (rp5450) with 3GB of memory and the following swap configuration.
uhpatca1:/home/root# swapinfo -mt
             Mb      Mb      Mb   PCT  START/      Mb
TYPE      AVAIL    USED    FREE  USED   LIMIT RESERVE  PRI  NAME
dev        2048       0    2048    0%       0       -    1  /dev/vg00/lvol2
reserve       -    1974   -1974
memory     2415     746    1669   31%
total      4463    2720    1743   61%       -       0    -
10-06-2006 05:02 AM
Re: dbc_min_pct calculation
10-06-2006 05:13 AM
Re: dbc_min_pct calculation
Regarding swap, I decided not to increase it (based on comments I've seen from Bill H.) but to monitor it and see if more swap is required. I haven't seen any swapping so far (swapinfo or vmstat), but I may look at upping this later, as we are adding 4GB of memory to the system in due course.
BTW OS is 11.00
10-06-2006 05:15 AM
Re: dbc_min_pct calculation
10-06-2006 06:36 AM
Solution
You find the buffer cache sweet spot by examining the output of sar -b, or by using Glance, to see the read-cache and write-cache hit rates (%). You are looking for the point where increases in the size of the buffer cache produce only very small increases in the hit rate. Beyond that point you have gone past the sweet spot and should reduce the size of the buffer cache. Be careful to gather your metrics during equivalent periods of system activity so that you are comparing apples to apples. Above all (and regardless of improving cache hit rates), if you start seeing significant pageout rates (e.g. the po column in vmstat above roughly 8) then you have gone too far.
The typical sweet spots for systems with lots of memory are in this range:
11.0 ~ 400-800MiB
11.11 ~ 800-1600MiB
11.23 ~ 800-3000MiB
but, alas, yours is not a box with "lots" of memory, so roughly 600MiB is about as large as you should shoot for.
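As a rough sketch of the sar -b approach described above, here is one way to average the hit rates over a sampling run. The sample output below is fabricated for illustration (a live run would be something like `sar -b 60 30`), and column positions should be checked against your own HP-UX release:

```shell
#!/bin/sh
# Average %rcache / %wcache from sar -b output. Sample data is illustrative
# only; on a live box you would pipe the output of `sar -b 60 30` instead.
sar_sample='HP-UX uhpatca1 B.11.00 U 9000/800
12:00:01 bread/s lread/s %rcache bwrit/s lwrit/s %wcache pread/s pwrit/s
12:05:01       5     320      98       3      40      92       0       0
12:10:01       7     410      98       4      55      93       0       0'

echo "$sar_sample" | awk '
    NR > 2 { r += $4; w += $7; n++ }      # sum hit-rate columns
    END    { printf "avg %%rcache = %.1f  avg %%wcache = %.1f\n", r/n, w/n }'
```

If growing the cache stops moving these averages, you are past the sweet spot.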
10-06-2006 11:10 AM
Re: dbc_min_pct calculation
As Clay said, all those comments and the formula values are just guidelines. With such a small amount of RAM (by today's big server standards) your minimum dbc_min_pct set to 2 is the correct number. Note that 2 is the minimum anyway.
Now the reason that the buffer cache is an inexact science is that managing it has radically changed over the years. Many years ago, it was just a linked list and required a LOT of system overhead to search. Later it was changed to a hash table but with a lot of entries per hash. Again, a very large buffer cache would consume a lot of system CPU time to manage.
But starting with 11.11, the buffer cache was significantly changed to reduce the CPU cycles needed to find and update the entries. Additionally, the syncer has become multithreaded, so flushing old writes can proceed in parallel. Couple all this with a fast Itanium box with 8 or more processors and 24GB or more of RAM, and a buffer cache of 8 to 16GB actually shows much better performance than a 2GB cache.
As always, your mileage may vary -- just as compression depends on the actual data, the best buffer cache size depends very much on the read/write ratio, sequential versus random access, and large versus small record sizes.
Here is a link that explains it quite well:
http://www.docs.hp.com/en/5971-2383/5971-2383.pdf
Note also that the buffer cache handler will be dramatically different for the next HP-UX release.
Bill Hassell, sysadmin
10-09-2006 03:26 AM
Re: dbc_min_pct calculation
----------------------------------------------------------------------
Physical memory usage summary (in page/byte/percent):

Physmem  = 786432   3.0g 100%  Physical memory
Freemem  =  29562 115.5m   4%  Free physical memory
Used     = 756870   2.9g  96%  Used physical memory
System   = 234497 916.0m  30%  By kernel:
Static   =  44473 173.7m   6%    for text/static data
Dynamic  = 149140 582.6m  19%    for dynamic data
Bufcache =  39321 153.6m   5%    for buffer cache
Eqmem    =     27 108.0k   0%    for equiv. mapped memory
SCmem    =   1536   6.0m   0%    for critical memory
User     = 526450   2.0g  67%  By user processes:
Uarea    =   7752  30.3m   1%    for thread uareas
Disowned =      8  32.0k   0%  Disowned pages

uhpatca1:/usr/capgem/scripts# file kmeminfo
P.S. One of these days I need to take the time to understand virtual memory management on HP-UX; I find it so difficult to get my head around.
10-09-2006 11:23 AM
Re: dbc_min_pct calculation
However, like most commercial Unix flavors, memory is extremely complicated, and the simple model of a program in memory doesn't exist. Instead, the unchanging code (the text area) is shared by all copies of the same program. Most programs are compiled to use shared libraries to reduce memory requirements. The data area is a separate area, as are stack and I/O spaces. Then there are shared memory segments, allowing programs to share common data. And memory-mapped files. And of course, the buffer cache and all the kernel portions of RAM.
Knowing all of this doesn't help too much, though. Very simply, if you run out of RAM, processes will start deactivating and then paging out to swap space. This shows up in vmstat in the po column. The fix: fewer processes or more RAM.
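A quick sketch of pulling that po column out of vmstat output. The sample line below is made up for illustration, and the field position is assumed from the usual HP-UX vmstat header layout, so verify it against your own output:

```shell
#!/bin/sh
# Extract the pageout rate (po) from vmstat output. Sample data is made up;
# on a live system you would run e.g. `vmstat 5 12` instead.
vmstat_sample='     procs           memory                 page             faults       cpu
 r  b  w    avm   free  re at  pi  po  fr de  sr   in    sy   cs us sy id
 2  0  0  12345  29562   5  0   1   0   0  0   0  400  1200  300 10  5 85'

echo "$vmstat_sample" | awk 'NR > 2 { print "po =", $9 }'   # $9 is the po column
```

Sustained po values much above ~8, per the solution above, mean the buffer cache (or something else) is eating too much memory.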
Bill Hassell, sysadmin
10-09-2006 11:44 AM
Re: dbc_min_pct calculation
Bill Hassell pointed out to me that the effect of changing the dbc parameters is radically different from version to version of the OS.
11.23 actually seems to have a buffer cache that can enhance Oracle performance.
Beware.
SEP
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com