System Performance problem!

Fragon
Trusted Contributor


Hi all,
I have an L2000 box with 768MB of physical memory. Swap (lvol2) is set to 1024MB. The box runs a Progress DB system; the DB is about 1.7GB. All tables are integrated in the DB, but I work out that the largest table is about 400MB.
Right now system performance is very poor.
Of course disk I/O is a bottleneck; that has to be tuned.
Also, I found that device swap (#swapinfo -ta) is 5% used. At first I thought there was not enough memory, but then I found the kernel parameters for the buffer cache look strange (#sysdef):
bufpages 98304
dbc_max_pct 50
dbc_min_pct 5
Physical memory used for the buffer cache is up to 384MB.
Because this box is not in serious use, adding more physical memory is impossible. So I have some questions:
1. Should I decrease dbc_max_pct to free up some of the memory the buffer cache is using? (I know a smaller buffer cache hurts performance, but so does swap paging. Which should I address first?)
2. How do I decide the appropriate buffer cache size for the system? (Of course, the larger the better! But that leaves less memory available for everything else.)
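For reference, the 384MB figure follows directly from dbc_max_pct and the physical memory size; a quick sketch of the arithmetic (Python used purely for illustration, not HP-UX code):

```python
# Maximum dynamic buffer cache implied by dbc_max_pct,
# using the numbers from this post.
phys_mem_mb = 768      # physical memory
dbc_max_pct = 50       # from sysdef

max_cache_mb = phys_mem_mb * dbc_max_pct // 100
print(max_cache_mb)    # 384, matching the observed buffer cache ceiling
```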

Very sorry for my poor English, but I'll try to make my question clear! Thanks in advance.

-ux
8 REPLIES
Con O'Kelly
Honored Contributor

Re: System Performance problem!

The first point is that you are using a fixed buffer cache, as shown by bufpages=98304.
This is why your buffer cache is 384MB, i.e. (98304 x 4KB)/1024. Therefore changing dbc_max_pct will have no effect.
Secondly, 768MB is very little memory. If you have memory problems, then there is only one solution, and that's more RAM.

Have a look in glance (option m) to see the memory report.
Also look at vmstat, particularly the pageouts (po) column.

The larger the better is not really true for buffer cache. Generally a buffer cache of 400-500MB is more than sufficient. In your case you could try reducing the buffer cache, say to 250MB (set bufpages to 64000), and see if that helps.
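The conversion between bufpages and cache size (4KB pages) can be double-checked with a couple of lines; the numbers below are the ones from this thread (Python for illustration only):

```python
PAGE_KB = 4  # HP-UX buffer cache page size used in the arithmetic above

def bufpages_to_mb(bufpages):
    """Fixed buffer cache size in MB for a given bufpages setting."""
    return bufpages * PAGE_KB // 1024

def mb_to_bufpages(mb):
    """bufpages value needed for a target cache size in MB."""
    return mb * 1024 // PAGE_KB

print(bufpages_to_mb(98304))   # 384 -> the current 384MB cache
print(mb_to_bufpages(250))     # 64000 -> the suggested bufpages setting
```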

Cheers
Con
Michael Steele_2
Honored Contributor

Re: System Performance problem!

Cut dbc_max_pct in half to begin with:

dbc_max_pct = 25

But we need to see total percent swap utilized to be sure:

swapinfo -tam
sar -b 5 5

If the vicious cycle of allocating and deallocating memory for the buffer cache exists, it will show up as paging or swapping.

You may as well attach the results of the following, captured during a load:

vmstat 5 5
sar -v 5 5
sar -u 5 5
sar -d 5 5

Regarding "...Of course the larger, the better!....": wrong. Too much dynamic buffer cache results in a vicious cycle in which the kernel expands the cache when it is needed and shrinks it when the memory is needed elsewhere. That churn causes overhead and swapping.
Support Fatherhood - Stop Family Law
Steven E. Protter
Exalted Contributor

Re: System Performance problem!

Read this:
http://www2.itrc.hp.com/service/cki/search.do?category=c0&docType=Security&docType=Patch&docType=EngineerNotes&docType=BugReports&docType=Hardware&docType=ReferenceMaterials&docType=ThirdParty&searchString=UPERFKBAN00000726&search.y=8&search.x=28&mode=id&admit=-1335382922+1058237934142+28353475&searchCrit=allwords

Collect data with the attached script.

Your question is quite clear. A lot of people set dbc_max_pct to 5 or 10, making the range much smaller.

It's expensive, CPU-wise, for the cache to grow from 5% to 50%. It's expensive to resize at all.

Pay attention to HFS inodes. They are also very expensive. You might want to hard-code a value other than the default formula. We did.

The doc above is written by a really sharp HP performance guy. He's in Montana fishing right now. If you like the doc, pull for him to catch a big one.

SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
Tim Adamson_1
Honored Contributor

Re: System Performance problem!

Hi,

Here's my 2 cents worth - sorry 5 cents - they got rid of the 2 cent coin.

Assuming you have not set bufpages or nbuf to explicit values in /stand/system, you are using dynamic buffer cache (i.e., dbc_min_pct and dbc_max_pct).

I would set dbc_max_pct to 15 and monitor it. Don't worry if you find that its value has reached the maximum, as that will typically be caused by your backups.

I would also seriously consider adding another 500MB of swap. You can always take it off again if necessary. Adding swap may require a change to the maxswapchunks kernel parameter.
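As a rough sanity check on maxswapchunks, here is the arithmetic, assuming the common default of 2MB swap chunks (swchunk=2048 1KB blocks); verify the chunk size against your own kernel before relying on it (Python for illustration only):

```python
SWCHUNK_KB = 2048  # assumed default swchunk (in 1KB blocks), i.e. 2MB chunks

def chunks_needed(total_swap_mb):
    """Swap chunks required to cover a given total device swap, rounded up."""
    return (total_swap_mb * 1024 + SWCHUNK_KB - 1) // SWCHUNK_KB

# The 1024MB configured today plus the suggested extra 500MB:
print(chunks_needed(1024 + 500))   # 762; maxswapchunks must be at least this
```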


Hope it helps

Cheers!!

Yesterday is history, tomorrow is a mystery, today is a gift. That's why it's called the present.
A. Clay Stephenson
Acclaimed Contributor

Re: System Performance problem!

Because you are actually using some of the device swap space, you will be far better off reducing your buffer cache. I would set bufpages to no more than half of its current value. If bufpages is non-zero, then dbc_max_pct and dbc_min_pct don't matter at all, because you are not using dynamic buffer cache. Make sure that nbuf is set to zero. Nothing harms your system's performance more than swapping. You really need more memory in this box.
If it ain't broke, I can fix that.

Re: System Performance problem!

I too have a Progress DB on an L2000, running HP-UX 11.00. I agree that you need more memory; we have 8GB of RAM on a development server running 17 Progress DBs. Our DBs average 10GB, with four at 30GB each. Our production server is an L3000 with 8GB of RAM running three Progress DBs: one at 30GB and the other two at 10GB each.

I'm new as Sys Admin, so you may want to do some research on these below...

Our bufpages and nbuf are both 0, and this is why:
If bufpages is zero at system boot time, the system allocates two pages for every buffer header defined by nbuf. If bufpages and nbuf are both zero, the system enables dynamic buffer cache allocation and allocates a percentage of available memory not less than dbc_min_pct nor more than dbc_max_pct, depending on system needs at any given time.
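That boot-time rule can be sketched as a small decision function (illustrative only; the function and its return strings are mine, not kernel code):

```python
def buffer_cache_mode(bufpages, nbuf):
    """Sketch of how HP-UX picks fixed vs. dynamic buffer cache at boot."""
    if bufpages == 0 and nbuf == 0:
        return "dynamic: dbc_min_pct..dbc_max_pct of memory"
    if bufpages == 0:
        return "fixed: two pages per nbuf buffer header"
    return "fixed: bufpages pages"

print(buffer_cache_mode(0, 0))      # dynamic - our setting
print(buffer_cache_mode(98304, 0))  # fixed - the original poster's setting
```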

What is your I/O timeout? This can be seen with pvdisplay /dev/dsk/??disk??
Ours was the default of 30; I upped it to 90 and saw some improvement.

But more memory is the big thing. I pulled out our 256MB DIMMs, replaced them with 512MB ones, and added another memory carrier full of 512MB DIMMs. This was our most noticeable improvement, back when our DB was under 10GB and I didn't have 160 users hitting it.
Brian DelPizzo
Frequent Advisor

Re: System Performance problem!

Set nbuf and bufpages to 0 (this activates dynamic buffer cache), then set dbc_max_pct to 5 and dbc_min_pct to 5. When you add more memory you can re-adjust these. Since you are running Progress, the buffer cache isn't going to help you as much as it would a filesystem-intensive application.

Dynamic buffer cache is not as dynamic as you might think. For instance, it really never gives up memory when the system is short; it will run you right up to 100% utilization. Better to keep it fixed and low when you are short on physical memory.

Add at least another gig of swap. If you can spread it out over two mirrored pairs and set the swap priorities equally, you can benefit from swap interleaving. That's good for a boost when swapping does occur.

Add more memory! This box needs another gig.
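The interleaving benefit comes from equal-priority swap devices taking pageouts in round-robin fashion. A toy model of the idea (not HP-UX code; device names are made up):

```python
from itertools import cycle

def interleave_writes(pages, devices):
    """Toy model: equal-priority swap devices receive pages round-robin."""
    placement = {dev: 0 for dev in devices}
    for _, dev in zip(range(pages), cycle(devices)):
        placement[dev] += 1
    return placement

# 1000 pageouts spread evenly across two equal-priority swap areas:
print(interleave_writes(1000, ["lvol2", "lvol9"]))  # 500 pages each
```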
Brad Kozak
Valued Contributor
Solution

Re: System Performance problem!