Operating System - HP-UX

 
SOLVED
T G Manikandan
Honored Contributor

values of these parameters

I have some servers which are used for extensive compilations that run for around 20 hours.

The machines all have roughly the same configuration:
4 processors
2GB RAM

I have set:
dbc_max_pct 15%
dbc_min_pct 5%
ninode 7000

What values do you recommend?
Is a buffer cache of 300MB okay?
The machine is already paging a lot, which is why I suggested these values.
Increasing them further would only increase the paging activity.

Please advise, with examples if possible.

Thanks
27 REPLIES
Stefan Farrelly
Honored Contributor

Re: values of these parameters


300MB is fine for buffer cache - UNLESS you are paging. If paging still occurs then you still have excessive memory pressure, so you've got to do something to ease it. I would certainly reduce the buffer cache to, say, 100MB and see how that goes.
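
Something like this minimal sketch would do it (assuming an 11.x box with kmtune - on 10.20 you would edit /stand/system by hand instead - and double-check the exact syntax on your own system before rebuilding):

    # check the current settings
    kmtune -q dbc_max_pct
    kmtune -q dbc_min_pct
    # cap the dynamic buffer cache near 100MB on a 2GB box (100/2048 is ~5%)
    kmtune -s dbc_max_pct=5
    kmtune -s dbc_min_pct=2
    # rebuild the kernel and reboot for the change to take effect
    mk_kernel -o /stand/vmunix
    shutdown -ry 0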
I'm from Palmerston North, New Zealand, but somehow ended up in London...
T G Manikandan
Honored Contributor

Re: values of these parameters

It is really wonderful to have responses from you all - you are the bosses and I am the student still learning!
Great!
Thanks Stefan,

but won't a very small buffer cache hurt performance on the other side for huge, extensive processes?

T G Manikandan
Honored Contributor

Re: values of these parameters

Isn't that going in the opposite direction - increasing them would really kill the server, wouldn't it?

Why then does HP ship a default of 50%?
Was that defined for low-memory servers, which really required that amount?
Stefan Farrelly
Honored Contributor

Re: values of these parameters

"but wont it be the case where the buffer cache is very small for huge extensive processes which haunts the performance on the other side."

If you're running extensive compilations that take 20 hours, they sound CPU-intensive to me. If this causes your server to page then it is definitely slowing down your performance. Having 300MB of buffer cache is not helping, so reducing the buffer cache to free up some more memory should reduce (and hopefully eliminate) paging, which can only increase performance.

It's swings and roundabouts. More buffer cache is supposed to help performance, but if your server is paging then that outweighs the reason for having so much buffer cache. It's a matter of priorities. Priority 1 on any server should always be NO PAGING (if possible).

"What is the need then hp has a default of 50%. Was that defined for low memory servers,which really reqd that amount "

That's a really good question. I don't know why they set it so high. I guess they assume all new servers come with GBs and GBs of RAM, so such a large buffer cache will only help performance - until your applications start using that memory, at which point you will need to reduce the buffer cache.
I'm from Palmerston North, New Zealand, but somehow ended up in London...
T G Manikandan
Honored Contributor

Re: values of these parameters

I was thinking about the reverse - a decrease in performance because more read lookups would go to disk rather than the buffer cache, which should take some time.
What about ninode?


Thanks again
Martin Johnson
Honored Contributor

Re: values of these parameters

Everything is relative. If your processes are CPU bound and you are paging, then that is where your *CURRENT* bottleneck is and you should take steps to deal with it. You could add more memory, but that costs money. Reducing the buffer cache doesn't.

Yes, if you reduce the buffer cache enough, paging will reduce or be eliminated and you could develop an I/O bottleneck. Then that would be your *CURRENT* bottleneck. At that point you would probably increase the buffer cache. You may have to make several adjustments to find the right balance between CPU bottleneck and I/O bottleneck.

If the balance is not acceptable, then you need to look at your options. In this case, probably more hardware (more memory, faster CPUs, Fibre Channel cards, etc).

HTH
Marty
T G Manikandan
Honored Contributor

Re: values of these parameters

Martin,
with your new hat you are going great!

You are right!
But when the company is not ready to invest in hardware and you still expect a good TAT (turnaround time) for the compilation build, that is where things break down.

At present the compilation builds are taking around 20 hours.
They asked me how the TAT could be brought down to a very good figure like 10 hours.

What I suggested was that the servers are definitely lacking in memory - aCC compilations eat a lot of memory - but fixing that alone should only give a 3 to 4 hour improvement in TAT.
A drastic improvement in TAT would have to come from reworking the compilation process itself, plus a big cleanup and tuning of the application.
That should give the proper results.

What do you admins think about this?

Thanks
Shannon Petry
Honored Contributor

Re: values of these parameters

For compute-intensive apps, there is a lot that needs to be looked at.
First, I'll guess you're doing not just lots of paging, but also application scratch I/O (hence the ninode 7000 and the dbc_min/max settings).

For FEA I actually reduce the dbc_max_pct to 10%.
Next, did you increase your shared memory areas?
Use approximately 60-80% of free RAM for values on the following:
maxdsiz
maxdsiz_64bit
maxssiz
maxssiz_64bit
maxtsiz
maxtsiz_64bit
shmmax
This will allow the FEA apps to grab larger chunks of memory.
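
A sketch of how those might look with kmtune - the values below are illustrative for a 2GB box, not tested recommendations, so size them to your own free RAM (kernel rebuild and reboot required):

    kmtune -s maxdsiz=0x40000000        # 1GB max data segment (32-bit)
    kmtune -s maxdsiz_64bit=0x60000000  # 1.5GB for 64-bit processes
    kmtune -s maxssiz=0x10000000        # 256MB max stack
    kmtune -s maxtsiz=0x40000000        # 1GB max text segment
    kmtune -s shmmax=0x40000000         # 1GB max shared memory segment
    mk_kernel -o /stand/vmunix          # then reboot

(maxssiz_64bit and maxtsiz_64bit are set the same way.)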
Next, how is the filesystem built for your scratch areas? I typically use the largest block sizes, on a partition striped over at least 2 spindles and OFF the OS drives, as wear and tear from FEA scratch can be horrible.

Hope these help.
Shannon
Microsoft. When do you want a virus today?
T G Manikandan
Honored Contributor

Re: values of these parameters

I have maxdsiz, maxssiz and maxtsiz (and the 64-bit equivalents) set to the largest values for the 2GB of memory.
You mean that they eat up a lot of memory?

I used to get a lot of "not enough memory" errors from these compilation processes,
so there was a need to bump those values up.
But Bill Hassell replied to me that a 32-bit executable cannot take more than 960MB in memory.

Just check this link
http://forums.itrc.hp.com/cm/QuestionAnswer/1,,0xb54ac4c76f92d611abdb0090277a778c,00.html

Thanks
A. Clay Stephenson
Acclaimed Contributor

Re: values of these parameters

Ninode will almost certainly play no role in this. Ninode only applies to HFS filesystems, and I'll bet that your only HFS filesystem is /stand. You could very safely reduce it to a few hundred or a thousand and save precious memory. As for the sizing of the buffer cache, nothing is going to hurt you as much as excessive paging. You should always favor (at least within anything remotely resembling reasonable limits) a smaller buffer cache over pageouts.

In your case, I would set dbc_max_pct to no more than about 8%, and I find that on most 10.20 and 11.00 boxes the best performance is obtained by statically sizing the buffer cache at somewhere between 300-400MB, by setting bufpages to something around 80000. The default dbc_max_pct value of 50% is, in a word, stupid.
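
A sketch of the static sizing (bufpages counts 4KB pages, so 80000 pages is roughly 320MB; as I understand it, a non-zero bufpages fixes the cache size and the dbc_* percentages then stop governing it):

    kmtune -s bufpages=80000    # 80000 x 4KB pages = ~320MB fixed buffer cache
    kmtune -s dbc_max_pct=8     # the cap for the dynamic case, per the above
    mk_kernel -o /stand/vmunix  # rebuild, then reboot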
If it ain't broke, I can fix that.
T G Manikandan
Honored Contributor

Re: values of these parameters

Clay is back for my queries.
You are simply fantastic!

But this ninode depends, through the default formula, on the kernel parameter maxusers, which in turn drives nproc, ncallout, etc.

Clay,with the kind of experience you have how would you suggest to reduce the TAT of the compilation build processes?

Please explain.


Thanks
A. Clay Stephenson
Acclaimed Contributor
Solution

Re: values of these parameters

You don't have to use the formula to set ninode; you can simply replace the existing formula with a fixed value. 1,000 would perhaps be a good value if the only HFS filesystem is /stand. 1,000 will allow you to compile and link new kernels, which is the most intensive operation conducted in that filesystem.
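
A sketch of what that looks like - the formula shown is only approximately the default, so check your own /stand/system before editing:

    # before: ninode derived from the default formula, e.g.
    #   ninode ((NPROC+16+MAXUSERS)+32+(2*NPTY))
    # after: a fixed literal, plenty for /stand alone
    kmtune -s ninode=1000
    mk_kernel -o /stand/vmunix  # rebuild, then reboot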

If you are trying to reduce your compile/link times then the first purchase would need to be memory. This will allow you to eliminate pageouts and also increase the buffer cache to about 400MB. I think you will find that a fixed buffer cache will offer better performance, as it will reduce CPU overhead. Unless this is an 11.11 box, I don't think you are going to see any improvement in speed above about 400MB. If this is 11.11, somewhere between 800MB and 1GB might be in order if you have plenty of memory.

I assume that you are running makefiles. I have been absolutely astonished to find development shops that don't know how to use make and simply have shell scripts to compile and link everything. Make will tremendously reduce the workload, as only those objects that need to be recompiled will be processed.
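
For anyone still on shell scripts, a minimal sketch of the idea (a hypothetical two-file project; note that recipe lines must begin with a tab):

    # Makefile: only out-of-date objects get recompiled on each build
    CC     = aCC
    CFLAGS = +O2

    prog: main.o util.o
            $(CC) -o prog main.o util.o

    main.o: main.C util.h
            $(CC) $(CFLAGS) -c main.C

    util.o: util.C util.h
            $(CC) $(CFLAGS) -c util.C

Touch util.C alone and make recompiles one object and relinks, instead of rebuilding everything.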

In your case, you really need a fast machine with lots of memory and fast disk drives. If you are running on an older platform like a K-class then I suspect that a newer A-box, L-box, or N-box, or the newer rpxxxx models, will offer a tremendous increase in throughput. You should be able to find an A, L, or N at quite reasonable prices on the used-equipment market.

Regards, Clay



If it ain't broke, I can fix that.
Wodisch
Honored Contributor

Re: values of these parameters

Hi T.G.,

as soon as you have solved that paging issue, you could try to increase (yes, increase) "timeslice", say to 12, as your CPU-bound processes would then get longer time slices. Of course, everything interactive would hate you...

How about using glance or MWA to identify your I/O hogs?

FWIW,
Wodisch
Wodisch
Honored Contributor

Re: values of these parameters

Hi T.G.,

as soon as you have solved that paging issue, you could try to increase (yes, increase) "timeslice", say to 12, as your CPU-bound processes would then get longer time slices. Of course, everything interactive would hate you...

How about using glance or MWA to identify your memory hogs?
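
If glance isn't handy, a rough sketch using the XPG4 ps options on 11.x does the job too:

    # ten largest processes by virtual size (KB)
    UNIX95= ps -e -o vsz,pid,args | sort -rn | head -10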

FWIW,
Wodisch
Wodisch
Honored Contributor

Re: values of these parameters

Sorry about the double post - the second one is the one I wanted (why aren't the forums slow when you need it???)
Sridhar Bhaskarla
Honored Contributor

Re: values of these parameters

Hi T.G,

Get ninode as small as possible.

Also keep an eye on sar -v and watch the usage of the other parameters like nfile and nproc. Set them to only 25% more than required. They are tables in the kernel, and you want them to be as small as possible.
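
For example (a sketch; the proc-sz, inod-sz and file-sz columns report current/maximum table usage):

    # sample the kernel tables five times at 5-second intervals
    sar -v 5 5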

If this box is used only for compilation and nothing else, then it is purely CPU bound. If you have 2GB RAM, I would not set more than 50MB for buffer cache. If you are paging a lot and you can't buy more memory, then try to get swap on a separate disk from the root disk. Define it with a higher priority so that it will be used first.
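
A sketch of enabling such an area (the volume name is hypothetical; note that on HP-UX a LOWER priority number means the device is used first):

    # check the current swap layout and usage
    swapinfo -tam
    # enable the new device swap area at the highest priority (0)
    /usr/sbin/swapon -p 0 /dev/vg01/lvswap2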

Stop unneeded daemons like snmpd if you are not using them.

Also, if you have the swapmem_on parameter turned on, turn it off. It does seem to lock the pages used for reserving swap.

-Sri
You may be disappointed if you fail, but you are doomed if you don't try
Bill Hassell
Honored Contributor

Re: values of these parameters

You have a no-win situation. You are severely short on memory, so all those processors are starved for data because the disks are busy paging process space in and out of memory - a total waste of time.

It is amazing that so much money would be spent on processors and disks and then cripple applications with too little RAM.

As mentioned, ninode has no significance except for HFS filesystems. The formula is obsolete and should NEVER be used. Set ninode to 500 and forget it. That will free up a few megs. The buffer cache size affects file reads/writes, so larger is better - until it creates memory pressure for processes, which then have to start paging. Paging has a much worse effect on performance than a smaller buffer cache. If your paging rate (vmstat's po value) is in double digits or more, push dbc_min down some more, as well as dbc_max. Keep a minimum of 200 megs available for the buffer cache.
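
To put numbers on that, a quick sketch - watch the po column for sustained page-outs:

    # 12 samples at 5-second intervals; sustained double-digit po = trouble
    vmstat 5 12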

Otherwise, you're grasping at straws. Reducing a 20+ hour compile project to 10 hours without increasing RAM simply cannot be done. Sort of like wishing the law of gravity could be overturned.


Bill Hassell, sysadmin
T G Manikandan
Honored Contributor

Re: values of these parameters

I have got replies from all the big ..

Thanks a lot for hitting the nail so hard.
What are the faster drives available on the market?
Doubling the memory and replacing the drives with faster ones should give a good TAT.

Hi everyone, can someone suggest a configuration which would provide faster results?


Thanks
T G Manikandan
Honored Contributor

Re: values of these parameters

Also, I would like you all to check my kernel values.
Attached

Thanks
U.SivaKumar_2
Honored Contributor

Re: values of these parameters

Hi,
I read this recently for the same purpose.

http://www.pctoday.com/editorial/hth/970129.htm


regards,
U.SivaKumar

Innovations are made when conventions are broken
U.SivaKumar_2
Honored Contributor

Re: values of these parameters

Hi,
Details to know so you can be smart when dealing with hard disk vendors. ;-)

Technology Name             Max Cable Length (m)   Max Speed (MBps)   Max Devices
SCSI-1                      6                      5                  8
SCSI-2                      6                      5-10               8 or 16
Fast SCSI-2                 3                      10-20              8
Wide SCSI-2                 3                      20                 16
Fast Wide SCSI-2            3                      20                 16
Ultra SCSI-3, 8-bit         1.5                    20                 8
Ultra SCSI-3, 16-bit        1.5                    40                 16
Ultra-2 SCSI                12                     40                 8
Wide Ultra-2 SCSI           12                     80                 16
Ultra-3 (Ultra160/m) SCSI   12                     160                16

regards,
U.SivaKumar


Innovations are made when conventions are broken
Stefan Farrelly
Honored Contributor

Re: values of these parameters


I don't recall you saying anywhere what your server model is or what disks you are using, but as an example here are some internal HP disks and their speeds (from a timed dd on HP-UX):

18GB ST318404LC 33 MB/s
18GB ST118202LC 17 MB/s
9GB ST39103LC 26 MB/s
9GB ST39173WC 14 MB/s

So you can see a large difference here. If you can't add more memory, or find it easier to change the disks for faster ones, then this may help.
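
A sketch of the kind of timed dd behind those figures (the device file is hypothetical - point it at the right raw disk for your box, and note it is read-only):

    # read 1GB from the raw device and time it: MB/s = 1024 / real seconds
    timex dd if=/dev/rdsk/c2t6d0 of=/dev/null bs=256k count=4096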
I'm from Palmerston North, New Zealand, but somehow ended up in London...
doug hosking
Esteemed Contributor

Re: values of these parameters

Tuning the kernel and/or adding hardware is great, but is it worth stepping back a minute and thinking about the compilations themselves? For example, how often are they done? What is their purpose? How critical is the performance of the resulting binaries? Would reducing the optimization level of the compilations be an option? I would assume that doing so would save both time and memory.

If, for example, you're doing development work, you might be able to get away with compiling at lower optimization levels until just before the final builds. Most compilers have abundant options for controlling optimization levels, time/space tradeoffs, etc.
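
For example, a sketch with HP's aCC, which expresses optimization levels via the +O flags (module.C is just a placeholder):

    # day-to-day development builds: fast compiles, light optimization
    aCC +O1 -c module.C
    # final release builds only: heavy optimization, much slower compiles
    aCC +O3 -c module.C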

You mention that you have 4 processors. How many compilations are run in parallel? Are there opportunities for tuning here (reducing the number of parallel compilations to reduce memory pressure)? You might well find that you get better throughput by doing fewer compilations in parallel.
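
If the build uses GNU make, that is a one-flag experiment (a sketch; assumes gmake is installed and the Makefile is parallel-safe):

    # try 2 jobs instead of 4 and compare wall-clock time and paging
    gmake -j 2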

What have you done to optimize the system for compilation? I remember many years ago working at a place that did builds based on sources that were NFS-mounted from a source server. Processing of include files over NFS was horrible. We cut build time in half by copying all of the needed source files to a local disk at the start of each build. Seems obvious, but they ran for years over NFS until someone asked the dumb question.
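
A sketch of that kind of staging step (the paths are hypothetical; cpio -pdm recreates the tree on local disk):

    # copy the source tree from the NFS mount to local disk before building
    cd /nfs/source/project
    find . -print | cpio -pdm /local/build/project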