values of these parameters
09-19-2002 12:40 AM
The machines almost all have the same configuration:
4 processors
2GB RAM
I have set the
dbc_max_pct 15%
dbc_min_pct 5%
ninode 7000
What values do you recommend?
Is setting the buffer cache to 300MB okay?
The machine is already paging a lot, which is why I suggested these values.
Increasing these values would, on the other hand, increase the paging activity.
Please advise with valuable inputs and examples.
Thanks
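For reference, a quick back-of-the-envelope check (plain POSIX shell, numbers taken from the post above) of what those dbc percentages work out to on a 2GB box:

```shell
# The dynamic buffer cache floats between dbc_min_pct and dbc_max_pct
# of physical RAM. Values from the post: 2GB RAM, 15% max, 5% min.
ram_mb=2048
dbc_max_pct=15
dbc_min_pct=5

max_cache_mb=$(( ram_mb * dbc_max_pct / 100 ))
min_cache_mb=$(( ram_mb * dbc_min_pct / 100 ))

echo "buffer cache floats between ${min_cache_mb}MB and ${max_cache_mb}MB"
```

So the "300MB" figure is really the dbc_max_pct ceiling (about 307MB); the cache can shrink to roughly 102MB under memory pressure.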
09-19-2002 12:50 AM
300MB is fine for buffer cache - UNLESS you are paging. If paging still occurs then you still have excessive memory pressure, so you've got to do something to ease the pressure. I would certainly reduce the buffer cache to, say, 100MB and see how that goes.
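A sketch of how such a change might be staged on an 11.x box with kmtune (syntax from memory - verify against your release; SAM can do the same, and the kernel must be rebuilt and the box rebooted for it to take effect):

```shell
# Sketch only -- check the kmtune(1M) man page on your system first.
kmtune -q dbc_max_pct          # show the current setting
kmtune -s dbc_max_pct=5        # ~5% of 2GB is roughly 100MB
kmtune -s dbc_min_pct=2
mk_kernel -o /stand/vmunix     # rebuild the kernel, then reboot
```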
09-19-2002 12:53 AM
Great!
Thanks stefan
but won't it be the case that the buffer cache becomes too small for the huge, extensive processes, which hurts performance on the other side?
09-19-2002 12:55 AM
Then why does HP have a default of 50%?
Was that defined for low-memory servers, which really required that amount?
09-19-2002 01:11 AM
If you're running extensive compilations which run for 20hrs then they sound CPU intensive to me. If this causes your server to page then this is definitely slowing down your performance. Having 300MB of buffer cache is not helping, so reducing buffer cache to free up some more memory should reduce (and hopefully eliminate) paging, which can only increase performance.
It's swings and roundabouts. More buffer cache is supposed to help performance, but if your server is paging then this outweighs the reason for having so much buffer cache. It's a matter of priorities. Priority 1 on any server should always be NO PAGING (if possible).
"Then why does HP have a default of 50%? Was that defined for low-memory servers, which really required that amount?"
That's a really good question. I don't know why they set it so high. I guess they think all new servers come with GBs and GBs of RAM, so it will only help performance to have such a large buffer cache - until you start using that memory with applications, when you will need to reduce the buffer cache.
09-19-2002 01:28 AM
So the reads would then hit disk rather than buffer cache, which should take up some time.
what about ninode?
Thanks again
09-19-2002 07:09 AM
09-19-2002 07:20 AM
Yes, if you reduce the buffer cache enough, paging will reduce or be eliminated and you could develop an I/O bottleneck. Then that would be your *CURRENT* bottleneck. At that point you would probably increase the buffer cache. You may have to make several adjustments to find the right balance between CPU bottleneck and I/O bottleneck.
If the balance is not acceptable, then you need to look at your options. In this case, probably more hardware (more memory, faster CPUs, Fibre Channel cards, etc).
HTH
Marty
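One way to watch that balance while making the adjustments Marty describes (tools that ship with HP-UX; intervals here are just examples):

```shell
# vmstat: watch the "po" (page-outs) column -- sustained non-zero values
# mean the box is still under memory pressure.
vmstat 5

# sar -b: buffer cache read/write hit ratios (%rcache / %wcache) --
# if these stay high after shrinking the cache, you gave up little.
sar -b 5 10

# sar -d: per-device busy% and queue length, to spot an emerging
# I/O bottleneck once paging is gone.
sar -d 5 10
```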
09-19-2002 07:27 AM
With your new hat you are going great!
You are right!
But when the company is not ready to invest in hardware and you still expect a good TAT (turnaround time) for the compilation build, there's the catch.
At present the compilation builds are taking around 20hrs.
They asked me how the TAT could be brought down to a very good figure like 10hrs.
What I suggested was that the servers are definitely lacking in memory; aCC compilations eat a lot of memory, but fixing that should only give a 3 to 4 hour improvement in TAT.
The rest of the drastic TAT reduction has to come from improving the compilation process itself, plus a big cleanup and tuning of the application.
That should give the proper results.
What do you admins feel about the same?
Thanks
09-19-2002 07:30 AM
First, I'll guess you're doing not just lots of paging, but application scratch as well (thus the ninode 7000 and dbc_min/max).
For FEA I actually reduce the dbc_max_pct to 10%.
Next, did you increase your shared memory areas?
Use approximately 60-80% free RAM for values on the following.
maxdsiz
maxdsiz_64bit
maxssiz
maxssiz_64bit
maxtsiz
maxtsiz_64bit
shmmax
This will allow the FEA apps to grab larger chunks of memory.
Next, how is the file structure built for your scratch areas? I typically use the largest block sizes, on a partition created over at least 2 spindles and OFF the OS drives, as wear and tear from FEA scratch can be horrible.
Hope these help.
Shannon
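A sketch of Shannon's 60-80% rule in portable shell arithmetic. The 1200MB free figure is hypothetical - use what glance or vmstat reports on your box; maxdsiz and its relatives take byte values:

```shell
# Hypothetical example: suppose ~1200MB of RAM is free after the OS and
# buffer cache. Size maxdsiz etc. at 60-80% of that, per the post above.
free_mb=1200
pct=70                               # middle of the 60-80% range

limit_mb=$(( free_mb * pct / 100 ))  # 840MB
limit_bytes=$(( limit_mb * 1024 * 1024 ))

echo "candidate maxdsiz: ${limit_bytes} bytes (~${limit_mb}MB)"
```

The same byte value would then be applied to maxdsiz_64bit, maxssiz, maxtsiz, and shmmax as appropriate for the workload.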
09-19-2002 07:35 AM
You mean that they eat up a lot of memory.
I used to get a lot of "not enough memory" errors with these compilation processes.
So there was a need to bump those values.
But Bill Hassel replied to me that the 32 bit executable cannot take more than 960MB in memory.
Just check this link
http://forums.itrc.hp.com/cm/QuestionAnswer/1,,0xb54ac4c76f92d611abdb0090277a778c,00.html
Thanks
09-19-2002 07:38 AM
In your case, I would set dbc_max_pct to no more than about 8%, and I find that on most 10.20 and 11.00 systems the best performance is obtained by statically sizing the buffer cache at somewhere between 300-400MB by setting bufpages to something around 80000. The default dbc_max_pct value of 50% is, in a word - stupid.
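Checking Clay's numbers (bufpages counts 4KB pages on HP-UX, so this is straight arithmetic):

```shell
# bufpages is in 4KB pages, so 80000 pages is a fixed ~312MB buffer
# cache -- inside the 300-400MB band recommended above.
bufpages=80000
page_kb=4

cache_mb=$(( bufpages * page_kb / 1024 ))
echo "bufpages=${bufpages} -> ${cache_mb}MB fixed buffer cache"
```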
09-19-2002 07:44 AM
You are simply fantastic!
But this ninode is dependent upon the kernel parameter
maxusers
which, if reduced, also reduces nproc, ncallout, etc.
Clay,with the kind of experience you have how would you suggest to reduce the TAT of the compilation build processes?
Please explain.
Thanks
09-19-2002 10:03 AM
Solution
If you are trying to reduce your compile/link times then the first purchase would need to be memory. This will allow you to eliminate pageouts and also increase buffer cache to about 400MB. I think you will find that a fixed buffer cache will offer better performance as it will reduce CPU overhead. Unless this is an 11.11 box, I don't think you are going to see any improvements in speed over about 400MB. If this is 11.11, somewhere between 800MB and 1GB might be in order if you have plenty of memory.
I assume that you are running makefiles. I have been absolutely astonished to find development shops that don't know how to use make and simply have shell scripts to compile and link everything. Make will tremendously reduce the workload, as only those objects that need to be recompiled will be processed.
In your case, you really need a fast machine with lots of memory and fast disk drives. If you are running on an older platform like a K then I suspect that a newer A-box, L-box, or N-box or the newer rpxxxx models will offer a tremendous increase in throughput. You should be able to find an A,L, or N at quite reasonable prices on the used-equipment market.
Regards, Clay
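To illustrate Clay's point about make, a minimal makefile sketch (file and target names are made up): make compares timestamps, so after editing only b.c a rebuild recompiles b.o and relinks, leaving a.o untouched - that is where the win over a compile-everything shell script comes from.

```make
CC = cc
OBJS = a.o b.o

prog: $(OBJS)
	$(CC) -o prog $(OBJS)

# classic suffix rule: rebuild a .o only when its .c is newer
.c.o:
	$(CC) -c $<
```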
09-19-2002 11:17 AM
As soon as you have solved that paging issue, you could try to increase (yes) "timeslice", say to 12, as your CPU-bound processes would get longer time-slices then. Of course, everything interactive would hate you...
How about using glance or MWA to identify your I/O hogs?
FWIW,
Wodisch
09-19-2002 11:19 AM
09-19-2002 11:32 AM
Get ninode as small as possible.
Also keep an eye on sar -v and observe the usage of the other parameters like nfile and nproc. Get them to only 25% more than required. They are tables in the kernel and you want them to be as small as possible.
If this box is only used for compilation and nothing else, then it is purely CPU bound. If you have 2GB RAM, I would not set more than 50MB for buffer cache. If you are paging a lot and you can't buy more memory, then try to get swap on a separate disk from the root disk. Define it with a higher priority so that it will be used first.
Stop unrequired daemons like snmpd if you are not using them.
Also, if you have turned on the swapmem_on parameter, turn it off. It does seem to lock the pages used for reserving swap.
-Sri
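A sketch of the monitoring Sri describes (sar ships with HP-UX; the interval and count here are just examples):

```shell
# Sample the kernel tables every 5 seconds, 12 times. The proc-sz,
# inod-sz and file-sz columns read "used/configured"; if used never
# gets near configured, the table (nproc, ninode, nfile) can shrink.
sar -v 5 12
```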
09-19-2002 05:57 PM
It is amazing that so much money would be spent on processors and disks and then cripple applications with too little RAM.
As mentioned, ninode has no significance except for HFS filesystems. The formula is obsolete and should NEVER be used. Set ninode to 500 and forget it. That will free up a few megs. The buffer cache size affects file rd/wt so larger is better until it creates memory pressure for processes that have to start paging. Paging has a much worse effect on performance than a smaller buffer cache. If your paging rate (vmstat's po value) is double digits or more, push the dbc_min down some more as well as dbc_max. Keep a minimum of 200 megs available for the buffer cache.
Otherwise, you're grasping at straws. To reduce a 20+ hour compile project to 10 hours without increasing RAM simply cannot be done. Sort of like wishing that the law of gravity should be overturned.
Bill Hassell, sysadmin
09-19-2002 07:54 PM
Thanks a lot for hitting the nail so hard.
What are the faster drives available on the market?
Doubling the memory and replacing the disks with faster drives should give a good TAT.
Hi everyone, can someone tell me a configuration which can provide faster results?
Thanks
09-19-2002 08:43 PM
Attached
Thanks
09-19-2002 10:36 PM
I read this recently for my purpose.
http://www.pctoday.com/editorial/hth/970129.htm
regards,
U.SivaKumar
09-19-2002 10:42 PM
Details to know to be smart when dealing with hard disk vendors. ;-)

Technology Name            Max Cable Length (m)  Max Speed (MBps)  Max Devices
SCSI-1                     6                     5                 8
SCSI-2                     6                     5-10              8 or 16
Fast SCSI-2                3                     10-20             8
Wide SCSI-2                3                     20                16
Fast Wide SCSI-2           3                     20                16
Ultra SCSI-3, 8-bit        1.5                   20                8
Ultra SCSI-3, 16-bit       1.5                   40                16
Ultra-2 SCSI               12                    40                8
Wide Ultra-2 SCSI          12                    80                16
Ultra-3 (Ultra160/m) SCSI  12                    160               16
regards,
U.SivaKumar
09-20-2002 12:03 AM
I don't recall you saying anywhere what your server model is or what disks you are using, but as an example here are some internal HP disks and their speeds (doing a timed dd on HP-UX):
18GB ST318404LC 33 MB/s
18GB ST118202LC 17 MB/s
9GB ST39103LC 26 MB/s
9GB ST39173WC 14 MB/s
So you can see a large difference here. If you can't add more memory, or find it easier to change the disks for faster ones, then this may help.
09-20-2002 12:32 AM
If, for example, you're doing development work, you might be able to get away with compiling at lower optimization levels until just before the final builds. Most compilers have abundant options for controlling optimization levels, time/space tradeoffs, etc.
You mention that you have 4 processors. How many compilations are run in parallel? Are there opportunities for tuning here (reducing the number of parallel compilations to reduce memory pressure)? You might well find that you get better throughput by doing fewer compilations in parallel.
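If the builds are driven by GNU make, the parallelism knob is -j (hypothetical invocation; the target name "all" is illustrative). Start below the CPU count and watch vmstat's page-out column while a build runs:

```shell
# 4 CPUs, but memory-hungry aCC jobs: try 2-3 parallel jobs first and
# only raise the count if paging stays at zero.
gmake -j2 all
```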
What have you done to optimize the system for compilation? I remember many years ago working at a place that did builds based on sources that were NFS-mounted from a source server. Processing of include files over NFS was horrible. We cut build time in half by copying all of the needed source files to a local disk at the start of each build. Seems obvious, but they ran for years over NFS until someone asked the dumb question.