High IPC/Sem Wait %
01-02-2002 09:28 AM
I have a system with processes showing a high IPC & Sem Wait % (PROC_[IPC|SEM]_WAIT_PCT in MeasureWare/Glance). My first reaction was to check semaphores, and indeed I think semmnu & semume are too low (30 & 10).
o Does anyone have anything else I should check?
o Any suggested values for the two above? (I have seen anything from 120 & 50 up to 4096 & 128!)
o Any other suggestions?
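For reference, the current settings and in-use counts can be checked with something like the following (HP-UX 11.x commands from memory, so the exact options are worth verifying):

ipcs -sa                 # list the semaphore sets currently allocated, with owners and sizes
kmtune -l | grep -i sem  # current and planned values of the sem* kernel tunables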
Cheers
Tim
01-02-2002 09:33 AM
Re: High IPC/Sem Wait %
Rather than telling you exactly what I would use, you need to decide what is best for your shop.
May I suggest checking all your sem* parameters and reviewing what your init.ora has set for processes.
These couple of threads may help you ensure all your semaphore parameters are set properly, and of course that you have the memory to run at those settings:
Doc on all parms:
http://docs.hp.com//hpux/onlinedocs/os/KCparams.OverviewAll.html
Thread on calc/semmaphores:
http://forums.itrc.hp.com/cm/QuestionAnswer/1,,0x10ca6af52b04d5118fef0090279cd0f9,00.html
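As a rough sanity check (assuming Oracle here, since init.ora was mentioned; the path below is only the conventional location, so adjust it to wherever your init.ora lives), compare what the instances are allowed to start against what the kernel provides:

grep -i "^processes" $ORACLE_HOME/dbs/init*.ora   # processes each instance may start
kmtune -q semmns                                  # system-wide total of semaphores
kmtune -q semmni                                  # number of semaphore identifiers

The usual rough rule is that semmns should comfortably exceed the sum of the processes values across all instances.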
Hope this helps,
Rit
01-02-2002 09:41 AM
Re: High IPC/Sem Wait %
Without knowing how your processes work, it's a little difficult to calculate the proper semaphore requirements. Here is an example from one of my systems:
sema 1
semaem 16384
semmap 5200
semmni 2600
semmns 5200
semmsl 2048
semmnu 1300
semume 64
semvmx 32767
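If you do change any of these, remember they are not dynamic; the new values have to be built into a kernel. On 11.0 the sequence is roughly the following (from memory, so verify the options, or just use SAM's Kernel Configuration area; the two values shown are simply the ones from the example above):

kmtune -s semmnu=1300    # stage the new value in /stand/system
kmtune -s semume=64
mk_kernel                # build a new kernel from the staged values
kmupdate                 # install the new kernel at the next shutdown
shutdown -r -y 0         # reboot to pick up the new semaphore limits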
live free or die
harry
01-02-2002 11:02 AM
Re: High IPC/Sem Wait % - Solution
Here is a kernel example from one of my systems running 11.00/64-bit on an L3000. This server runs our ob2 application only.
Tunable parameters
STRMSGSZ 65535
bufpages 0
create_fastlinks 1
dbc_max_pct 15
dbc_min_pct 15
max_thread_proc 256
maxfiles 2048
maxfiles_lim 2048
maxswapchunks 4096
maxuprc ((NPROC*9)/10)
maxusers 250
maxvgs 80
msgmap (MSGTQL+2)
msgmax 32768
msgmnb 65535
msgmni (NPROC)
msgseg (MSGTQL*4)
msgssz 128
msgtql (NPROC*10)
ncallout 2064
nfile (15*NPROC+2048)
nflocks (NPROC)
ninode (8*NPROC+2048)
nkthread 2048
nproc ((MAXUSERS*3)+64)
nstrpty 60
nstrtel (MAXUSERS)
nswapdev 25
semmni (NPROC*5)
semmns (SEMMNI*2)
semmnu (NPROC-4)
semume 64
semvmx 32768
shmmax 0X40000000
shmmni 512
shmseg 32
timeslice 1
unlockable_mem (MAXUSERS*10)
Just a guide, not a rule of thumb. The link that Rita posted is excellent for explaining each parameter in detail and how the parameters relate to each other.
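For what it's worth, plugging maxusers 250 into those formulas works out roughly as follows (my arithmetic, so double-check against the kmtune output on the box):

nproc   = (MAXUSERS*3)+64 = (250*3)+64 = 814
maxuprc = (NPROC*9)/10    = (814*9)/10 = 732
semmni  = NPROC*5         = 814*5      = 4070
semmns  = SEMMNI*2        = 4070*2     = 8140
semmnu  = NPROC-4         = 814-4      = 810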
01-02-2002 11:58 AM
Re: High IPC/Sem Wait %
I just noticed that Frank has obviously used one of the stupid tuned parameter sets for databases. While most of those are at least OK, the timeslice value of 1 rather than 10 is almost certainly a performance killer. Set it to 10 and leave it there.
01-02-2002 12:04 PM
Re: High IPC/Sem Wait %
I have been smitten, but Ann is absolutely correct in her assessment.
Thank you, Ann, for the correction; these were systems that I inherited not too long ago.
01-03-2002 07:09 AM
Re: High IPC/Sem Wait %
Rita - I actually use Informix, but point taken, and thanks for the kernel parameter link.
Harry - Thanks; these values (taken with Rita's material) are too high for me, but a nice starting point.
Frank - I'm not going to use all your values, but this is great as it puts in some formulas and relative values.
Ann - Thank you. I read Doug Gruman's Performance Tuning cookbook and he reckons it is a bad idea to fiddle with the timeslice too.
Just out of interest
nproc 2048
semmns 4096
semmni (SEMMNS) [==> 4096]
semmap (SEMMNI+2) [==> 4098]
semmnu (NPROC-4) [==> 2044]
semume 128
semaem 32768
semvmx (SEMAEM) [==> 32768]
Cheers
Tim
01-04-2002 06:32 AM
Re: High IPC/Sem Wait %
It even made some sense to me. Being able to schedule jobs 100 times a second instead of only 10, as these values seem to suggest, should give finer, more equitable CPU sharing, unless of course the scheduling itself involves high overhead, which didn't seem likely because in 1/100th of a second a good processor can execute 4-5 million instructions. Does a significant part of that go to the scheduling operation itself?
I hope someone from HP comes forward and clears up the confusion. Where I implemented it, nobody seemed to notice any appreciable difference. And then Ann comes along and absolutely deprecates the practice. What's the clue, Ann?
Why is rescheduling 100 times a second a performance killer?